<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>IEEE Spectrum</title><link>https://spectrum.ieee.org/</link><description>IEEE Spectrum</description><atom:link href="https://spectrum.ieee.org/feeds/feed.rss" rel="self"></atom:link><language>en-us</language><lastBuildDate>Fri, 06 Mar 2026 16:00:05 -0000</lastBuildDate><image><url>https://spectrum.ieee.org/media-library/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNjg4NDUyMC9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTgyNjE0MzQzOX0.N7fHdky-KEYicEarB5Y-YGrry7baoW61oxUszI23GV4/image.png?width=210</url><link>https://spectrum.ieee.org/</link><title>IEEE Spectrum</title></image><item><title>Video Friday: A Robot Hand With Artificial Muscles and Tendons</title><link>https://spectrum.ieee.org/video-friday-robot-hand-artificial-muscles</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/robotic-hand-grasping-a-red-bull-can-against-a-dark-background.png?id=65162441&width=2000&height=1500&coordinates=113%2C0%2C113%2C0"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://2026.ieee-icra.org/">ICRA 2026</a>: 1–5 June 2026, VIENNA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="hd1hdfw1bhy"><em>The functional replication and actuation of complex structures inspired by nature is a longstanding goal for humanity. Creating such complex structures combining soft and rigid features and actuating them with artificial muscles would further our understanding of natural kinematic structures. 
We printed a biomimetic hand in a single print process comprised of a rigid skeleton, soft joint capsules, tendons, and printed touch sensors.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="d1520429687b7c6ef41cd204b2161ddc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/hD1HDFw1BhY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://ieeexplore.ieee.org/abstract/document/10522043">Paper</a> ] via [ <a href="https://srl.ethz.ch/">SRL</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="u18ehtnvfd4">Two <a href="https://spectrum.ieee.org/tag/boston-dynamics" target="_blank">Boston Dynamics</a> product managers talk about their favorite classic BD robots, and then I talk about mine.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ee14e75b8b4fac354bdb72fef9eb1549" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/U18EHTnvFd4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>And this is Boston Dynamics’ LittleDog, doing legged locomotion research 16 or so years ago in what I’m pretty sure is Katie Byl’s lab at UCSB.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="27626ccc6010288122cf616a0f35aa3d" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/AdWpo43b2FI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a 
href="https://bostondynamics.com/about/history/">Boston Dynamics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="gocorcrlgb4"><em>This is our latest work on the trajectory planning method for floating-based articulated robots, enabling the global path searching in complex and cluttered environments.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="f2d2afeed034c4c40136e41360360951" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/GOcorcrLGb4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.dragon.t.u-tokyo.ac.jp/">DRAGON Lab</a> ]</p><p>Thanks, Moju!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="i2jmf_z9ts8"><em>OmniPlanner is a unified solution for exploration and inspection path planning (as well as target reach) across aerial, ground, and underwater robots. 
It has been verified through extensive simulations and a multitude of field tests, including in underground mines, ballast water tanks, forests, university buildings, and submarine bunkers.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="fcaa6b98fc3995a5010528eb89bb8f14" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/I2JMF_Z9tS8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.autonomousrobotslab.com/">NTNU</a> ]</p><p>Thanks, Kostas!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="a_hwcpqbbly"><em>In the ARISE project, the <a href="https://www.fzi.de/en/" target="_blank">FZI Research Center for Information Technology</a> and its international partners ETH Zurich, University of Zurich, University of Bern, and University of Basel took a major step toward future lunar missions by testing cooperative autonomous multi-robot teams under outdoor conditions.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="17b9634bb780c7e02ba8230822684990" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/a_hwCPQbBlY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.fzi.de/en/2025/02/26/one-step-closer-to-the-moon-through-international-cooperation/">FZI</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="dmbjbwhwyeu">Welcome to the future, where there are no other humans.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="05f12866fdd4c32a9372563b0d407f5d" 
style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/DmbJbwhWYEU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.zj-humanoid.com/">Zhejiang Humanoid</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="8oot8cnpai0"><em>This is our latest work on robotic fish, and is also the first underwater robot of DRAGON Lab. </em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="2e719c55aa3bd82ab9f1c1123ecfe88f" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/8oot8CnpAi0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.dragon.t.u-tokyo.ac.jp/">DRAGON Lab</a> ]</p><p>Thanks, Moju!</p><div class="horizontal-rule"></div><p class="rm-anchors" id="awrnl8rcbmk">Watch this one simple trick to make <a href="https://spectrum.ieee.org/topic/robotics/humanoid-robots/" target="_blank">humanoid robots</a> cheaper and safer!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="e69112d5d83fdcd0226a652b2b7cb898" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/AwRnL8rcBmk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.zj-humanoid.com/">Zhejiang Humanoid</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="90twy79yffo">‘Gugusse and the Automaton’ is an 1897 French film by <a href="https://en.wikipedia.org/wiki/Georges_M%C3%A9li%C3%A8s" 
target="_blank">Georges Méliès</a> featuring a humanoid robot nearly as realistically as some of the humanoid promo videos we’ve seen lately.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="4d2d9a469b74b0b57aa6d34c9859e471" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/90tWY79YfFo?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.loc.gov/item/2026125501/?loclr=blogloc">Library of Congress</a> ] via [ <a href="https://gizmodo.com/first-film-to-depict-a-robot-discovered-in-michigan-2000727995">Gizmodo</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="lm3htxushva"><em>At Agility, we create automated solutions for the hardest work. We’re incredibly proud of how far we’ve come, and can’t wait to show you what’s next.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="007204bb742016f199f77925109d19ef" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/LM3hTXUShvA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.agilityrobotics.com/">Agility</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="si-jhnqcjt0"><a href="https://www.nist.gov/people/kamel-s-saidi" target="_blank">Kamel Saidi</a>, Robotics Program Manager at the <a href="https://www.nist.gov/" target="_blank">National Institute of Standards and Technology (NIST)</a>, speaks on “How Performance Standards Can Pave the Way for Humanoid Adoption.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" 
data-rm-shortcode-id="4484b448f54ec06f10f3985953b03c9b" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/sI-jhnqcJt0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://humanoidssummit.com/">Humanoids Summit</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="gwsl1oh1i4w"><em><a href="https://people.eecs.berkeley.edu/~anca/" target="_blank">Anca Dragan</a> is no stranger to Waymo. She worked with us for six years while also at UC Berkeley and now, Google DeepMind. Her focus on making AI safer helped Waymo as it launched commercially. In this final episode of our season, Anca describes how her work enables AI agents to work fluently with people, based on human goals and values.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bafb48d4c6dec0871809a152ad842b8e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/GwSl1OH1i4w?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.youtube.com/playlist?list=PLCkt0hth826G9AtnOrQsPbKKD5JmdaMXb">Waymo Podcast</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="r9ugdinfhbm">This <a href="https://www.grasp.upenn.edu/" target="_blank">UPenn GRASP</a> SFI Seminar is by Junyao Shi, on “Unlocking Generalist Robots with Human Data and Foundation Models.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="c1e2f8d6dc1171693ed8ee0180f30e9d" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" 
src="https://www.youtube.com/embed/r9UGdInfhBM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><blockquote><em>Building general-purpose robots remains fundamentally constrained by data scarcity and labor-intensive engineering. Unlike vision and language, robotics lacks large, diverse datasets spanning tasks, environments, and embodiments, limiting both scalability and generalization. This talk explores how human data and foundation models trained at scale can help overcome these bottlenecks.</em></blockquote><p>[ <a href="https://www.grasp.upenn.edu/events/spring-2026-grasp-sfi-junyao-shi/">UPenn</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 06 Mar 2026 16:00:05 +0000</pubDate><guid>https://spectrum.ieee.org/video-friday-robot-hand-artificial-muscles</guid><category>Humanoid-robots</category><category>Video-friday</category><category>Underwater-robots</category><category>Bipedal-robots</category><category>Robot-videos</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/robotic-hand-grasping-a-red-bull-can-against-a-dark-background.png?id=65162441&amp;width=980"></media:content></item><item><title>The Millisecond That Could Change Cancer Treatment</title><link>https://spectrum.ieee.org/flash-radiotherapy</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/photo-of-a-man-in-a-lab-coat-adjusting-a-large-piece-of-medical-equipment-thats-pointed-at-the-head-of-a-partial-mannequin.jpg?id=65111419&width=2000&height=1500&coordinates=0%2C11%2C0%2C11"/><br/><br/><p><strong>Inside a cavernous hall</strong> at the Swiss-French border, the air hums with high voltage and possibility. From his perch on the wraparound observation deck, physicist <a href="https://www.researchgate.net/profile/Walter-Wuensch" rel="noopener noreferrer" target="_blank">Walter Wuensch</a> surveys a multimillion-dollar array of accelerating cavities, klystrons, modulators, and pulse compressors—hardware being readied to drive a new generation of linear particle accelerators.</p><p>Wuensch has spent decades working with these machines to crack the deepest mysteries of the universe. Now he and his colleagues are aiming at a new target: cancer. Here at <a href="https://home.cern/" rel="noopener noreferrer" target="_blank">CERN</a> (the European Organization for Nuclear Research) and other particle-physics labs, scientists and engineers are applying the tools of fundamental physics to develop a technique called FLASH radiotherapy that offers a radical and counterintuitive vision for treating the disease.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Photo of a white-haired man standing next to floor-to-ceiling experimental equipment with many tubes and wires. 
" class="rm-shortcode" data-rm-shortcode-id="ce95648ce39bd5c09f73bddf6af75766" data-rm-shortcode-name="rebelmouse-image" id="f8147" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-a-white-haired-man-standing-next-to-floor-to-ceiling-experimental-equipment-with-many-tubes-and-wires.jpg?id=65111429&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">CERN researcher Walter Wuensch says the particle physics lab’s work on FLASH radiotherapy is “generating a lot of excitement.”</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">CERN</small></p><p>Radiation therapy has been a cornerstone of cancer treatment since shortly after <a href="https://medicalmuseum.health.mil/index.cfm/visit/exhibits/virtual/xraydiscovery/index" target="_blank">Wilhelm Conrad Röntgen</a> discovered X-rays in 1895. Today, more than half of all cancer patients receive it as part of their care, typically in relatively low doses of X-rays delivered over dozens of sessions. Although this approach often kills the tumor, it also wreaks havoc on nearby healthy tissue. Even with modern precision targeting, the potential for collateral damage limits how much radiation doctors can safely deliver.</p><p>FLASH radiotherapy flips the conventional approach on its head, delivering a single dose of ultrahigh-power radiation in a burst that typically lasts less than one-tenth of a second. In study after study, this technique causes significantly less injury to normal tissue than conventional radiation does, without compromising its antitumor effect.</p><p>At CERN, which I visited last July, the approach is being tested and refined on accelerators that were never intended for medicine. 
If ongoing experiments here and around the world continue to bear out results, FLASH could transform radiotherapy—delivering stronger treatments, fewer side effects, and broader access to lifesaving care.</p><p>“It’s generating a lot of excitement,” says Wuensch, a researcher at CERN’s Linear Electron Accelerator for Research (CLEAR) facility. “We accelerator people are thinking, Oh, wow, here’s an application of our technology that has a societal impact which is more immediate than most high-energy physics.”</p><h2>The Unlikely Birth of FLASH Therapy</h2><p>The breakthrough that led to FLASH emerged from a line of experiments that began in the 1990s at <a href="https://institut-curie.org/" target="_blank">Institut Curie</a> in Orsay, near Paris. Researcher <a href="https://institut-curie.org/person/vincent-favaudon" target="_blank">Vincent Favaudon</a> was using a low-energy electron accelerator to study radiation chemistry. Targeting the accelerator at mouse lungs, Favaudon expected the radiation to produce scar tissue, or fibrosis. But when he exposed the lungs to ultrafast blasts of radiation, at doses a thousand times as high as what’s used in conventional radiation therapy, the expected fibrosis never appeared.</p><p>Puzzled, Favaudon turned to <a href="https://scholar.google.com/citations?user=xx8VQkMAAAAJ&hl=fr" target="_blank">Marie-Catherine Vozenin</a>, a radiation biologist at Curie who specialized in radiation-induced fibrosis. “When I looked at the slides, there was indeed no fibrosis, which was very, very surprising for this type of dose,” recalls Vozenin, who now works at <a href="https://www.hug.ch/en" target="_blank">Geneva University Hospitals</a>, in Switzerland.</p><h3>How to Measure Radiation Doses</h3><br/><p>Radiation therapy uses a variety of units to refer to the amount of energy received by the patient. 
Here are the main ones under the International System of Units, or SI.</p><p><strong>Gray (Gy):</strong> A measure of the absorbed dose—that is, how much radiation energy is absorbed by the body. One gray equals 1 joule of radiation energy per kilogram of matter. FLASH delivers a single dose of 40 Gy or more in a fraction of a second. Conventional radiation therapy, by contrast, may deliver a total dose of 40 to 80 Gy but over the course of several weeks.</p><p><strong>Sievert (Sv):</strong> A measure of the effective dose—that is, the health effects of the radiation, with different types of ionizing radiation (gamma rays, X-rays, alpha particles, and so on) having different effects. One sievert equals 1 joule per kilogram weighted for the biological effectiveness of the radiation and the tissues exposed.</p><h3></h3><br/><p>The pair expanded the experiments to include cancerous tumors. The results upended a long-held trade-off of radiotherapy: the idea that you can’t destroy a tumor without also damaging the host. “This differential effect is really what we want in radiation oncology, not damaging normal tissue but killing the tumors,” Vozenin says.</p><p>They repeated the protocol across different types of tissue and tumors. By 2014, they had gathered enough evidence to publish their findings in <a href="https://www.science.org/doi/10.1126/scitranslmed.3008973" target="_blank"><em>Science Translational Medicine</em></a>. Their experiments confirmed that delivering an ultrahigh dose of 10 gray or more in less than a tenth of a second could eradicate tumors in mice while leaving surrounding healthy tissue virtually unharmed. For comparison, a typical chest X-ray delivers about 0.1 milligray, while a session of conventional radiation therapy might deliver a total of about 2 gray per day. 
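The gap between those two regimes is easiest to see as a dose rate. A back-of-envelope sketch, using the doses quoted above (the one-minute beam-on time assumed here for a conventional session is illustrative, not a clinical specification):

```python
# Rough dose-rate comparison. Doses are from the article;
# the conventional beam-on time is an assumed, illustrative value.

flash_dose_gy = 40.0   # single FLASH dose (Gy); 1 Gy = 1 J/kg
flash_time_s = 0.1     # delivered in under a tenth of a second

conv_dose_gy = 2.0     # one conventional daily session (Gy)
conv_time_s = 60.0     # assumed ~1 minute of beam-on time

flash_rate = flash_dose_gy / flash_time_s   # Gy per second
conv_rate = conv_dose_gy / conv_time_s      # Gy per second

print(f"FLASH: {flash_rate:.0f} Gy/s; conventional: {conv_rate:.3f} Gy/s")
print(f"Rate ratio: roughly {flash_rate / conv_rate:,.0f}x")
```

Even with a generous estimate for the conventional session, FLASH delivers its dose on the order of ten thousand times as fast, and it is this extreme rate, not the total dose, that appears to drive the tissue-sparing effect.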
(The authors called the effect “FLASH” because of the quick, high doses involved, but it’s not an acronym.)</p><h3></h3><br/><img alt="Three sets of images comparing highly magnified tissue samples." class="rm-shortcode" data-rm-shortcode-id="00fc1edc5ddb29e98aa8bb4755930278" data-rm-shortcode-name="rebelmouse-image" id="6ce44" loading="lazy" src="https://spectrum.ieee.org/media-library/three-sets-of-images-comparing-highly-magnified-tissue-samples.jpg?id=65111609&width=980"/><h3></h3><br/><p>Many cancer experts were skeptical. The FLASH effect seemed almost too good to be true. “It didn’t get a lot of traction at first,” recalls <a href="https://med.stanford.edu/profiles/Billy_Loo" target="_blank">Billy Loo</a>, a Stanford radiation oncologist specializing in lung cancer. “They described a phenomenon that ran counter to decades of established radiobiology dogma.”</p><p>But in the years since then, researchers have observed the effect across a wide range of tumor types and animals—beyond mice to zebra fish, fruit flies, and even a few human subjects, with the same protective effect in the brain, lungs, skin, muscle, heart, and bone.</p><p>Why this happens remains a mystery. “We have investigated a lot of hypotheses, and all of them have been wrong,” says Vozenin. Currently, the most plausible theory emerging from her team’s research points to metabolism: Healthy and cancerous cells may process reactive oxygen species—unstable oxygen-containing molecules generated during radiation—in very different ways.</p><h2>Adapting Accelerators for FLASH</h2><p>At the time of the first FLASH publication, Loo and his team at Stanford were also focused on dramatically speeding up radiation delivery. But Loo wasn’t chasing a radiobiological breakthrough. He was trying to solve a different problem: motion.</p><p>“The tumors that we treat are always moving targets,” he says. 
“That’s particularly true in the lung, where because of breathing motion, the tumors are constantly moving.”</p><p>To bring FLASH therapy out of the lab and into clinical use, researchers like Vozenin and Loo needed machines capable of delivering fast, high doses with pinpoint precision deep inside the body. Most early studies relied on low-energy electron beams like Favaudon’s 4.5-megaelectron-volt Kinetron—sufficient for surface tumors, but unable to reach more than a few centimeters into a human body. Treating deep-seated cancers in the lung, brain, or abdomen would require far higher particle energies.</p><h3></h3><br/><img alt="Photo of floor-to-ceiling electromagnetic hardware with many tubes and pipes, some of which is copper-colored." class="rm-shortcode" data-rm-shortcode-id="3b3bd74be1a8bc555eb51aa843114f06" data-rm-shortcode-name="rebelmouse-image" id="39797" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-floor-to-ceiling-electromagnetic-hardware-with-many-tubes-and-pipes-some-of-which-is-copper-colored.jpg?id=65111435&width=980"/><h3></h3><br/><p>They also needed an alternative to conventional X-rays. In a clinical linac, X-ray photons are produced by dumping high-energy electrons into a bremsstrahlung target, which is made of a material with a high atomic number, like tungsten or copper. The target slows the electrons, converting their kinetic energy into X-ray photons. It’s an inherently inefficient process that wastes most of the beam power as heat and makes it extremely difficult to reach the ultrahigh dose rates required for FLASH. High-energy electrons, by contrast, can be switched on and off within milliseconds. And because they have a charge and can be steered by magnets, electrons can be precisely guided to reach tumors deep within the body. 
(Researchers are also investigating protons and carbon ions; see the sidebar, “What’s the Best Particle for FLASH Therapy?”)</p><p>Loo turned to the <a href="https://www6.slac.stanford.edu/" target="_blank">SLAC National Accelerator Laboratory</a> in Menlo Park, Calif., where physicist <a href="https://profiles.stanford.edu/sami-tantawi" rel="noopener noreferrer" target="_blank">Sami Gamal-Eldin Tantawi</a> was redefining how electromagnetic waves move through linear accelerators. Tantawi’s findings allowed scientists to precisely control how energy is delivered to particles—paving the way for compact, efficient, and finely tunable machines. It was exactly the kind of technology FLASH therapy would need to target tumors deep inside the body.</p><p>Meanwhile, Vozenin and other European researchers turned to CERN, best known for its 27-kilometer Large Hadron Collider (LHC) and the 2012 discovery of the Higgs boson, the “God particle” that gives other particles their mass. </p><p class="ieee-inbody-related">RELATED: <a href="https://spectrum.ieee.org/particle-physics-ai" target="_blank">AI Hunts for the Next Big Thing in Physics</a></p><p>CERN is also home to a range of smaller linear accelerators—including CLEAR, where Wuensch and his team are adapting high-energy physics tools for medicine.</p><h3>What’s the Best Particle for FLASH Therapy?</h3><br/><p>Even as research on FLASH radiotherapy advances, a central question remains: What kind of particle will deliver it best? The main contenders are electrons, protons, and carbon ions. Each has distinct advantages, limitations, and implications for cost, complexity, and clinical reach.</p><p><strong>Electrons</strong>—long used to treat surface tumors and to generate X-rays—are light, nimble particles, far easier to control than protons or carbon ions. At low energies, they stop quickly in tissue, but new high-energy systems can drive electrons deeper. 
Now researchers are working on machines that combine multiple high-energy beams at different angles to let doctors sculpt radiation doses that match the tumor’s shape.</p><p>That principle underpins Billy Loo’s PHASER (Pluridirectional High-energy Agile Scanning Electron Radiotherapy) system, developed at Stanford and SLAC and licensed to a startup called <a href="https://www.tibaray.com/" target="_blank">TibaRay</a>. An array of high-efficiency linacs generates X-ray beams from many directions at once. Their high output overcomes the inefficiency of electron-to-photon conversion to deliver the dose at FLASH speed. Beam convergence at the tumor and electronic shaping conform the dose in three dimensions, producing uniform coverage with relatively simple infrastructure. </p><p><strong>Protons</strong> have led the way in early clinical trials, largely because existing proton therapy centers can be adapted to deliver FLASH doses. In 2020, the University of Cincinnati Health launched the <a href="https://www.uchealth.com/en/media-room/articles/ground-breaking-cancer-research-is-in-your-backyard" rel="noopener noreferrer" target="_blank">first human FLASH trial</a> to use proton beams, to treat cancer that had metastasized to bones. “If I want to be pragmatic, the proton beam is ready to go, so let’s move with what we have,” says Geneva University Hospitals’ Marie-Catherine Vozenin.</p><p>Protons can penetrate up to 30 centimeters, reaching deep-seated tumors. But the delivery of protons in a continuous beam limits the dose rates. Also, proton systems are far larger and more expensive than, say, X-ray machines, which will likely constrain their availability to specialized centers.</p><p><strong>Carbon ions</strong>, used in a handful of elite facilities, offer even higher precision and biological effectiveness compared to electrons and protons. Their Bragg peak—a sudden deposition of energy at a specific depth—makes them appealing for deep or complex tumors. 
But that unmatched precision comes at a steep price, with each facility costing upward of US $300 million. —T.C.</p><h3></h3><br/><p>Unlike the LHC, which loops particles around a massive ring to build up energy before smashing them together, linear accelerators like CLEAR send particles along a straight, one-time path. That setup allows for greater precision and compactness, making it ideal for applications like FLASH.</p><p>At the heart of the CLEAR facility, Wuensch points out the 200-MeV linear accelerator with its 20-meter beamline. This is “a playground of creativity,” he says, for the physicists and engineers who arrive from all over the world to run experiments.</p><p>The process begins when a laser pulse hits a photocathode, releasing a burst of electrons that form the initial beam. These electrons travel through a series of precisely machined copper cavities, where high-frequency microwaves push them forward. The electrons then move through a network of magnets, monitors, and focusing elements that shape and steer them toward the experimental target with submillimeter precision.</p><p>Instead of a continuous stream, the electron beam is divided into nanosecond-long bunches—billions of electrons riding the radio-frequency field like surfers. Inside the accelerator’s cavities, the field flips polarity 12 billion times per second, so timing is everything: Only electrons that arrive perfectly in phase with the accelerating wave will gain energy. That process repeats through a chain of cavities, each giving the bunches another push, until the beam reaches its final energy of 200 MeV.</p><h3></h3><br/><img alt="Close-up photo of an etched copper disc being held under a microscope by a gloved hand." 
class="rm-shortcode" data-rm-shortcode-id="9cbcce34df51565a0cd0cea335517027" data-rm-shortcode-name="rebelmouse-image" id="6eeba" loading="lazy" src="https://spectrum.ieee.org/media-library/close-up-photo-of-an-etched-copper-disc-being-held-under-a-microscope-by-a-gloved-hand.jpg?id=65111478&width=980"/><p><span>Much of this architecture draws directly from the </span><a href="https://clic-study.org/" target="_blank">Compact Linear Collider study</a><span>, a decades-long CERN project aimed at building a next-generation collider. The proposed CLIC machine would stretch 11 kilometers and collide electrons and positrons at 380 gigaelectron volts. To do that in a linear configuration—without the multiple passes around a ring like the LHC—CERN engineers have had to push for extremely high acceleration gradients to boost the electrons to high energies over relatively short distances—up to 100 megavolts per meter.</span></p><p>Wuensch leads me to a large experimental hall housing prototype structures from the CLIC effort, and points out the microwave devices that now help drive FLASH research. Though the future of CLIC as a collider remains uncertain, its infrastructure is already yielding dividends: smaller, high-gradient accelerators that may one day be as suited for curing cancer as they are for smashing particles.</p><p class="ieee-inbody-related">RELATED: <a href="https://spectrum.ieee.org/supercolliders" target="_blank">Four Ways Engineers Are Trying to Break Physics</a></p><p>The power behind the high gradients comes from <a href="https://aries.web.cern.ch/xbox" target="_blank">CERN’s Xboxes</a>, the X-band RF systems that dominate the experimental hall. Each Xbox houses a klystron, modulator, pulse compressor, and waveguide network to generate and shape the microwave pulses. 
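The phase-timing constraint described earlier is easier to appreciate with numbers. A minimal sketch, assuming the 12-GHz X-band frequency quoted above (the ±5-degree phase tolerance is a hypothetical figure for illustration, not a published CLEAR specification):

```python
# Timing budget for keeping electron bunches in phase with the
# accelerating field. The 12-GHz figure is from the article; the
# +/-5 degree tolerance is an assumed, illustrative value.

rf_frequency_hz = 12e9                # X-band RF frequency
rf_period_s = 1.0 / rf_frequency_hz   # one full cycle, in seconds

phase_tolerance_deg = 5.0             # assumed acceptable phase error
arrival_window_s = rf_period_s * (phase_tolerance_deg / 360.0)

print(f"RF period: {rf_period_s * 1e12:.1f} ps")
print(f"Arrival window at ±{phase_tolerance_deg:.0f}°: {arrival_window_s * 1e12:.2f} ps")
```

Under these assumptions a bunch has a window of only about a picosecond to catch the accelerating wave, which is why synchronization hardware matters as much as raw RF power.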
The pulse compressors store energy in resonant cavities and then release it in a microsecond burst, producing peaks of up to 200 megawatts; if it were continuous, that would be enough to power at least 40,000 homes. The Xboxes let researchers fine-tune the power, timing, and pulse shape.</p><p>According to Wuensch, many of the recent accelerator developments were enabled by advances in computer simulation and high-precision three-dimensional machining. These tools allow the team to iterate quickly, designing new accelerator components and improving beam control with each generation.</p><p>Still, real-world challenges remain. The power demands are formidable, as are the space requirements; for all the talk of its “compact” design, the original CLIC was meant to span kilometers. Obviously, a hospital needs something that’s actually compact.</p><p>“A big challenge of the project,” says Wuensch, “is to transform this kind of technology and these kinds of components into something that you can imagine installing in a hospital, and it will run every day reliably.”</p><p>To that end, CERN researchers have teamed up with the <a href="https://www.lausanneuniversityhospital.com/home" target="_blank">Lausanne University Hospital</a> (known by its French acronym, CHUV) and the French medical technology company <a href="https://www.theryq-alcen.com/" target="_blank">Theryq</a> to design a hospital facility capable of treating large and deep-seated tumors with the very short time scales needed for FLASH and scaled down to fit in a clinical setting.</p><h2>Theryq’s Approach to FLASH</h2><p>Theryq’s research center and factory are located in southern France, near the base of Montagne Sainte-Victoire, a jagged spine of limestone that Paul Cézanne painted dozens of times, capturing its shifting light and form.</p><p>“The solution that we are trying to develop here is something which is extremely versatile,” says <a 
href="https://www.linkedin.com/in/ludovic-le-meunier-7084382?originalSubdomain=fr" target="_blank">Ludovic Le Meunier</a>, CEO of the expanding company. “The ultimate goal is to be able to treat any solid tumor anywhere in the body, which is about 90 percent of the cancer these days.”</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Futuristic scientific equipment setup, featuring streamlined machinery and intricate components." class="rm-shortcode" data-rm-shortcode-id="91c6f9815a719ce2a415181d8352df23" data-rm-shortcode-name="rebelmouse-image" id="5b999" loading="lazy" src="https://spectrum.ieee.org/media-library/futuristic-scientific-equipment-setup-featuring-streamlined-machinery-and-intricate-components.jpg?id=65111601&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Theryq’s FLASHDEEP system, under development with CERN and the company’s clinical partners, has a 13.5-meter-long, 140-MeV linear accelerator. That’s strong enough to treat tumors at depths of up to about 20 centimeters in the body. The patient will remain in a supported standing position during the split-second irradiation.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">THERYQ</small></p><p>Theryq’s push to bring FLASH radiotherapy from the lab to clinic has followed a three-pronged rollout, with each device engineered for a specific depth and clinical use. The first machine, <a href="https://www.theryq-alcen.com/flash-radiotherapy-products/flashknife/" target="_blank">FLASHKNiFE</a>, was unveiled in 2020. Designed for superficial tumors and intraoperative use, the system delivers electron beams at 6 or 9 MeV. 
A prototype installed that same year at CHUV is conducting a phase-two trial for patients with localized skin cancer.</p><p>More recently, Theryq launched <a href="https://www.theryq-alcen.com/flash-radiotherapy-products/flashlab/" target="_blank">FLASHLAB</a>, a compact, 7-MeV platform for radiobiology research.</p><p>The company’s most ambitious system, <a href="https://www.theryq-alcen.com/flash-radiotherapy-products/flashdeep/" target="_blank">FLASHDEEP</a>, is still under development. The 13.5-meter-long electron source will deliver very high-energy electrons of as much as 140 MeV up to 20 centimeters inside the body in less than 100 milliseconds. An integrated CT scanner, built into a patient-positioning system developed by <a href="https://leocancercare.com/" target="_blank">Leo Cancer Care</a>, captures images that stream directly into the treatment-planning software, enabling precise calculation of the radiation dose. “Before we actually trigger the beam or the treatment, we make stereo images to verify at the very last second that the tumor is exactly where it should be,” says Theryq technical manager <a href="https://www.linkedin.com/in/philippe-liger-977a3316?originalSubdomain=fr" target="_blank">Philippe Liger</a>.</p><h2>FLASH Therapy Moves to Animal Tests</h2><p>While CERN’s CLEAR accelerator has been instrumental in characterizing FLASH parameters, researchers seeking to study FLASH in living organisms must look elsewhere: CERN doesn’t allow animal experiments on-site. That’s one reason why a growing number of scientists are turning to PITZ, the Photo Injector Test Facility in Zeuthen, a leafy lakeside suburb of Berlin.</p><p>PITZ is part of Germany’s national accelerator lab and is responsible for developing the electron source for the <a href="https://www.xfel.eu/" target="_blank">European X-ray Free-Electron Laser</a>. 
Now PITZ is emerging as a hub for FLASH research, with an unusually tunable accelerator and a dedicated biomedical lab to ensure controlled conditions for preclinical studies.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="A photo showing a row of experimental electronic equipment on racks" class="rm-shortcode" data-rm-shortcode-id="b3c62ff858a14ceb04a3a4549f85d68a" data-rm-shortcode-name="rebelmouse-image" id="cfbfe" loading="lazy" src="https://spectrum.ieee.org/media-library/a-photo-showing-a-row-of-experimental-electronic-equipment-on-racks.jpg?id=65111551&width=980"/></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="A photo of a closeup of a gloved hand holding a sample of a purple liquid above a piece of equipment." 
class="rm-shortcode" data-rm-shortcode-id="e4f204a1631b000ef17c7be15995ef83" data-rm-shortcode-name="rebelmouse-image" id="82e52" loading="lazy" src="https://spectrum.ieee.org/media-library/a-photo-of-a-closeup-of-a-gloved-hand-holding-a-sample-of-a-purple-liquid-above-a-piece-of-equipment.jpg?id=65111525&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">At Germany’s Photo Injector Test Facility in Zeuthen (PITZ), the electron-beam accelerator [top] is used to irradiate biological targets in early-stage animal tests of FLASH radiotherapy [bottom].</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Top: Frieder Mueller; Bottom: MWFK</small></p><p>“The biggest advantage of our facility is that we can do a very stepwise, very defined and systematic study of dose rates,” says <a href="https://www.linkedin.com/in/anna-grebinyk-186a8245?originalSubdomain=de" target="_blank">Anna Grebinyk</a>, a biochemist who heads the new biomedical lab, “and systematically optimize the FLASH effect to see where it gets the best properties.”</p><p>The experiments begin with zebra-fish embryos, prized for early-stage studies because they’re transparent and develop rapidly. After the embryos, researchers test the most promising parameters in mice. To do that, the PITZ team uses a small-animal radiation research platform, complete with CT imaging and a robotic positioning system adapted from CERN’s CLEAR facility.</p><p>What sets PITZ apart is the flexibility of its beamline. The 30-meter accelerator system steers electrons with micrometer precision, producing electron bunches with exceptional brightness and emittance—a metric of beam quality. “We can dial in any distribution of bunches we want,” says Frank Stephan, group leader at PITZ. “That gives us tremendous control over time structure.”</p><p>Timing matters. 
At PITZ, the laser-struck photocathode generates electron bunches that are accelerated immediately, at up to 60 million volts per meter. A fast electromagnetic kicker system acts as a high-speed gatekeeper, selectively deflecting individual electron bunches from a high-repetition beam and steering them according to researchers’ needs. This precise, bunch-by-bunch control is essential for fine-tuning beam properties for FLASH experiments and other radiation therapy studies.</p><p>“The idea is to make the complete treatment within one millisecond,” says Stephan. “But of course, you have to [trust] that within this millisecond, everything works fine. There is not a chance to stop [during] this millisecond. It has to work.”</p><p>Regulating the dose remains one of the biggest technical hurdles in FLASH. The ionization chambers used in standard radiotherapy can’t respond accurately when dose rates spike hundreds of times higher in a matter of microseconds. So researchers are developing new detector systems to precisely measure these bursts and keep pace with the extreme speed of FLASH delivery.</p><h2>FLASH as a Research Tool</h2><p>Beyond its therapeutic potential, FLASH may also open new windows to illuminate cancer biology. “What is really, really superinteresting, in my opinion,” says Vozenin, “is that we can use FLASH as a tool to understand the difference between normal tissue and tumors. There must be something we’re not aware of that really distinguishes the two—and FLASH can help us find it.” Identifying those differences, she says, could lead to entirely new interventions, not just with radiation, but also with drugs.</p><p>Vozenin’s team is currently testing a hypothesis involving long-lived proteins present in healthy tissue but absent in tumors. 
If those proteins prove to be key, she says, “we’re going to find a way to manipulate them—and perhaps reverse the phenomenon, even [turn] a tumor back into a normal tissue.”</p><p>Proponents of FLASH believe it could help close the cancer care gap worldwide; in low-income countries, only about 10 percent of patients have access to radiotherapy, and in middle-income countries, only about 60 percent of patients do, according to the International Atomic Energy Agency. Because FLASH treatment can often be delivered in a single brief session, it could spare patients from traveling long distances for weeks of treatment and allow clinics to treat many more people.</p><p>High-income countries stand to benefit as well. Fewer sessions mean lower costs, less strain on radiotherapy facilities, and fewer side effects and disruptions for patients.</p><p>The big question now is, How long will it take? Researchers I spoke with estimate that FLASH could become a routine clinical option in about 10 years—after the completion of remaining preclinical studies and multiphase human trials, and as machines become more compact, affordable, and efficient. Much of the momentum comes from a growing field of startups competing to build devices, but the broader scientific community remains remarkably open and collaborative.</p><p>“Everyone has a relative who knows about cancer because of their own experience,” says Stephan. “My mother died of it. In the end, we want to do something good for mankind. 
That’s why people work together.” <span class="ieee-end-mark"></span></p><p><em>This article appears in the March 2026 print issue.</em></p>]]></description><pubDate>Fri, 06 Mar 2026 14:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/flash-radiotherapy</guid><category>Medical-technology</category><category>Cern</category><category>High-energy-physics</category><category>Linear-accelerator</category><category>Electron-beams</category><category>Cancer-treatments</category><dc:creator>Tom Clynes</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/photo-of-a-man-in-a-lab-coat-adjusting-a-large-piece-of-medical-equipment-thats-pointed-at-the-head-of-a-partial-mannequin.jpg?id=65111419&amp;width=980"></media:content></item><item><title>Scenario Modeling and Array Design for Non-Terrestrial Networks (NTNs)</title><link>https://content.knowledgehub.wiley.com/scenario-modeling-and-array-design-for-non-terrestrial-networks-ntns/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/mathworks-logo.png?id=26851519&width=980"/><br/><br/><p>Non-terrestrial networks (NTNs) using low earth orbit (LEO) satellites present unique technical challenges, from managing large satellite constellations to ensuring reliable communication links. In this webinar, we’ll explore how to address these complexities using comprehensive modeling and simulation techniques. Discover how to model and analyze satellite orbits, onboard antennas and arrays, transmitter power amplifiers (PAs), signal propagation channels, and the RF and digital receiver segments—all within an integrated workflow. Learn the importance of including every link component to achieve accurate, reliable system performance.</p><p><strong>Highlights include:</strong></p><ul><li><span>Modeling large satellite constellations<br/></span></li><li><span>Analyzing and visualizing time-varying visibility and link closure</span></li><li><span>Using graphical apps for antenna analysis and RF component design</span></li><li><span>Modeling PAs and digital predistortion</span></li><li><span>Simulating interference effects in communication links</span></li></ul><div><a href="https://content.knowledgehub.wiley.com/scenario-modeling-and-array-design-for-non-terrestrial-networks-ntns/" target="_blank">Register now for this free webinar!</a></div>]]></description><pubDate>Fri, 06 Mar 2026 11:00:03 +0000</pubDate><guid>https://content.knowledgehub.wiley.com/scenario-modeling-and-array-design-for-non-terrestrial-networks-ntns/</guid><category>Type-webinar</category><category>Nonterrestrial-networks</category><category>Satellites</category><category>Satellite-communications</category><dc:creator>MathWorks</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/26851519/origin.png"></media:content></item><item><title>From TV Repairman to Electromagnetic Compatibility 
Expert</title><link>https://spectrum.ieee.org/from-tv-repairman-to-electromagnetic-compatibility-expert</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-elderly-white-man-in-a-dress-shirt-and-glasses-smiling-in-his-at-home-workshop.jpg?id=65112143&width=2000&height=1500&coordinates=0%2C0%2C0%2C0"/><br/><br/><p>No one had very high career aspirations for teenager <a href="https://ieeexplore.ieee.org/author/38227767000" rel="noopener noreferrer" target="_blank">David A. Weston</a>—except for Weston himself. Growing up in London, he scored low on the U.K. national assessment test given to students finishing primary school. The result meant that his next path was either to become a laborer or attend a vocational school to learn a trade.</p><p>What Weston really wanted to do was to work as a radio and TV repairman. He was fascinated by how the devices worked. He had taught himself to build an AM radio when he was 15. Even after showing it to his parents and teachers, though, they still didn’t think he was smart enough to pursue his chosen career, he says.</p><h3>David A. Weston</h3><br/><p><strong>Employer </strong></p><p><strong></strong>EMC Consulting, in Arnprior, Ont., Canada</p><p><strong>Job title</strong></p><p>Retired consultant</p><p><strong>Member grade </strong></p><p>Life member</p><p><strong>Alma mater </strong></p><p><strong></strong>Croydon Technical College, London</p><h3></h3><br/><p>So, later that year, the underweight teen got a job on a construction site carrying heavy loads of building materials in a hod, a three-sided wooden trough. The experience convinced him he wasn’t cut out for manual labor.</p><p>He eventually earned a certificate in radio and television, the only credential he holds. The lack of academic degrees did not hold him back, though. He went on to become an expert in electromagnetic interference (EMI) and electromagnetic compatibility (EMC).</p><p>EMI is unwanted electromagnetic energy that disrupts the operation of electronic equipment. 
EMC is the ability of electronic devices to work correctly in a shared electromagnetic environment without causing interference to, or suffering interference from, nearby devices or signals.</p><p>After working for a number of companies, he launched his own business more than 40 years ago: <a href="https://www.emcconsultinginc.com/" target="_blank">EMC Consulting</a>, in Arnprior, Ont., Canada. The company has helped clients meet EMI and EMC regulatory requirements.</p><p>Now 83 years old and retired, the IEEE life member recently self-published his memoir, <a href="https://www.barnesandnoble.com/w/from-a-hod-to-an-odd-em-wave-david-weston/1148995654" rel="noopener noreferrer" target="_blank"><em>From a Hod to an Odd EM Wave</em></a>.</p><p>“My memoir is about engineering persistence and human and technical discoveries,” he says. “I wanted to interest a young person, or perhaps a person later in life, in a career in engineering. If I can show that engineering is a personal, human endeavor with exciting opportunities in different fields such as medical, scientific, and the arts, maybe more women would be attracted to it.”</p><h2>From repairing radios to designing underwater devices</h2><p>In 1960 Weston enrolled in the radio and electronics program at London’s Croydon Technical College (now <a href="https://croydon.ac.uk/" rel="noopener noreferrer" target="_blank">Croydon College</a>). The school covered topics from the <a href="https://www.cityandguilds.com/" rel="noopener noreferrer" target="_blank">City and Guilds of London Institute</a>’s radio and television certificate program. He attended classes one day a week for five years while working to put himself through school.</p><p>Although his parents and his teachers might not have recognized Weston’s potential, employers did.</p><p>He got his first job in 1960, fixing televisions in a small repair shop. Then he helped repair tape recorders. 
In his spare time, he studied transistors and semiconductors.</p><p>Everything he knows, he says, he learned by reading books and research papers, and from on-the-job training.</p><p>Later in 1960, he worked as a mechanical examiner for the U.K. <a href="https://en.wikipedia.org/wiki/Ministry_of_Aviation" rel="noopener noreferrer" target="_blank">Ministry of Aviation</a>, where he calibrated precision meters and potentiometers, which are variable resistors that monitor, control, and measure industrial equipment.</p><h3></h3><br/><p>“Engineering is creative. To have a new idea or design accepted is rewarding, satisfying, pleasurable, and even exciting.”</p><h3></h3><br/><p>He left the ministry in 1963 because he found the work boring, he says, and he was hired as a technician with the <a href="https://www.gov.uk/government/organisations/medical-research-council/about" target="_blank">Medical Research Council</a>’s neuropsychiatric research unit in Carshalton. The institution researches the biological causes of mental illness. His manager was interested in learning about advances in medical electronics and eagerly shared his knowledge with Weston.</p><p>One of Weston’s tasks was to build an electroencephalography (EEG) calibrator to measure responses from a patient’s brain activity. The methods used at the time to detect a brain tumor—before <a href="https://spectrum.ieee.org/mri-pioneer-to-receive-ieee-medal-for-innovations-in-healthcare-technology" target="_self">MRI machines</a> were developed—involved monitoring the patient’s speech and coordination, followed by taking a biopsy, which was not without danger, he says.</p><p>He used an ultrasonic transmitter and receiver to measure the time of transmission to the midline in the brain to determine whether the person had a tumor. If the midline had shifted, it would indicate the presence of a tumor, and a biopsy would be performed to confirm it. 
The measure of the evoked response in the brain was the only reliable indicator.</p><p>Weston earned his radio and TV certificate in 1965, leaving the research facility a year later to join Divcon (now part of <a href="https://www.oceaneering.com/" target="_blank">Oceaneering International</a>), a commercial diving company based in London that developed deep-sea helium diving helmets. Weston helped design a waterproof handheld communication device for divers that could withstand the high pressure in diving bells, the open-bottom pressurized chambers that transported them underwater.</p><p>Weston then moved to Hamburg, Germany, in 1969 to work for <a href="https://plath-corporation.de/en/" rel="noopener noreferrer" target="_blank">Plath</a>, an electronics manufacturer. He was tasked, along with other engineers from England, to design a servo control loop.</p><p>“Unfortunately it oscillated so badly when first being turned on that it shook itself to bits,” he says.</p><p>He left to work as a senior engineer at <a href="https://www.kistler.com/INT/en/kistler-acquisitions-win-win-for-all-concerned/C00000494" rel="noopener noreferrer" target="_blank">Dr. Staiger Mohilo and Co.</a> (now part of <a href="https://www.kistler.com/INT/en/" rel="noopener noreferrer" target="_blank">Kistler</a>), in Schorndorf, Germany. It manufactured torque sensors, force transducers, and specialized test stand systems. Weston designed a process control computer. 
He says his boss told him that the controller had to work in close proximity to—and from the same power source as—a nearby machine without interfering with it or suffering interference from it.</p><p>“I was thus introduced to the idea of electromagnetic compatibility,” he says.</p><p>After three years, he left to join the <a href="https://www.mobility.siemens.com/global/en.html" rel="noopener noreferrer" target="_blank">Siemens Mobility</a> train group in Braunschweig, Germany, where he helped develop an electronic train-crossing light controller. The original warning lights on crossing gates used a mercury tube as a switch.</p><p>“The concern was the danger to personnel if the tube broke,” he says. “The simple and inexpensive solution was to put the tube in a metal container.”</p><p>Weston and his wife decided to leave Germany for Canada in 1975, after their young son began forgetting how to speak English.</p><h2>Working on the space shuttle and a particle accelerator</h2><p>His first job in the country was as an engineer for <a href="https://www.cae.com/" rel="noopener noreferrer" target="_blank">Canadian Aviation Electronics</a> in Montreal. CAE helped design the remote manipulator system’s robotic hand controllers and the simulation systems used to train astronauts for the space shuttle.</p><p>The robotic arm, known as <a href="https://en.wikipedia.org/wiki/Canadarm" rel="noopener noreferrer" target="_blank">Canadarm</a>, was used to deploy, maneuver, and capture payloads for the astronauts. 
Weston’s engineering team designed the display and control panel as well as the hand controllers located in the shuttle’s flight deck.</p><p>“I was attracted to the EMC aspects of the project and avidly studied everything I could on the topic,” he says.</p><p>He also helped develop a system that would protect an aircraft’s deployable black box from lightning strikes.</p><p>“I used a computer program to analyze the EMI field at close proximity to the black box to predict the lightning current flowing into the aircraft structure,” he says.</p><p>While enjoying the warm winter weather during a 1975 visit to a supplier on Long Island, N.Y., he decided he wanted to move his family there and asked whether any companies in the area were hiring. He was told that <a href="https://www.bnl.gov/world/" rel="noopener noreferrer" target="_blank">Brookhaven National Laboratory</a>, in Upton, was, so he applied for a position working on the ring system for the <a href="https://en.wikipedia.org/wiki/ISABELLE" rel="noopener noreferrer" target="_blank">Isabelle proton colliding-beam particle accelerator</a>.</p><p>The project, later known as the <a href="https://spectrum.ieee.org/supercolliders" target="_self">colliding beam accelerator</a>, was a collaboration between the lab and the <a href="https://www.energy.gov/" rel="noopener noreferrer" target="_blank">U.S. Department of Energy</a>. The 200+200 giga-electron volt proton-proton collider was designed to use advanced superconducting magnets cooled by a massive helium refrigeration system to produce high-energy collisions. The GeV refers to the collision energy in a particle accelerator.</p><h3>Weston’s Advice for Budding Engineers</h3><br/><ul><li>Follow the field in which you are most interested.</li><li>Don’t be afraid to work in other countries; it can be a rewarding, enriching experience.</li><li>Question the results of measurements or analyses. If it doesn’t seem right, it probably isn’t. 
Look at a similar publication on the same topic for a good correlation. </li><li>Don’t be too shy to ask simple questions. That’s how we learn and grow.</li><li>Keep an open mind.</li></ul><p><span>The lab hired him in 1978, and the family moved to Long Island. After a few weeks of meeting with different departments, his boss asked him what kind of work he wanted to do. Weston told him about his idea for designing a device to detect a helium leak, should there ever be one. His machine would cover the entire 3,834-meter circumference area of the ring.</span></p><p>“The danger with increased helium-enriched air is that the oxygen level reduces until the person breathing becomes adversely affected,” he wrote in his memoir. “I found that the speed of the sound of helium increased enough to be detected, but not sufficient enough to cause a person trouble if they were in the tunnel.</p><p>“Brookhaven was considering machines that only covered a small area of the ring, but these would be unrealistic because too many machines would be needed, and the cost would have been astronomical.”</p><p>Weston’s system included an ultrasonic transmitter, a receiver, a power amplifier, and a preamplifier. It would sound an alarm if the helium content went above a certain level. People in the tunnel would be directed to go to the nearest oxygen-breathing equipment, put on a mask, and immediately evacuate. It was successfully tested.</p><p>Weston wrote a report detailing the ultrasonic helium leak detector, but shortly after, he and his wife had to return to Canada in 1978 because they were unable to get additional work permits in the United States.</p><p>When he returned to Brookhaven for a visit, his former boss told him the report was well-received. 
And he shared some news that upset Weston.</p><p>“My boss told me he took my report, changed the name on the report to his, did not mention me, and published the report as his,” Weston wrote in his memoir.</p><p>But the system was never built. The Isabelle project was canceled in July 1983 due to technical problems with fabricating the superconducting magnets.</p><p>Weston got a job working for <a href="https://satistar.com/portfolio/cal-corporation/" target="_blank">CAL Corp.</a>, an aerospace telecommunications company in Montreal. For the next 14 years, he fixed EMI problems for the company’s products, including its charge-coupled device-based space-qualified cameras, which were designed to be carried aboard a satellite.</p><p>In 1992 he realized that nearly all his work involved consulting for the company’s customers, so he decided to start his own agency. CAL generously let him take the clients he worked with, he says.</p><p>Weston then conducted EMI analysis and testing and designed EMC systems for companies around the world.</p><p>“I always had enough customers and have never had to look for work,” he says. “For me, having my own business was more secure than working for a company.”</p><p>He retired in 2022.</p><h2>IEEE as an educator</h2><p>To broaden his education, he joined IEEE in 1976 to get access to its research papers and attend its conferences, he says. He is a member of the <a href="https://www.emcs.org/" rel="noopener noreferrer" target="_blank">IEEE Electromagnetic Compatibility Society</a>.</p><p>Because he is self-educated, he was “keen to learn as much as possible by reading practical papers published by IEEE,” he says. “I met people at IEEE symposiums and listened to the authors presenting their papers.”</p><p>Those included EMC experts such as Life Fellows <a href="https://ieeexplore.ieee.org/search/searchresult.jsp?newsearch=true&queryText=L.O.%20Hoeft" rel="noopener noreferrer" target="_blank">Lothar O. 
“Bud” Hoeft</a>, <a href="https://ieeexplore.ieee.org/search/searchresult.jsp?newsearch=true&searchWithin=%22First%20Name%22:Richard&searchWithin=%22Last%20Name%22:Mohr" rel="noopener noreferrer" target="_blank">Richard J. Mohr</a>, and <a href="https://ieeexplore.ieee.org/search/searchresult.jsp?newsearch=true&queryText=Clayton%20R.%20Paul" rel="noopener noreferrer" target="_blank">Clayton R. Paul</a>, whose papers are published in the<a href="https://ieeexplore.ieee.org/Xplore/home.jsp" rel="noopener noreferrer" target="_blank"> IEEE Xplore Digital Library</a>. <a href="https://ieeexplore.ieee.org/search/searchresult.jsp?queryText=david%20weston&highlight=true&returnFacets=ALL&returnType=SEARCH&matchPubs=true&refinements=Author:David%20Weston" rel="noopener noreferrer" target="_blank">Several of Weston’s papers</a> are in the library as well.</p><p>His book <a href="https://www.emcconsultinginc.com/publications/" rel="noopener noreferrer" target="_blank"><em><em>Electromagnetic Compatibility: Methods, Analysis, Circuits, and Measurement</em></em></a> references many IEEE papers on data and analysis methods.</p><p>“Engineering is creative,” he says. 
“To have a new idea or design accepted is rewarding, satisfying, pleasurable, and even exciting.”</p>]]></description><pubDate>Thu, 05 Mar 2026 19:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/from-tv-repairman-to-electromagnetic-compatibility-expert</guid><category>Ieee-member-news</category><category>Type-ti</category><category>Careers</category><category>Emi</category><category>Emc</category><category>Electromagnetic-compatibility</category><dc:creator>Kathy Pretz</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-elderly-white-man-in-a-dress-shirt-and-glasses-smiling-in-his-at-home-workshop.jpg?id=65112143&amp;width=980"></media:content></item><item><title>This Student-Built EV Focuses on Repairability</title><link>https://spectrum.ieee.org/ev-battery-swapping-aria-ev</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/view-underneath-a-modular-electric-vehicle-s-hood-inside-a-simple-wire-frame-between-two-of-its-wheels-holds-an-ample-toolbox.jpg?id=65096797&width=2000&height=1500&coordinates=0%2C0%2C0%2C0"/><br/><br/><p><span>At first glance, the </span><a href="https://www.tue.nl/en/news-and-events/news-overview/25-11-2025-tue-students-build-modular-electric-city-car-you-can-repair-yourself" target="_blank">Aria EV</a><span> doesn’t look much different from any other student-built electric prototype—no different from </span><a href="https://www.southwesterncc.edu/news/auto-students-build-program-electric-vehicle" target="_blank">the battery-powered cars built by engineering students</a><span> from dozens of universities every year. Beneath its panels, however, is a challenge to the modern auto industry: What if electric vehicles were designed to be repaired by their owners?</span></p><p>The Aria project began in 2024, when roughly 20 students assembled at <a href="https://www.tue.nl/en/" target="_blank">Eindhoven University of Technology</a> in the Netherlands under the university’s <a href="https://www.tuecomotive.nl/" target="_blank">Ecomotive</a> team structure, which operates like a small startup. Students apply, are selected, and spend a year developing a vehicle in a setting meant to mirror industry practice.</p><p>The goal, says team spokesperson <a href="https://nl.linkedin.com/in/sarpgurel?trk=people-guest_people_search-card" target="_blank">Sarp Gurel</a>, “was to make the car as accessible and repairable as possible.” Gurel, who graduated last July with a bachelor’s degree in industrial engineering and is currently working toward a master’s degree at Eindhoven, says the Aria EV is not yet road legal. Its purpose is to demonstrate that repairability can be embedded into EV architecture from the outset. 
With that objective in mind, the team focused first on the most challenging and expensive component in almost any EV: the battery.</p><h2>Modular Battery Design in EVs</h2><p>Aria’s total battery capacity is 13 kilowatt-hours, which is far below the 50- to 80-kWh packs common in mass-market electric sedans and SUVs. The scale is closer to that of a lightweight urban vehicle or neighborhood EV, which is more appropriate for a student-built prototype focused on concept validation rather than long-range highway travel.</p><p>What distinguishes Aria is not the battery’s size, but its structure. Rather than housing the 13 kWh in a single sealed pack, the team divided the total capacity into six smaller modules. Each module weighs about 12 kilograms—much easier to handle than the 400 kg or more that’s typical of a conventional EV’s monolithic battery pack. This makes it feasible for a single person to remove, swap, and replace modules.</p><p>The modules sit in reinforced compartments beneath the vehicle floor and are secured using a bottom-latch system. When the vehicle is fully powered down, a latch can be triggered to mechanically release a module. Integrated interlocks isolate the high-voltage connection before a module can be lowered. This combination of hardware and software ensures that component-level replacement is straightforward and relatively safe, bringing the idea of “repairability by design” into a tangible, hands-on form. 
Even with this careful design, modular batteries introduce technical considerations that must be managed, particularly when integrating different modules over the vehicle’s lifespan.</p><p><a href="https://car.osu.edu/news/2024/06/borgerson-brings-technical-expertise-battery-workforce-team" target="_blank">Joe Borgerson</a>, a laboratory research operations coordinator at <a href="https://www.osu.edu/" target="_blank">Ohio State University</a>’s <a href="https://car.osu.edu/" rel="noopener noreferrer" target="_blank">Center for Automotive Research</a>, in Columbus, notes one complication: Mixing new and aged battery modules can create challenges. <span>Borgerson has spent the past three years designing and building a battery pack from scratch as part of the U.S. </span><a href="https://www.energy.gov/" target="_blank">Department of Energy</a><span>’s </span><a href="https://batteryworkforcechallenge.org/" target="_blank">Battery Workforce Challenge</a><span>. “Our team is integrating a student-designed pack into a </span><a href="https://www.stellantis.com/en" target="_blank">Stellantis</a><span> vehicle platform,” he says, “</span><span>which has given me deep exposure to both automaker design philosophy and high-voltage EV architecture.</span><span>”</span></p><p>To complement their car’s hardware, the Aria team developed a diagnostic app that can be accessed via a dedicated USB-C port. When the user connects their smartphone, the app presents a 3D visualization on the phone screen that points out faults, locates problems, identifies the necessary tools to fix them, and provides step-by-step repair instructions. The tools themselves are stored in the vehicle. The system aims to reduce as many barriers as possible for users to maintain and extend a vehicle’s service life.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Eighteen college students posing around a modular electric vehicle inside a museum." 
class="rm-shortcode" data-rm-shortcode-id="fbc9f915ac286de2bb18b02069b5ef15" data-rm-shortcode-name="rebelmouse-image" id="b3cb0" loading="lazy" src="https://spectrum.ieee.org/media-library/eighteen-college-students-posing-around-a-modular-electric-vehicle-inside-a-museum.jpg?id=65096805&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Students at Eindhoven University of Technology unveiled their Aria EV prototype in November.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Sarp Gürel</small></p> <h2>Challenges of EV Modularity</h2><p>While Aria prioritizes modularity, the broader EV industry trend is toward integrated, interdependent systems that simplify manufacturing processes and cut costs. The trend extends even to structural battery packs, in which the pack itself doubles as a load-bearing part of the vehicle.</p><p><span>Unlike mainstream EVs, Aria treats energy storage as a replaceable subsystem. Whether it scales economically and structurally to larger, highway-capable EVs remains an open question. </span><span>But designing a vehicle for repairability involves trade-offs that ripple across every system in the car.</span></p><p>Borgerson says that dividing systems into removable units adds interfaces—mechanical fasteners, electrical connectors, seals, and safety interlocks. Each interface must survive vibration, temperature swings, and crash forces. More interfaces can mean added mass and complexity compared with tightly integrated battery structures. 
And these components take up space that would otherwise be used for energy storage.</p><p><a href="https://mae.osu.edu/people/darpino.2" target="_blank">Matilde D’Arpino</a><span>, an assistant professor of mechanical and aerospace engineering at Ohio State whose research focuses on electrified power trains and advanced vehicle architectures, notes that EV batteries are already modular internally—cells form modules, and modules form packs—but making modules externally replaceable changes validation requirements. High-voltage isolation, thermal performance, and crash integrity must remain robust even when energy storage is divided into removable segments. </span><span>In other words, what seems like a simple way to make batteries user-friendly actually cascades into system-level design decisions influencing safety, thermal management, and vehicle structure.</span></p><h2>Impact of Right-to-Repair Laws</h2><p><a href="https://spectrum.ieee.org/tag/right-to-repair" target="_self">Right-to-repair</a> legislation in Europe and the United States could push automakers to reconsider sealed architectures for batteries and other components. Economic incentives could also emerge from fleet operators or long-term owners who benefit from replacing a fraction of a battery system rather than an entire pack. But a<span>dopting this approach would require changes across supply chains, certification processes, and service models.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Interior view of the driver's side of a modular electric vehicle. Its elements are minimal and stripped down to essentials." 
class="rm-shortcode" data-rm-shortcode-id="7afd2ec8121adb93c0a2ebe6ad65afde" data-rm-shortcode-name="rebelmouse-image" id="cc673" loading="lazy" src="https://spectrum.ieee.org/media-library/interior-view-of-the-driver-s-side-of-a-modular-electric-vehicle-its-elements-are-minimal-and-stripped-down-to-essentials.jpg?id=65096844&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The Aria prototype isn’t ready to go toe-to-toe with production EVs, but it demonstrates some proof-of-concept ideas about repairability.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Sarp Gürel</small></p> <p><span>Consumer expectations are also shaping the boundaries of what designs like Aria’s can become. In the mainstream market, buyers consistently prioritize longer </span><a href="https://spectrum.ieee.org/tag/range-anxiety" target="_self">driving range</a><span> and lower sticker prices—two factors that have defined competition among models such as the </span><a href="https://www.chevrolet.com/electric/bolt-ev" target="_blank">Chevrolet Bolt EV</a><span>, the </span><a href="https://www.hyundaiusa.com/us/en/vehicles/ioniq-5?&chid=sem&fb=io5_bnd_husa&CID=20166438&PID=202442677&CRID=795801287407&SID=4075918&AID=402292811&ds_query=hyundai+ioniq+5&ads_rl=8569909089&&&&gclsrc=aw.ds&ds_rl=1277805&gad_source=1&gad_campaignid=12376057835&gbraid=0AAAAADgtKc23yPObUZ3QivHwSbWd8NAwS&gclid=CjwKCAiAkvDMBhBMEiwAnUA9BRvs9klZKC3h1Lcso-GfaUvk3xTqkxGcTiyeyQzTtuFz8YyR-IGjlBoC0lcQAvD_BwE" target="_blank">Hyundai Ioniq 5</a><span>, and the </span><a href="https://www.tesla.com/model3" target="_blank">Tesla Model 3</a><span>. Range anxiety remains a powerful psychological factor, even as charging infrastructure expands, and price sensitivity has intensified as government incentives fluctuate. Designing for modularity and repairability, as Aria does, must ultimately contend with these consumer priorities. 
Any added cost, weight, or complexity must be weighed against a market that still rewards vehicles that go farther for less money.</span></p><p>Ultimately, however, Aria inserts a different priority into the equation: repair as a core design requirement. Whether that priority becomes mainstream will depend less on whether it can be engineered—and more on whether regulators, manufacturers, and consumers decide it should be.</p>]]></description><pubDate>Wed, 04 Mar 2026 17:21:07 +0000</pubDate><guid>https://spectrum.ieee.org/ev-battery-swapping-aria-ev</guid><category>Electric-vehicles</category><category>Modular-design</category><category>Eindhoven-university-of-technolo</category><category>Modular-ev</category><dc:creator>Willie D. Jones</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/view-underneath-a-modular-electric-vehicle-s-hood-inside-a-simple-wire-frame-between-two-of-its-wheels-holds-an-ample-toolbox.jpg?id=65096797&amp;width=980"></media:content></item><item><title>Taara Brings Fiber-Optic Speeds to Open-Air Laser Links​</title><link>https://spectrum.ieee.org/free-space-optical-link-taara</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-plastic-elliptic-cylinder-case-equipped-with-a-lens-the-device-is-mounted-on-a-metal-beam.jpg?id=65113617&width=2000&height=1500&coordinates=0%2C0%2C0%2C0"/><br/><br/><p>Taara started as a <a href="https://en.wikipedia.org/wiki/X_Development" rel="noopener noreferrer" target="_blank">Google X moonshot spin-off</a> aimed at <a href="https://spectrum.ieee.org/free-space-optical-communication-taara" target="_self">connecting rural villages in sub-Saharan Africa</a> with beams of light. Its newest product, debuting this week at <a href="https://www.mwcbarcelona.com/" rel="noopener noreferrer" target="_blank">Mobile World Congress</a> (MWC), in Barcelona, aims at a different kind of connectivity problem: getting internet access into buildings in cities that already have plenty of fiber—just not where it’s needed.</p><p>The Sunnyvale, Calif.–based company transmits data via infrared lasers, the kind typically used in fiber-optic lines. However, Taara’s systems beam gigabits across kilometers over open air. “Every one of our Taara terminals is like a digital camera with a laser pointer,” says <a href="https://linkedin.com/in/mahesh-krishnaswamy-341b471" target="_blank">Mahesh Krishnaswamy</a>, Taara’s CEO. “The laser pointer is the one that’s shining the light on and off, and the digital camera is on the [receiving] side.”</p><p>Taara’s new system—<a href="https://www.taaraconnect.com/product/beam" rel="noopener noreferrer" target="_blank">Taara Beam</a>, being demoed at <a href="https://www.mwcbarcelona.com/themes/game-changers" rel="noopener noreferrer" target="_blank">MWC’s “Game Changers</a>” platform—prioritizes efficiency and a compact size. Each Beam unit is the size of a shoebox and weighs just 8 kilograms, and can be mounted on a utility pole or the side of a building. 
According to the company, Beam will deliver fiber-competitive speeds of up to 25 gigabits per second with low, 50-microsecond latency.</p><p><span>Taara’s former parent company, Krishnaswamy says, is now also a prominent client. Google’s main campus in Mountain View, Calif., is near a landing point for a major </span><a data-linked-post="2671361590" href="https://spectrum.ieee.org/undersea-internet-cables-meta-waterworth" target="_blank">submarine fiber-optic cable</a><span>.</span></p><p>“One of the Google buildings was literally a few hundred meters away from the landing spot in California,” he says. “Yet they couldn’t connect the two points because of land rights and right-of-way issues.… Without digging and trenching into federal land, we are able to connect the two points at tens of gigabits per second. And so many Googlers are actually using our technology today.”</p><h3>A Fingernail-Size Chip Shrinks Taara’s Tech</h3><p>Krishnaswamy says his laser pointer and digital camera analogy doesn’t quite do justice to the engineering problems the company had to tackle to fit all the gigabit-per-second photonics into a weather-hardened, shoebox-size device.</p><p>The Taara Beam must steer its laser link across kilometers of open air so that the Beam device can receive it on the other end of the line. Effectively, that means the device’s laser can’t be off target by more than a few degrees. </p><p>Beam approaches the steering problem by physically shaping the laser pulse itself. Taara’s photonics chip splits the laser beam carrying the data into more than a thousand separate streams, delaying each one by a closely controlled amount. The result is a laser wavefront that can be pointed anywhere the system directs.</p><p>Krishnaswamy likens this to the effects of pebbles tossed into a pond. Dropping pebbles in a careful sequence, he says, can create interference patterns in the waves that ripple outward. 
“These thousand emitters are equivalent to a thousand stones,” he says. “And I’m able to delay the phase of each of them. That allows me to steer [the wavefront] whichever direction I want it to go.”</p><p>The idea behind this technology—called a <a href="https://en.wikipedia.org/wiki/Phased_array" target="_blank">phased array</a>—is not new. But turning it into a commercial optical communications device, at Taara Beam’s scale and range, is where others have so far fallen short.</p><p>“Radio-frequency phased arrays like <a href="https://www.linkedin.com/pulse/overview-how-starlinks-phased-array-antenna-dishy-works-curtis-arnold/" target="_blank">Starlink antennas</a> are well known,” Krishnaswamy says. “But to do this with optics, and in a commercial way, not just an experimental way, is hard.”</p><p>This isn’t how the company started out, however. </p><p>In 2019, when the company was still a Google X subsidiary, Krishnaswamy says, Taara launched its first commercial product, the <a href="https://x.company/blog/posts/bringing-light-speed-internet-to-sub-saharan-africa/" target="_blank">traffic-light-size Lightbridge</a>. Like Beam, Lightbridge boasts fiberlike connection speeds, and it has to date been deployed in more than 20 countries around the world—including the Google campus.</p><p><span>Taara’s upgraded model, </span><a href="https://www.taaraconnect.com/product/lightbridge-pro" target="_blank"> Lightbridge Pro</a><span>, launched last month and is also on display this week at MWC. Lightbridge Pro adds one crucial capability Lightbridge lacked: an automatic backup. When fog or rain disrupts Lightbridge’s optical link, the system switches traffic to a paired radio connection. When conditions clear, Lightbridge Pro switches traffic back to the faster laser-data connection. 
The company says that combination keeps the link up 99.999 percent of the time—less than 5 minutes of downtime in a year.</span></p><p>Both Lightbridge and Lightbridge Pro mechanically position their mirrors, achieving three degrees of pointing accuracy. An onboard tracking system inside the unit also relocks the beams automatically whenever the unit gets shifted or jostled.</p><h3>The Future of Taara Beam Deployment</h3><p>Krishnaswamy says that while Taara continues to install and support Lightbridge and Lightbridge Pro, he hopes the company can also begin installing Taara Beam units for select early customers as soon as later this year. </p><p><a href="https://www.kaust.edu.sa/en/study/faculty/mohamed-slim-alouini" target="_blank">Mohamed-Slim Alouini</a>, distinguished professor of electrical and computer engineering at King Abdullah University of Science and Technology in Thuwal, Saudi Arabia, says the bandwidth of free-space optical (FSO) technologies like Taara Beam and Lightbridge still leaves plenty of room to grow. </p><p> “Like any physical medium, free-space optics has a capacity limit,” Alouini says. “But laboratory experiments have <a href="https://www.nict.go.jp/en/press/2025/12/16-1.html" target="_blank">already demonstrated</a> fiberlike performance with terabits-per-second data rates over FSO links. 
The real gap is not in raw capacity but in practical deployment.”</p><p><a href="https://www.linkedin.com/in/atul-bhatnagar-1a41212/" target="_blank">Atul Bhatnagar</a>, formerly of <a href="https://en.wikipedia.org/wiki/Nortel" target="_blank">Nortel</a> and <a href="https://www.cambiumnetworks.com/" target="_blank">Cambium Networks</a>, and currently serving as advisor to Taara, sees room for optimism even when it comes to practical deployment.</p><p>“Current Taara architecture is capable of delivering hundreds of gigabits per second over the next several years,” he says.</p><p>Krishnaswamy adds that Beam’s compact form factor makes it suitable for more than just terrestrial applications.</p><p>“We’ll continue to do the work that we’re doing on the ground. But to the extent that space solutions are taking off, we would love to be part of that,” he says. “Data center-to-data center in space is something we are really looking at using for this technology.</p><p>“Because when you have multiple servers up in space, you can’t run fiber from one to the other,” he adds. “But these photonics modules will be able to point and track and transmit gigabits and gigabits of data to each other.”</p><p>For now, Taara’s ambitions are closer to Earth—specifically to the buildings, utility poles, and city blocks where fiber still hasn’t arrived. 
Which is, after all, where the company’s story began.</p><p><em><strong>UPDATE 4 March 2026: </strong></em><em>The weight of the Taara Beam (8 kg) and the launch year of the Taara Lightbridge (2019) were both corrected.</em></p>]]></description><pubDate>Wed, 04 Mar 2026 15:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/free-space-optical-link-taara</guid><category>Free-space-optics</category><category>Mobile-world-congress</category><category>Google</category><category>Digital-divide</category><category>Internet</category><dc:creator>Margo Anderson</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-plastic-elliptic-cylinder-case-equipped-with-a-lens-the-device-is-mounted-on-a-metal-beam.jpg?id=65113617&amp;width=980"></media:content></item><item><title>This Offshore Wind Turbine Will House a Data Center Underwater</title><link>https://spectrum.ieee.org/data-center-floating-wind-turbine</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-floating-wind-turbine-at-sea-an-expanded-view-of-a-buoyant-cylinder-at-the-turbine-s-base-reveals-a-large-hollow-interior-whi.jpg?id=65106142&width=2000&height=1500&coordinates=166%2C0%2C167%2C0"/><br/><br/><p>As data-center developers frantically seek to secure power for their operations, one startup is proposing a novel solution: Build them into floating offshore wind turbines. </p><p>San Francisco–based offshore wind-power developer <a href="https://www.aikidotechnologies.com/" rel="noopener noreferrer" target="_blank">Aikido Technologies</a> today announced its plans to start housing data centers in the underwater tanks that keep its turbine platforms afloat. The turbines will supply the power for the servers, while onboard batteries and a grid connection will provide backup. </p><p>The company’s first prototype, a 100-kilowatt unit, is scheduled to launch in the North Sea off the coast of Norway by the end of this year. A 15-to-18-megawatt project off the coast of the United Kingdom may follow in 2028.</p><p>Aikido is one of several companies planning data centers in unusual places—<a href="https://spectrum.ieee.org/underwater-data-centers" target="_self">underwater</a>, on floating buoys, in coal mines, and now on offshore wind turbines. The creativity stems from the convergence of several trends: rapidly rising energy demand from data centers, the need for domestic renewable power production, and limited real estate. </p><p>The North Sea serves as an ideal first spot for floating, wind-powered data centers because European policymakers and companies are looking to regain domestic control over energy production. They’re also looking to host an AI economy on servers within the continent’s boundaries. Floating wind platforms keep the compute out of sight while tapping the stronger, more consistent air streams that blow over deep waters, where traditional, seabed-mounted turbine monopiles can’t go. 
</p><p>“A lot of energy in the clean-energy space is focused on powering AI data centers quickly, reliably, and cleanly in a way that does not upset neighbors and remains safe, fast, and cheap,” says Ramez Naam, an independent clean-energy investor who does not have a stake in Aikido. “Aikido has that, and a smart team,” he says.</p><h2>Floating Wind-Power Designs Evolve</h2><p>Aikido’s design builds on many iterations tested by the growing floating wind industry. When Norwegian energy giant Equinor finished construction on the <a href="https://www.equinor.com/energy/hywind-scotland" target="_blank">world’s first floating wind farm </a>in 2017, it kept the turbines upright with ballasted steel columns extending 78 meters into the water—a design called a spar platform. This gave it a dense mass like the keel of a boat. Since then, the floating wind industry has largely <a href="https://spectrum.ieee.org/floating-offshore-wind-turbine" target="_self">coalesced around a semisubmersible design</a> based on oil and gas platforms. Semisubmersibles don’t go as deep as spar platforms; instead, they extend buoyancy horizontally. Anchors, chains, and ropes keep the platform floating within a certain radius.</p><p>Aikido is taking the semisubmersible approach. Its football-field-size platform holds the turbine in the center, and three legs extend tripod-like outward, like a Christmas-tree stand. At the end of each leg is a ballast that reaches 20 meters deep. This holds tanks largely filled with fresh water to maintain the platform’s buoyancy in the salty ocean.</p><p>The data centers will go in the upper part of each ballast tank. There’s room for a 3- to 4-MW data hall in each tank, giving the platform a combined compute of 10 to 12 MW. Below the data halls is an open chamber used as a safety barrier, and below that sit the freshwater tanks. The water is piped up to the data center for liquid cooling of the servers. 
The warmed water is then funneled back down the ballast into the tank. There, proximity to the cold ocean water cools it again as the heat is conducted out through the tank’s steel walls. </p><p>“We have this power from the wind. We have free cooling. We think we can be quite cost competitive compared to conventional data-center solutions,” says Aikido CEO <a href="https://www.linkedin.com/in/sam-kanner/" rel="noopener noreferrer" target="_blank">Sam Kanner</a>. “This crunch in the next five years is an opportunity for us to prove this out and supply AI compute where it’s needed.”</p><p>One challenge, he says, is that liquid cooling can’t cover all the data center’s needs. For example, heat generated from Ethernet switches that connect the GPUs can’t be liquid-cooled with commercially available technology. So Aikido installed an air-conditioning system for those components.</p><p>Another challenge is the marine environment, which is “pretty brutal to engineer around because there’s the increased salinity, there’s debris, and there’s various kinds of corrosion and fouling of metal piping that you wouldn’t have in a freshwater environment,” says <a href="https://www.thefai.org/profile/daniel-king" rel="noopener noreferrer" target="_blank">Daniel King</a>, a research fellow at the Foundation for American Innovation in Washington who focuses on AI infrastructure. </p><h2>Offshore Data Centers Face Challenges</h2><p><span>Aikido’s plan avoids the prickly not-in-my-backyard complaints that are dogging both onshore wind and data-center projects. It might circumvent some inquiries into water usage and power demand too, or so Aikido’s thinking goes. </span></p><p>But it might not be that easy. “Instinctively many people reach for offshore or even orbital outer-space data centers as a way to circumvent the typical burdens of environmental reviews,” says King. 
“But there could be more or additional requirements around discharging heat and the effects that has on marine life that are different from the considerations of a terrestrial data center. It’s unclear to me whether this actually makes life easier or harder for a developer.” </p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="3D rendering of a crane lowering a pre-fabricated data center into a hollow semi-submersible platform for a floating wind turbine." class="rm-shortcode" data-rm-shortcode-id="0a67f0ed0900a837eaabf97204dc71b9" data-rm-shortcode-name="rebelmouse-image" id="6f350" loading="lazy" src="https://spectrum.ieee.org/media-library/3d-rendering-of-a-crane-lowering-a-pre-fabricated-data-center-into-a-hollow-semi-submersible-platform-for-a-floating-wind-turbin.jpg?id=65111639&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Prefabricated data halls could be installed quayside, followed by final electrical and plumbing connections to commission the data center.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Aikido</small></p><p>Aikido’s “design choice to use the fresh water in the ballast as a working fluid is a novel one” that, thanks to the closed-loop system, may “alleviate some of the engineering problems you see when a really high temperature fluid is pumping its heat directly into a marine environment,” King says.</p><p>Offshore sites are also vulnerable to sabotage, King notes. Since Russia’s invasion of Ukraine, fleets of vessels directed by the Kremlin have <a href="https://www.bbc.com/news/world-europe-65309687" target="_blank">reportedly</a> started messing with offshore wind and communications infrastructure in northern Europe. 
Russian and Chinese boats have allegedly <a href="https://spectrum.ieee.org/black-sea-energy-link" target="_self">cut subsea cables in recent years</a>.</p><p>But vandalism is a risk anywhere, including at conventional data centers, Aikido CEO Kanner notes. Unlike those on land, where the local police have jurisdiction, Aikido’s data centers would enjoy protection from national coast guards, which he suggests gives an added degree of security. </p><h2>North Sea Hosts Clean Energy</h2><p>Kanner first began thinking about offshore wind turbines as a place to build data centers after a chance phone call with a cryptocurrency billionaire. The financier wanted to know whether turbines in international waters could power servers generating digital tokens at a moment when crypto-mining faced increased scrutiny from regulators. The talks fizzled. But that encounter sparked Kanner’s curiosity about how to use power generated onboard floating turbines. </p><p>When ChatGPT emerged in 2022 and sparked a heated debate over how to power and cool such technology, the idea to put the data center in the floating turbine clicked for Kanner. The idea really congealed after he met with the chief executive of Portland, Ore.–based <a href="https://panthalassa.com/" target="_blank">Panthalassa</a>. The wave-energy company was proposing to enclose small, remote data centers in buoys attached to equipment that generates power from the surf. Panthalassa <a href="https://www.youtube.com/watch?v=Q7Pmgq2JKbI" target="_blank">just completed</a> its full-scale prototype tests off the coast of Washington state last summer. </p><p>At that point, Aikido had already designed a modular platform for floating wind turbines. Each platform consists of 13 major steel components that are snapped together with pin joints—like IKEA furniture. 
The platforms fold up in a flat configuration that takes up roughly half the space of other designs, allowing them to be transported by a wider range of ships, according to Aikido. From there, it was a matter of figuring out how to accommodate a data center in the unused space. </p><p>Aikido’s prototype will use a refurbished<a href="https://en.wind-turbine-models.com/turbines/141-vestas-v17-75" target="_blank"> Vestas V-17 turbine</a>. It will need onboard batteries for backup power and will also be connected to the grid for additional power during seasons with less wind. Aikido envisions eventually sprinkling its data centers among large arrays of offshore turbines to tap into that larger power infrastructure. </p><p><span>Between Russia’s threat to expand its war in Ukraine to EU countries and the Trump administration’s bid to pressure Denmark into ceding sovereignty of Greenland to Washington, Europe is scrambling to build up its own energy production and AI capabilities. The North Sea, increasingly, looks like a primary theater of that effort. In January, nearly a dozen European nations banded together in a pact to transform the North Sea into a “</span><a href="https://www.canarymedia.com/articles/offshore-wind/european-nations-are-jointly-plotting-a-massive-offshore-wind-buildout" target="_blank">reservoir</a><span>” of clean power from offshore wind.</span></p>]]></description><pubDate>Tue, 03 Mar 2026 20:56:45 +0000</pubDate><guid>https://spectrum.ieee.org/data-center-floating-wind-turbine</guid><category>Floating-wind-turbine</category><category>Offshore-wind-farms</category><category>Data-center-energy</category><dc:creator>Alexander C. 
Kaufman</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-floating-wind-turbine-at-sea-an-expanded-view-of-a-buoyant-cylinder-at-the-turbine-s-base-reveals-a-large-hollow-interior-whi.jpg?id=65106142&amp;width=980"></media:content></item><item><title>Countdown to IEEE’s Annual Election</title><link>https://spectrum.ieee.org/ieee-annual-election-2026</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/hand-placing-a-ballot-in-a-green-box-against-a-blue-background.jpg?id=65111718&width=2000&height=1500&coordinates=0%2C0%2C0%2C0"/><br/><br/><p>This year’s annual election, which begins on 17 August, will include candidates for IEEE president-elect and other officer positions.</p><p>To see who is running for 2027 <a data-linked-post="2674856607" href="https://spectrum.ieee.org/2027-ieee-president-elect-candidates" target="_blank">IEEE president-elect</a> and the <a data-linked-post="2674856144" href="https://spectrum.ieee.org/2027-petition-president-elect-candidates" target="_blank">petition candidates</a>, visit the <a href="https://www.ieee.org/pe27" target="_blank">election website</a>.</p><p>The ballot also includes nominees for delegate-elect/director-elect offices submitted by division and region nominating committees, as well as <a href="https://ta.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Technical Activities</a> vice president-elect; <a href="https://ieeeusa.org/" rel="noopener noreferrer" target="_blank">IEEE-USA</a> president-elect; and <a href="https://standards.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Standards Association</a> board of governors members-at-large.</p><p>Those elected take office on 1 January 2027.</p><p> IEEE members who have not been nominated but want to run for an office other than IEEE president-elect must submit their petition intention to the IEEE Board of Directors by 1 April. Petitions should be sent to the IEEE Corporate Governance staff at elections@ieee.org. The petition intention deadline for IEEE president-elect was 31 December.</p><h2>Election Updates</h2><p>Regional elections will also take place. Eligible voting members in IEEE <a href="https://ieeer1.org/" rel="noopener noreferrer" target="_blank">Region 1</a> (Northeastern U.S.) 
and <a href="https://r2.ieee.org/" rel="noopener noreferrer" target="_blank">Region 2</a> (Eastern U.S.) will elect the future IEEE Region 2 delegate-elect/director-elect (Eastern and Northeastern U.S.) for the 2027–2028 term. Members in the future IEEE Region 10 (North Asia) will elect the IEEE Region 10 delegate-elect/director-elect for the same term. These changes reflect IEEE’s upcoming region realignment, as outlined in <em>The Institute’s</em> September 2024 article, “<a href="https://spectrum.ieee.org/region-realignment-2024-election" target="_self">How Region Realignment Will Impact IEEE Elections</a>.”</p><p>Beginning this year, only professional members will be eligible to vote in IEEE’s annual election or sign related petitions. Ballots will be created for eligible voting members on record as of 31 March. To ensure voting eligibility, all members should review and update their <a href="https://ieee.org/go/my_account" rel="noopener noreferrer" target="_blank">contact information</a> and <a href="https://ieee.org/election-preferences" rel="noopener noreferrer" target="_blank">communication preferences</a> by that date.</p><p>To support sustainability initiatives, the “Candidate Biographies and Statements” booklet will no longer be available in print. Members can access the candidate biographies and statements within their electronic ballot, view them on the <a href="https://ieee.org/about/corporate/election" rel="noopener noreferrer" target="_blank">annual election website</a>, or download the digital booklet. 
Members are also encouraged to vote electronically.</p><p>For more information about the offices up for election, the process for getting on the annual ballot, and deadlines, visit the <a href="https://www.ieee.org/about/corporate/election" target="_blank">website</a> or email <a href="mailto:elections@ieee.org">elections@ieee.org</a>.</p>]]></description><pubDate>Tue, 03 Mar 2026 19:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/ieee-annual-election-2026</guid><category>Ieee-news</category><category>Ieee-election</category><category>Ieee-president-elect</category><category>Type-ti</category><dc:creator>Elizabeth Fuscaldo</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/hand-placing-a-ballot-in-a-green-box-against-a-blue-background.jpg?id=65111718&amp;width=980"></media:content></item><item><title>Optimizing a Battery Electric Vehicle Thermal Management System</title><link>https://content.knowledgehub.wiley.com/optimizing-a-battery-electric-vehicle-thermal-management-system/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/mathworks-logo.png?id=26851519&width=980"/><br/><br/><p>This webinar looks at a Battery Electric Virtual Vehicle Model of a mid-size BEV, and uses Simulink and Simscape to facilitate design exploration, component refinement, and system-level optimization. The virtual vehicle comprises five subsystems: Electric powertrain, driveline, <span>refrigerant cycle, coolant cycle, and passenger cabin. The model will be tested using different drive cycles, cooling, and heating scenarios. The results will be analyzed to determine the impact of the different design parameters on vehicle consumption.</span></p><p>The resulting virtual vehicle will be used to:</p><ul><li>Test different drive cycles and environmental conditions</li><li>Perform sensitivity analysis</li><li>Optimize model to improve thermal performance and <span>consumption</span></li></ul><div><span><a href="https://content.knowledgehub.wiley.com/optimizing-a-battery-electric-vehicle-thermal-management-system/" target="_blank">Register now for this free webinar!</a></span></div>]]></description><pubDate>Tue, 03 Mar 2026 11:00:02 +0000</pubDate><guid>https://content.knowledgehub.wiley.com/optimizing-a-battery-electric-vehicle-thermal-management-system/</guid><category>Type-webinar</category><category>Battery-electric-vehicle</category><category>Electric-vehicles</category><category>Batteries</category><dc:creator>MathWorks</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/26851519/origin.png"></media:content></item><item><title>Watershed Moment for AI–Human Collaboration in Math</title><link>https://spectrum.ieee.org/ai-proof-verification</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/four-by-four-grid-of-circles-with-varying-color-gradient-patterns.jpg?id=65103143&width=2000&height=1500&coordinates=0%2C0%2C0%2C0"/><br/><br/><p><span>When Ukrainian mathematician </span><a href="https://people.epfl.ch/maryna.viazovska?lang=en" target="_blank">Maryna Viazovska</a><span> received a </span><a href="https://www.mathunion.org/imu-awards/fields-medal/fields-medals-2022" target="_blank">Fields Medal</a><span>—widely regarded as the Nobel Prize for mathematics—in July 2022,</span><span> it was big news. Not only was she the second woman to accept the honor in the award’s 86-year history, but she collected the medal just months after her country had been invaded by Russia. Nearly four years later, Viazovska is making waves again. <a href="https://www.math.inc/sphere-packing" target="_blank">Today</a>, in </span><span>a collaboration between humans and AI, Viazovska’s proofs have been formally verified, signaling rapid progress in AI’s abilities to <a href="https://spectrum.ieee.org/ai-math-benchmarks" target="_blank">assist</a> with mathemat</span><span>ical research. </span></p><p><span>“These new results seem very, very impressive, and definitely signal some rapid progress in this direction,” says AI-reasoning expert and Princeton University postdoc <a href="https://ai.princeton.edu/news/2025/ai-lab-welcomes-associate-research-scholars" target="_blank">Liam Fowl</a>, who was not involved in the work.</span></p><p>In her Fields Medal–winning research, Viazovska had tackled two versions of the sphere-packing problem, which asks: How densely can identical circles, spheres, et cetera, be packed in <em>n</em>-dimensional space? In two dimensions, the honeycomb is the best solution. In three dimensions, spheres stacked in a pyramid are optimal. But after that, it becomes exceedingly difficult to find the best solution, and to prove that it is in fact the best. 
</p><p>In 2016, Viazovska solved the problem in two cases. By using powerful mathematical functions known as (quasi-)modular forms, she proved that a symmetric arrangement known as E<sub>8</sub> is the <a href="https://annals.math.princeton.edu/articles/keyword/sphere-packing" target="_blank">best 8-dimensional packing</a>, and soon after proved with collaborators that another sphere packing called the <a href="https://annals.math.princeton.edu/2017/185-3/p08" target="_blank">Leech lattice is best in 24 dimensions</a>. Though seemingly abstract, this result has potential to help solve everyday problems related to dense sphere packing, including <a data-linked-post="2650280110" href="https://spectrum.ieee.org/novel-error-correction-code-opens-a-new-approach-to-universal-quantum-computing" target="_blank">error-correcting codes</a> used by smartphones and space probes.</p><p>The proofs were verified by the mathematical community and deemed correct, leading to the Fields Medal recognition. But formal verification—the ability of a proof to be verified by a computer—is another beast altogether. Since 2022, much <a href="https://cacm.acm.org/research/formal-reasoning-meets-llms-toward-ai-for-mathematics-and-verification/" target="_blank">progress</a> has been made in AI-assisted formal proof verification. </p><h2>Serendipity leads to formalization project</h2><p>A few years later, a chance meeting in Lausanne, Switzerland, between third-year undergraduate <a href="https://thefundamentaltheor3m.github.io/" target="_blank">Sidharth Hariharan</a> and Viazovska would reignite her interest in sphere-packing proofs. Though still very early in his career, Hariharan was already becoming adept at formalizing proofs.</p><p>“Formal verification of a proof is like a rubber stamp,” Fowl says. 
“It’s a kind of bona fide certification that you know your statements of reasoning are correct.”</p><p>Hariharan told Viazovska how he had been using the process of formalizing proofs to learn and really understand mathematical concepts. In response, Viazovska expressed an interest in formalizing her proofs, largely out of curiosity. From this, in March 2024 the <a href="https://thefundamentaltheor3m.github.io/Sphere-Packing-Lean/" target="_blank">Formalising Sphere Packing in Lean</a> project was born. <span>Lean is a popular programming language and “proof assistant” that allows mathematicians to write proofs that are then verified for absolute correctness by a computer.</span></p><p>A collaboration formed to write a human-readable “blueprint” that could be used to map the 8-dimensional proof’s various constituents and figure out which of them had and had not been formalized and/or proven, and then prove and formalize those missing elements in Lean. </p><p><span>“We had been building the project’s repository for about 15 months when we enabled public access in June 2025,” recalls Hariharan, now a first-year Ph.D. student at Carnegie Mellon University. “Then, in late October we heard from Math, Inc. for the first time.”</span></p><h2>The AI speedup</h2><p><a href="https://www.math.inc/" target="_blank">Math, Inc.</a> is a startup developing Gauss, an AI specifically designed to automatically formalize proofs. “It’s a particular kind of language model called a reasoning agent that’s meant to interleave both traditional natural-language reasoning and fully formalized reasoning,” explains <a href="https://jesse-michael-han.github.io/" target="_blank">Jesse Han</a>, Math, Inc. CEO and cofounder. “So it’s able to conduct literature searches, call up tools, and use a computer to write down Lean code, take notes, spin up verification tooling, run the Lean compiler, et cetera.”</p><p>Math, Inc. 
first hit the headlines when it announced that Gauss had completed a <a href="https://mathstodon.xyz/@tao/111847680248482955" target="_blank">Lean formalization of the strong <span>prime number theorem</span> (PNT)</a> in three weeks last summer, a task that Fields Medalist <a href="https://terrytao.wordpress.com/" target="_blank">Terence Tao</a> and <a href="https://sites.math.rutgers.edu/~alexk/" target="_blank">Alex Kontorovich</a> had been working on. Similarly, Math, Inc. contacted Hariharan and colleagues to say that Gauss had proven several facts related to their sphere-packing project.</p><p>“They told us that they had finished 30 ‘sorrys,’ which meant that they proved 30 intermediate facts that we wanted proved,” explains Hariharan. A proportion of these sorrys were shared with the project team and merged with their own work. “One of them helped us identify a typo in our project, which we then fixed,” adds Hariharan. “So it was a pretty fruitful collaboration.”</p><h2>From 8 to 24 dimensions</h2><p>But then, radio silence followed. Math, Inc. appeared to lose interest. However, while Hariharan and colleagues continued their labor of love, Math, Inc. was building a new and improved version of Gauss. “We made a research breakthrough sometime mid-January that produced a much stronger version of Gauss,” says Han. “This new version reproduced our three-week PNT result in two to three days.”</p><p>Days later, the new Gauss was steered back to the sphere-packing formalization. Working from the invaluable preexisting blueprint and work that Hariharan and collaborators had shared, Gauss not only autoformalized the 8-dimensional case, but also found and fixed a typo in the published paper, all in the space of five days.</p><p>“When they reached out to us in late January saying that they finished it, to put it very mildly, we were very surprised,” says Hariharan. 
“But at the end of the day, this is technology that we’re very excited about, because it has the capability to do great things and to assist mathematicians in remarkable ways.”</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="A laptop with sphere packing code in the foreground, with an autumn sunset at Carnegie Mellon in the background. " class="rm-shortcode" data-rm-shortcode-id="1dd0742602809b330ce11552ae9d6d3f" data-rm-shortcode-name="rebelmouse-image" id="898fd" loading="lazy" src="https://spectrum.ieee.org/media-library/a-laptop-with-sphere-packing-code-in-the-foreground-with-an-autumn-sunset-at-carnegie-mellon-in-the-background.jpg?id=65106120&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Hariharan was working on sphere-packing proof verification as the sun was setting behind Carnegie Mellon’s Hamerschlag Hall.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Sidharth Hariharan</small></p><p>The 8-dimensional sphere-packing proof formalization alone, <a href="https://leanprover.zulipchat.com/#narrow/channel/113486-announce/topic/Sphere.20Packing.20Milestone/with/575354368" target="_blank">announced on February 23</a>, represents a watershed moment for autoformalization and AI–human collaboration. But <a target="_blank"></a><a href="https://math.inc/sphere-packing" target="_blank">today, Math, Inc. revealed</a><span> </span>an even more impressive accomplishment: Gauss has autoformalized Viazovska’s 24-dimensional sphere-packing proof—all 200,000+ lines of code of it—in just two weeks. </p><p>There are commonalities between the 8- and 24-dimensional cases in terms of the foundational theory and overall architecture of the proof, meaning some of the code from the 8-dimensional case could be refactored and reused. 
However, Gauss had no preexisting blueprint to work from this time. “And it was actually significantly more involved than the 8-dimensional case, because there was a lot of missing background material that had to be brought on line surrounding many of the properties of the Leech lattice, in particular its uniqueness,” explains Han.</p><p>Though the 24-dimensional case was an automated effort, both Han and Hariharan acknowledge the many contributions from humans that laid the foundations for this achievement, regarding it as a collaborative endeavor overall between humans and AI.</p><p>But for Han, it represents even more: the beginning of a revolutionary transformation in mathematics, where extremely large-scale formalizations are commonplace. “A programmer used to be someone who punched holes into cards, but then the act of programming became separated from whatever material substrate was used for recording programs,” he concludes. “I think the end result of technology like this will be to free mathematicians to do what they do best, which is to dream of new mathematical worlds.”</p>]]></description><pubDate>Mon, 02 Mar 2026 18:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/ai-proof-verification</guid><category>Mathematics</category><category>Ai-reasoning</category><category>Large-language-models</category><category>Ai</category><dc:creator>Benjamin Skuse</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/four-by-four-grid-of-circles-with-varying-color-gradient-patterns.jpg?id=65103143&amp;width=980"></media:content></item><item><title>How Electrical Engineers Fight a War</title><link>https://spectrum.ieee.org/repair-ukraine-power-grid</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-worker-kneels-in-the-snow-while-welding-a-damaged-pipe-buried-underneath-the-rubble-of-a-power-station.jpg?id=65064523&width=2000&height=1500&coordinates=416%2C0%2C417%2C0"/><br/><br/><p><span>Every time Russia attacks Ukraine’s power infrastructure, Ukrainian engineers risk their lives in the scramble to get electricity flowing again. It’s a dangerous job at best, and a lethal one at worst. It also requires creativity. Time pressure and <a href="https://spectrum.ieee.org/russia-targets-ukraine-grid" target="_blank">equipment shortages</a> make it nearly impossible to rebuild things exactly as they were, so engineers must redesign on the fly. </span></p><p>These dangerous, stressful conditions have led to more engineers being hurt or killed. The rate of injuries among Ukrainian workers in electricity generation, transmission, and distribution <a target="_blank">jumped nearly 50 percent</a> after Russia’s full-scale invasion began four years ago, according to data provided by <a href="https://amnu.gov.ua/nagorna-antonina-maksymivna/" target="_blank"><span>Antonina Nagorna</span></a><span>, who leads the Department of Epidemiology and Physiology of Work at the Kundiiev Institute of Occupational Health, in Kyiv. By her count, at least 48 people had died on the job through the end of 2025, either while repairing damage or during the bombardment itself.</span></p><p><span>Transmission mastermind Oleksiy Brecht joined that grim count in January. Brecht, who was director for network operations and development at the Ukrainian grid operator </span><span><a href="https://ua.energy/" target="_blank">Ukrenergo</a></span><span>, died while coordinating work at Ukraine’s most attacked electrical switchyard, Kyivska, west of the capital. 
He was 47 years old.</span></p><p><span>Brecht’s life and death are a window into the realities of thousands of Ukrainian engineers who face conditions beyond what most engineers could imagine. “The war completely transformed the professional life of a top-manager engineer,” says </span><span><a href="https://www.linkedin.com/in/mariia-tsaturian-86560b282/" target="_blank">Mariia Tsaturian</a></span><span>, an energy analyst and chief communication officer at the think tank </span><span><a href="https://uafp.eu/" target="_blank">Ukraine Facility Platform</a></span><span>, who previously worked with Brecht at Ukrenergo. “As for junior staff, their world was turned upside down entirely. A substation engineer working under shelling is something no one had ever seen or experienced before,” she says.</span></p><h2>How Russia Attacks Ukraine’s Grid</h2><p><span>Over the course of the war, Russia has increasingly focused on destroying Ukraine’s energy infrastructure. It sends attack drones almost daily during the winter there, when heat and electricity are needed most to survive the bitter cold. Every 10 days or so it barrages Ukraine’s power system with combinations of missiles and hundreds of drones, repeatedly mangling equipment and cutting off power. The cold imposed on Ukrainian homes is </span><span><a href="https://www.counteroffensive.news/p/why-cold-darkness-worsen-ptsd-among" target="_blank">especially hard on former prisoners of war</a></span> held in Russia, where cold is routinely employed as a form of torture.</p><p><span>In the first two years of the war, keeping the grid flowing was a 24/7 job. 
But Ukrenergo has adapted to the impossible since then, says</span> <span><a href="https://ua.energy/about_us/the-management/chairman-of-the-management-board/" target="_blank"><span>Vitali<span>y Zay</span><span>chenko</span></span></a></span>, Ukrenergo’s CEO, <span>who somehow found a moment to speak with <em>IEEE</em> </span><span><em>Spectrum </em></span><span>via video call</span><span>. Now, “we are more prepared for each attack. We have well-trained teams. We have support from Europe,” he says.</span></p><p>But the risk involved in repairing the grid remains unnerving. Last month a crew from <a href="https://dtek.com/" target="_blank">DTEK</a>, Ukraine’s biggest private-sector energy firm, was traveling between locations when it was targeted by a Russian drone. They heard the drone coming and escaped before their <span><a href="https://x.com/DTEK_Group/status/2021986413487554807" target="_blank">bucket truck was destroyed</a></span>. Russian forces have employed “double tap” attacks against DTEK’s crews, targeting their power infrastructure with a follow-up strike designed to kill first responders—a practice <span><a href="https://ukraine.ohchr.org/en/Extensive-Civilian-Harm-from-Russian-Attacks-This-Spring" target="_blank">confirmed by the U.N</a></span>.</p><p><span>When Russia began targeting power infrastructure in October 2022, Brecht’s job shifted from high-level direction of grid planning and maintenance to near-constant triage and real-time system reengineering. Most weeks, Brecht spent several days in the field, crisscrossing the country to coordinate work at smashed substations. Brecht would often be found on site figuring out how to restart power using whatever equipment was available. 
“It was a unique decision every time,” says Zaychenko</span><span>.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Oleksiy Brecht seated in a conference room while listening intently to a virtual Ukrenergo meeting projected onto the wall." class="rm-shortcode" data-rm-shortcode-id="c2f0253c54a11a55e3e99dc84a2e67a0" data-rm-shortcode-name="rebelmouse-image" id="3143a" loading="lazy" src="https://spectrum.ieee.org/media-library/oleksiy-brecht-seated-in-a-conference-room-while-listening-intently-to-a-virtual-ukrenergo-meeting-projected-onto-the-wall.jpg?id=65065018&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Oleksiy Brecht died in January while overseeing repairs to a bombed-out substation near Kyiv. He called his employees at Ukrenergo “my fighters.” They called him “our general.”</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Ukrenergo</small></p><p><span>Zaychenko noted Brecht’s “genius” for finding creative grid fixes, his passion and leadership skills, and his credibility with power brokers in Ukraine and abroad. Brecht scoured the globe sourcing critical replacement parts, including stockpiled or older equipment from international utilities. Transformers, which </span><span><a href="https://spectrum.ieee.org/transformer-shortage" target="_self">can take a year or more</a></span> to source, are especially precious.</p><p><span>When the right equipment wasn’t forthcoming, Brecht figured out how to make do. For example, he would deploy transformers from Western Europe rated for 400 kilovolts to restart a 330-kV circuit. He would adapt transformers designed for 60-hertz alternating current for emergency use on Ukraine’s 50-Hz grid. 
</span><span>“He would find a way,” says Zay</span><span>chenko, who worked closely with Brecht for over 20 years.</span></p><p><span>Brecht’s assistant at Ukrenergo, Svitlana Dubas-Veremiienko, says he also contributed to the teams’ morale and confidence. She </span><span><a href="https://www.facebook.com/share/p/1DoAefkHYH/?mibextid=wwXIfr" target="_blank">shared on Facebook</a></span> that he smoked “like a locomotive” at the worst times, and yet exuded calm: <span>“In his presence, chaos subsided,” she wrote. </span><span>Brecht was not easy to intimidate. “He was someone who never feared anything or anyone,” adds Tsaturian.</span></p><p><span>Brecht’s work proved so essential that Ukrenergo</span><span>’s former Deputy CEO Andrii Nemyrovskyi recalls telling Ukraine’s Ministry of Defense in 2022 that the military must protect two people: Zaychenko</span><span>, because he ran grid operations, and Brecht because “system operations requires that the system exists.” Last week, President Zelenskyy </span><span><a href="https://babel.ua/en/news/125158-former-head-of-ukrenergo-oleksiy-brecht-who-died-while-working-at-a-substation-was-awarded-the-title-hero-of-ukraine" target="_blank"><span>posthumously named Brecht a “Hero of Ukraine</span></a>” </span><span>for “strengthening the energy security of Ukraine under martial law.”</span></p><h2><span></span>Ukraine’s Power Infrastructure Under Fire</h2><p><span>Brecht joined Ukrenergo in 2002 after earning his degree in power engineering from <a href="https://kpi.ua/en" target="_blank">Igor Sikorsky Kyiv Polytechnic Institute</a></span><span>. Over the next 20 years, he held leadership positions in dispatching and grid planning and development. He joined Ukrenergo’s management board in June 2022 and served as its interim leader in 2024.</span></p><p><span>Brecht’s contributions to Ukraine’s wartime survival began with several key upgrades to Ukrenergo’s technical capabilities ahead of the February 2022 invasion. 
He reintroduced “live line” techniques, providing training and equipment that enable crews to work on circuits while they continue to carry power to homes and to sustain critical needs.</span></p><p><span>Brecht also led preparations for Ukraine’s disconnection from the Russian grid and synchronization with Europe’s. When the invasion began, Ukraine’s Minister of Energy at the time, </span><span><a href="https://en.wikipedia.org/wiki/German_Galushchenko" target="_blank">Herman H<span>alushchenko</span></a></span><span>, had argued that switching from Russia’s grid to Europe’s was too risky, according to Tsaturian and Nemyrovskyi. But Brecht insisted—correctly, as hindsight has shown—that synchronizing with Europe would provide crucial stability and backup power. At his urging, the</span><span><a href="https://spectrum.ieee.org/ukraine-europe-electricity-grid" target="_self"> switch was completed in daring fashion</a></span> during the first weeks of the invasion.</p><p><span>(Halushchenko was dismissed last year following longstanding </span><span><a href="https://spectrum.ieee.org/ukraine-nuclear-power-fears-russia" target="_self"><span>allegations of corruption and Russian influence</span></a></span> in Ukraine’s energy sector that gave way to indictments in November 2025 that have rocked President Zelenskyy’s government. In January, Halushchenko was <span><a href="https://www.rferl.org/a/ukraine-corruption-energy-sector-kickbacks-scandal/33679486.html" target="_blank"><span>detained while attempting to leave the country</span></a></span> and charged with money laundering.)<span><br/></span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Two power grid workers in heavy coats preparing a bucket truck for power line repairs on a snowy residential street." 
class="rm-shortcode" data-rm-shortcode-id="ce5d28090ba881cfeb35ddc5f94ee063" data-rm-shortcode-name="rebelmouse-image" id="c7574" loading="lazy" src="https://spectrum.ieee.org/media-library/two-power-grid-workers-in-heavy-coats-preparing-a-bucket-truck-for-power-line-repairs-on-a-snowy-residential-street.jpg?id=65035406&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">DTEK workers conduct repairs on 26 January following a Russian attack in Kyiv.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Danylo Antoniuk/Cover Images/AP</small></p><h2>A Ukrainian Electrical Engineer’s Final Day</h2><p><span>Brecht’s final act of service followed the mass destruction of 19 January—a day when Kyiv’s high temperature was –10 °C. That night, Russian forces targeted Ukraine’s energy infrastructure with 18 ballistic missiles, a hypersonic cruise missile, 15 conventional cruise missiles, and 339 drones.</span></p><p><span>The impact included catastrophic damage at the 750-kV Kyivska substation, which feeds electricity to the capital and ensures cooling power for two nuclear power plants.</span></p><p><span>Brecht was leading a team of about 100 people who were undoing the damage when he made a deadly choice. He picked up a section of busbar, the solid conduit that connects circuits within substations. It had been blasted to the ground and, unbeknownst to Brecht, was carrying lethal voltage. It’s unclear whether its circuit was still connected, or if it had </span><span><a href="https://spectrum.ieee.org/transmission-line-safety-suit" target="_self"><span>picked up voltage from another circuit</span></a></span><span>.</span></p><p><span>Zaychenko says an investigation is ongoing to provide answers. “I don</span><span>’t know why he touched this busbar. Maybe because of tiredness. Maybe something else,” he says. “He was trying to help the team to do this job quickly. 
It was a huge mistake and a huge loss for us.”</span></p>]]></description><pubDate>Mon, 02 Mar 2026 14:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/repair-ukraine-power-grid</guid><category>Ukraine</category><category>Russia-ukraine-war</category><category>Transmission-and-distribution</category><category>Power-grid</category><dc:creator>Peter Fairley</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-worker-kneels-in-the-snow-while-welding-a-damaged-pipe-buried-underneath-the-rubble-of-a-power-station.jpg?id=65064523&amp;width=980"></media:content></item><item><title>How Quantum Data Can Teach AI to Do Better Chemistry</title><link>https://spectrum.ieee.org/quantum-chemistry</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/illustration-of-a-human-head-in-profile-with-a-spiral-upon-which-human-figures-are-walking-overlaid-on-an-image-of-an-atom.png?id=63744636&width=2000&height=1500&coordinates=0%2C183%2C0%2C184"/><br/><br/><p><strong>Sometimes a visually compelling</strong> metaphor is all you need to get an otherwise complicated idea across. In the summer of 2001, a Tulane physics professor named <a href="https://sse.tulane.edu/john-p-perdew-phd" rel="noopener noreferrer" target="_blank">John P. Perdew</a> came up with a banger. He wanted to convey the hierarchy of computational complexity inherent in the behavior of electrons in materials. He called it “<a href="https://pubs.aip.org/aip/acp/article-abstract/577/1/1/573973/Jacob-s-ladder-of-density-functional?redirectedFrom=fulltext" rel="noopener noreferrer" target="_blank">Jacob’s Ladder</a>.” He was appropriating an idea from the Book of Genesis, in which Jacob dreamed of a ladder “set up on the earth, and the top of it reached to heaven. And behold the angels of God ascending and descending on it.”</p><p>Jacob’s Ladder represented a gradient and so too did Perdew’s ladder, not of spirit but of computation. At the lowest rung, the math was the simplest and least computationally draining, with materials represented as a smoothed-over, cartoon version of the atomic realm. As you climbed the ladder, using increasingly more intensive mathematics and compute power, descriptions of atomic reality became more precise. And at the very top, nature was perfectly described via impossibly intensive computation—something like what God might see.</p><p>With this metaphor in mind, we propose to extend Jacob’s Ladder beyond Perdew’s version, to encompass <em><em>all</em></em> computational approaches to simulating the behavior of electrons. 
And instead of climbing rung by rung toward an unreachable summit, we have an idea to <em><em>bend</em></em> the ladder so that even the very top lies within our grasp. Specifically, we at Microsoft envision a hybrid approach. It starts with using quantum computers to generate exquisitely accurate data about the behavior of electrons—data that would be prohibitively expensive to compute classically. This quantum-generated data will then train AI models running on classical machines, which can predict the properties of materials with remarkable speed. By combining quantum accuracy with AI-driven speed, we can ascend Jacob’s Ladder faster, designing new materials with novel properties and at a fraction of the cost.</p><p>In our approach, the base of Jacob’s Ladder still starts with classical models that treat atoms as simple balls connected by springs—models that are fast enough to handle millions of atoms over long times, but with the lowest precision. As we ascend the ladder, some quantum mechanical calculations are added to semiempirical methods. Eventually, we’ll get to the full quantum behavior of individual electrons but with their interactions modeled in an averaged way; this greater accuracy requires significant compute power, which means you can only simulate molecules of no more than a few hundred atoms. At the top will be the most computationally intensive methods—prohibitively expensive on classical computers but tractable on quantum computers.</p><p>In the coming years, quantum computing and AI will become critical tools in the pursuit of new materials science and chemistry. When combined, their forces will multiply. 
We believe that using quantum computers to train AI on quantum data will yield hyperaccurate AI models that can reach ever higher rungs of computational complexity without prohibitive computational costs.</p><p>This powerful combination of quantum computing and AI could unlock unprecedented advances in chemical discovery, materials design, and our understanding of complex reaction mechanisms. Chemical and materials innovations already play a vital—if often invisible—role in our daily lives. These discoveries shape the modern world: new drugs to help treat disease more effectively, improving health and extending life expectancy; everyday products like toothpaste, sunscreen, and cleaning supplies that are safe and effective; cleaner fuels and longer-lasting batteries; improved fertilizers and pesticides to boost global food production; and biodegradable plastics and recyclable materials to shrink our environmental footprint. In short, chemical discovery is a behind-the-scenes force that greatly enhances our everyday lives.</p><p>The potential is vast. Anywhere AI is already in use, this new quantum-enhanced AI could drastically improve results. These models could, for instance, scan for previously unknown catalysts that could fix atmospheric carbon and so mitigate climate change. They could discover novel chemical reactions to turn waste plastics into useful raw materials and remove toxic “forever chemicals” from the environment. They could uncover new battery chemistries for safer, more compact energy storage. They could supercharge drug discovery for personalized medicine.</p><p>And that would just be the beginning. We believe quantum-enhanced AI will open up new frontiers in materials science and reshape our ability to understand and manipulate matter at its most fundamental level. 
Here’s how.</p><h2>How Quantum Computing Will Revolutionize Chemistry</h2><p>To understand how quantum computing and AI could help bend Jacob’s Ladder, it’s useful to look at the classical approximation techniques that are currently used in chemistry. In atoms and molecules, electrons interact with one another in complex ways called electron correlations. These correlations are crucial for accurately describing chemical systems. Many computational methods, such as <a href="https://www.synopsys.com/glossary/what-is-density-functional-theory.html" target="_blank">density functional theory</a> (DFT) or the <a href="https://insilicosci.com/hartree-fock-method-a-simple-explanation/" target="_blank">Hartree-Fock method</a>, simplify these interactions by replacing the intricate correlations with averaged ones, assuming that each electron moves within an average field created by all other electrons. Such approximations work in many cases, but they can’t provide a full description of the system.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" rel="float: left;" style="float: left;"> <img alt="a woman stirs a white powder inside a glove box." class="rm-shortcode" data-rm-shortcode-id="c0e1bdeb8e874740173f3f02c62eb308" data-rm-shortcode-name="rebelmouse-image" id="40c54" loading="lazy" src="https://spectrum.ieee.org/media-library/a-woman-stirs-a-white-powder-inside-a-glove-box.jpg?id=63745112&width=980"/> </p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="The second shows white powder in test tubes." 
class="rm-shortcode" data-rm-shortcode-id="5ac7a16946b97de61047d14b9ff28eb7" data-rm-shortcode-name="rebelmouse-image" id="2b1dd" loading="lazy" src="https://spectrum.ieee.org/media-library/the-second-shows-white-powder-in-test-tubes.jpg?id=63745094&width=980"/> </p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="shows a gloved hand holding a silvery disc close to an electronic apparatus." class="rm-shortcode" data-rm-shortcode-id="f3e77cc9b1b4502b2fab5ed6a3cf10f5" data-rm-shortcode-name="rebelmouse-image" id="98787" loading="lazy" src="https://spectrum.ieee.org/media-library/shows-a-gloved-hand-holding-a-silvery-disc-close-to-an-electronic-apparatus.jpg?id=63745089&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">A joint project between Microsoft and Pacific Northwest National Laboratory used AI and high-performance computing to identify potential materials for battery electrolytes. The most promising were synthesized [top and middle] and tested [bottom] at PNNL. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Dan DeLong/Microsoft</small></p><p>Electron correlation is particularly important in systems where the electrons are strongly interacting—as in materials with unusual electronic properties, like high-temperature superconductors—or when there are many possible arrangements of electrons with similar energies—such as compounds containing certain metal atoms that are crucial for catalytic processes.</p><p>In these cases, the simplified approach of DFT or Hartree-Fock breaks down, and more sophisticated methods are needed. As the number of possible electron configurations increases, we quickly reach an “exponential wall” in computational complexity, beyond which classical methods become infeasible.</p><p>Enter the quantum computer. 
Unlike classical bits, which are either on or off, qubits can exist in superpositions—effectively coexisting in multiple states simultaneously. This should allow them to represent many electron configurations at once, mirroring the complex quantum behavior of correlated electrons. Because quantum computers operate on the same principles as the electron systems they will simulate, they will be able to accurately simulate even strongly correlated systems—where electrons are so interdependent that their behavior must be calculated collectively.</p><h2>AI’s Role in Advancing Computational Chemistry</h2><p>At present, even the computationally cheap methods at the bottom of Jacob’s Ladder are slow, and the ones higher up the ladder are slower still. AI models have emerged as powerful accelerators to such calculations because they can serve as emulators that predict simulation outcomes without running the full calculations. The models can speed up the time it takes to solve problems up and down the ladder by orders of magnitude.</p><p>This acceleration opens up entirely new scales of scientific exploration. In 2023 and 2024, we collaborated with researchers at <a href="https://www.pnnl.gov/" rel="noopener noreferrer" target="_blank">Pacific Northwest National Laboratory</a> (PNNL) on using <a href="https://arxiv.org/abs/2401.04070" rel="noopener noreferrer" target="_blank">advanced AI models</a> to evaluate over 32 million potential battery materials, looking for safer, cheaper, and more environmentally friendly options. This enormous pool of candidates would have taken about 20 years to explore using traditional methods. And yet, within less than a week, <a href="https://spectrum.ieee.org/ai-battery-material" target="_blank">that list was narrowed</a> to 500,000 stable materials and then to 800 highly promising candidates. 
Throughout the evaluation, the AI models replaced expensive and time-consuming quantum chemistry calculations, in some cases delivering insights half a million times as fast as traditional methods.</p><p>We then used high-performance computing (HPC) to validate the most promising materials with DFT and AI-accelerated molecular dynamics simulations. The PNNL team spent about nine months synthesizing and testing one of the candidates—a solid-state electrolyte that combines sodium, which is cheap and abundant, with other materials, and uses 70 percent less lithium than conventional lithium-ion designs. The team then built a prototype solid-state battery that they tested over a range of temperatures.</p><p>This potential battery breakthrough isn’t unique. AI models have also dramatically accelerated research in <a href="https://science.nasa.gov/earth/ai-open-science-climate-change/" rel="noopener noreferrer" target="_blank">climate science</a>, <a href="https://www.sciencedirect.com/science/article/pii/S3050585225000217" rel="noopener noreferrer" target="_blank">fluid dynamics</a>, <a href="https://www.simonsfoundation.org/2024/08/26/astrophysicists-use-ai-to-precisely-calculate-universes-settings/" rel="noopener noreferrer" target="_blank">astrophysics</a>, <a href="https://www.nature.com/articles/s44222-025-00349-8" rel="noopener noreferrer" target="_blank">protein design</a>, and <a href="https://www.nature.com/articles/d41586-025-00602-5" rel="noopener noreferrer" target="_blank">chemical and biological discovery</a>. By replacing traditional simulations that can take days or weeks to run, AI is reshaping the pace and scope of scientific research across disciplines.</p><p>However, these AI models are only as good as the quality and diversity of their training data. 
Whether sourced from high-fidelity simulations or carefully curated experimental results, these data must accurately represent the underlying physical phenomena to ensure reliable predictions. Poor or biased data can lead to misleading outcomes. By contrast, high-quality, diverse datasets—such as those from full-accuracy quantum simulations—enable models to generalize across systems and uncover new scientific insights. This is the promise of using quantum computing for training AI models.</p><h2>How to Accelerate Chemical Discovery</h2><p>The real breakthrough will come from strategically combining quantum computing’s and AI’s unique strengths. AI already excels at learning patterns and making rapid predictions. Quantum computers, which are still being scaled up to be practically useful, will excel at capturing electron correlations that classical computers can only approximate. So if you train classical models on quantum-generated data, you’ll get the best of both worlds: the accuracy of quantum computing delivered at the speed of AI.</p><p>As we learned from the Microsoft-PNNL collaboration on electrolytes, AI models alone can greatly speed up chemical discovery. In the future, quantum-accurate AI models will tackle even bigger challenges. Consider the basic discovery process, which we can think of as a funnel. Scientists begin with a vast pool of candidate molecules or materials at the wide-mouthed top, narrowing them down using filters based on desired properties—such as boiling point, conductivity, viscosity, or reactivity. Crucially, the effectiveness of this screening process depends heavily on the accuracy of the models used to predict these properties. Inaccurate predictions can create a “leaky” funnel, where promising candidates are mistakenly discarded or poor ones mistakenly advanced.</p><p>Quantum-accurate AI models will dramatically improve the precision of chemical-property predictions. 
They’ll be able to help identify “first-time right” candidates, sending only the most promising molecules to the lab for synthesis and testing—which will save both time and cost.</p><p>Another key aspect of the discovery process is understanding the chemical reactions that govern how new substances are formed and behave. Think of these reactions as a network of roads winding through a mountainous landscape, where each road represents a possible reaction step, from starting materials to final products. The outcome of a reaction depends on how quickly it travels down each path, which in turn is determined by the energy barriers along the way—like mountain passes that must be crossed. To find the most efficient route, we need accurate calculations of these barrier heights, so that we can identify the lowest passes and chart the fastest path through the reaction landscape.</p><p>Even small errors in estimating these barriers can lead to incorrect predictions about which products will form. Case in point: A slight miscalculation in the energy barrier of an environmental reaction could mean the difference between labeling a compound a “forever chemical” or one that safely degrades over time.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="70e0b9b540bc0e061b38252e88243293" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/X1aWMYukuUk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> </p><p>Accurate modeling of reaction rates is also essential for designing catalysts—substances that speed up and steer reactions in desired directions. Catalysts are crucial in industrial chemical production, carbon capture, and biological processes, among many other things. 
Here, too, quantum-accurate AI models can play a transformative role by providing the high-fidelity data needed to predict reaction outcomes and design better catalysts.</p><p>Once these AI models, which run on classical computers, are trained with quantum-computing data, they will revolutionize computational chemistry by delivering quantum-level precision: Researchers will be able to run high-accuracy simulations on laptops or desktop computers, rather than relying on massive supercomputers or future quantum hardware. By making advanced chemical modeling more accessible, these tools will democratize discovery and empower a broader community of scientists to tackle some of the most pressing challenges in health, energy, and sustainability.</p><h2>Remaining Challenges for AI and Quantum Computing</h2><p>By now, you’re probably wondering: When will this transformative future arrive? It’s true that quantum computers still struggle with <a href="https://spectrum.ieee.org/quantum-error-correction" target="_blank">error rates</a> and limited lifetimes of usable qubits. And they still need to scale up: Meaningful chemistry simulations beyond the reach of classical computation will require hundreds to thousands of high-quality qubits with error rates of around 10<span><sup>-15</sup></span>, or one error in a quadrillion operations. Achieving this level of reliability will require fault tolerance through redundant encoding of quantum information in logical qubits, each consisting of hundreds of physical qubits, thus requiring a total of about a million physical qubits. Current AI models for chemical-property predictions may not have to be fully redesigned. 
We expect that it will be sufficient to start with models pretrained on classical data and then fine-tune them with a few results from quantum computers.</p><p> Despite some open questions, the potential rewards in terms of scientific understanding and technological breakthroughs make our proposal a compelling direction for the field. The quantum computing industry has begun to move beyond the early noisy prototypes, and high-fidelity quantum computers with low error rates could be possible <a href="https://www.darpa.mil/research/programs/quantum-benchmarking-initiative" target="_blank">within a decade</a>.</p><p>Realizing the full potential of quantum-enhanced AI for chemical discovery will require focused collaboration between chemists and materials scientists who understand the target problems, experts in quantum computing who are building the hardware, and AI researchers who are developing the algorithms. Done right, quantum-enhanced AI could start to tackle the world’s toughest challenges—from climate change to disease—years ahead of anyone’s expectations. <span class="ieee-end-mark"></span></p>]]></description><pubDate>Mon, 02 Mar 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/quantum-chemistry</guid><category>Quantum-computing</category><category>Quantum-chemistry</category><category>Drug-discovery</category><category>Batteries</category><category>Ai-models</category><category>Microsoft</category><dc:creator>Chi Chen</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/illustration-of-a-human-head-in-profile-with-a-spiral-upon-which-human-figures-are-walking-overlaid-on-an-image-of-an-atom.png?id=63744636&amp;width=980"></media:content></item><item><title>What Military Drones Can Teach Self-Driving Cars</title><link>https://spectrum.ieee.org/military-drones-self-driving-cars</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/silhouette-from-the-back-of-an-adults-head-as-they-look-at-two-monitors-one-screen-displays-a-drone-and-the-other-shows-self-d.jpg?id=65098234&width=2000&height=1500&coordinates=416%2C0%2C417%2C0"/><br/><br/><p><a href="https://spectrum.ieee.org/self-driving-cars/missy-cummings" target="_blank">Self-driving cars often struggle</a> with situations that are commonplace for human drivers. When confronted with construction zones, school buses, power outages, or misbehaving pedestrians, these vehicles often behave unpredictably, leading to crashes or freezing events, causing significant disruption to local traffic and possibly blocking first responders from doing their jobs. Because self-driving cars cannot successfully handle such routine problems, self-driving companies use human babysitters to remotely supervise them and intervene when necessary.</p><p>This idea—humans supervising autonomous vehicles from a distance—is not new. The U.S. military has been doing it since the 1980s with unmanned aerial vehicles (UAVs). In those early years, the military experienced numerous accidents due to poorly designed control stations, lack of training, and communication delays.</p><p>As a Navy fighter pilot in the 1990s, I was one of the first researchers to examine how to improve the UAV remote supervision interfaces. The thousands of hours I and others have spent working on and observing these systems generated a deep body of knowledge about how to safely manage remote operations. With recent revelations that U.S. 
commercial self-driving car remote operations are handled by <a href="https://www.c-span.org/program/senate-committee/tesla-and-waymo-executives-others-testify-about-self-driving-cars/672835" rel="noopener noreferrer" target="_blank">operators in the Philippines</a>, it is clear that self-driving companies have not learned the hard-earned military lessons that would promote safer use of self-driving cars today.</p><p>While stationed in the Western Pacific during the Gulf War, I spent a significant amount of time in air operations centers, learning how military strikes were planned, implemented, and then replanned when the original plan inevitably fell apart. After obtaining my PhD, I leveraged this experience to begin research on the remote control of UAVs for all three branches of the U.S. military. Sitting shoulder-to-shoulder in tiny trailers with operators flying UAVs in local exercises or from 4,000 miles away, my job was to learn about the pain points for the remote operators as well as to identify possible improvements as they executed supervisory control over UAVs that might be flying halfway around the world.</p><p>Supervisory control refers to situations where humans monitor and support autonomous systems, stepping in when needed. For self-driving cars, this oversight can take several forms. The first is teleoperation, where a human remotely controls the car’s speed and steering. Operators sit at a console with a steering wheel and pedals, similar to a racing simulator. Because this method relies on real-time control, it is extremely sensitive to communication delays.</p><p>The second form of supervisory control is remote assistance. Instead of driving the car in real time, a human gives higher-level guidance. For example, an operator might click a path on a map (called laying “breadcrumbs”) to show the car where to go, or interpret information the AI cannot understand, such as hand signals from a construction worker. 
This method tolerates more delay than teleoperation but is still time-sensitive.</p><h2>Five Lessons From Military Drone Operations</h2><p>Over 35 years of UAV operations, the military consistently encountered five major challenges, which provide valuable lessons for self-driving cars.</p><h3>Latency</h3><p>Latency—delays in sending and receiving information due to distance or poor network quality—is the single most important challenge for remote vehicle control. Humans also have their own built-in delay: neuromuscular lag. Even under perfect conditions, people cannot reliably respond to new information in less than 200–500 milliseconds. In remote operations, where communication lag already exists, this makes real-time control even more difficult.</p><p>In early drone operations, U.S. Air Force pilots in Las Vegas (the primary U.S. UAV operations center) attempted to take off and land drones in the Middle East using teleoperation. With at least a two-second delay between command and response, the accident rate was <a href="https://dsiac.dtic.mil/articles/reliability-of-uavs-and-drones/" rel="noopener noreferrer" target="_blank">16 times that of fighter jets conducting the same missions</a>. The military switched to local line-of-sight operators and eventually to fully automated takeoffs and landings. When I interviewed the pilots of these UAVs, they all stressed how difficult it was to control the aircraft with significant time lag.</p><p>Self-driving car companies typically rely on cellphone networks to deliver commands. These networks are unreliable in cities and prone to delays. This is one reason many companies prefer remote assistance instead of full teleoperation. But even remote assistance can go wrong. 
In <a href="https://www.forbes.com/sites/bradtempleton/2024/03/26/waymo-runs-a-red-light-and-the-difference-between-humans-and-robots/" rel="noopener noreferrer" target="_blank">one incident</a>, a Waymo operator instructed a car to turn left when a traffic light appeared yellow in the remote video feed—but the network latency meant that the light had already turned red in the real world. After Waymo moved its remote operations center from the U.S. to the Philippines, latency increased even further. It is imperative that control not be so remote, both to resolve the latency issue and to increase oversight of security vulnerabilities.</p><h3>Workstation Design</h3><p>Poor interface design has caused many drone accidents. The military learned the hard way that confusing controls, difficult-to-read displays, and unclear autonomy modes can have disastrous consequences. Depending on the specific UAV platform, the FAA attributed between 20% and 100% of Army and Air Force UAV <a href="https://apps.dtic.mil/sti/pdfs/ADA460102.pdf" rel="noopener noreferrer" target="_blank">crashes caused by human error through 2004</a> to poor interface design.</p><h3>UAV crashes (1986-2004) caused by human factors problems, including poor interface and procedure design. 
These two categories do not sum to 100% because both factors could be present in an accident.</h3><br/><table border="0" style="white-space: unset;" width="100%"><tbody><tr><th align="left" scope="col" style="color: #ffffff; background-color: #000000;"></th><th align="left" scope="col" style="color: #ffffff; background-color: #000000;" width="25%">Human Factors</th><th align="left" scope="col" style="color: #ffffff; background-color: #000000;" width="25%"> Interface Design</th><th align="left" scope="col" style="color: #ffffff; background-color: #000000;" width="25%"> Procedure Design</th></tr><tr><th align="left" scope="col" style="color: #ffffff; background-color: #000000;"> Army Hunter</th><td align="left" style="background-color: #DFD5C1;"> 47%</td><td align="left" style="background-color: #DFD5C1;"> 20%</td><td align="left" style="background-color: #DFD5C1;"> 20%</td></tr><tr><th align="left" scope="col" style="color: #ffffff; background-color: #000000;"> Army Shadow</th><td align="left" style="background-color: #E9E3D6;"> 21%</td><td align="left" style="background-color: #E9E3D6;"> 80%</td><td align="left" style="background-color: #E9E3D6;"> 40%</td></tr><tr><th align="left" scope="col" style="color: #ffffff; background-color: #000000;">Air Force Predator</th><td align="left" style="background-color: #DFD5C1;"> 67%</td><td align="left" style="background-color: #DFD5C1;"> 38%</td><td align="left" style="background-color: #DFD5C1;"> 75%</td></tr><tr><th align="left" scope="col" style="color: #ffffff; background-color: #000000;" width="25%"> Air Force Global Hawk</th><td align="left" style="background-color: #E9E3D6;"> 33%</td><td align="left" style="background-color: #E9E3D6;"> 100%</td><td align="left" style="background-color: #E9E3D6;"> 0%</td></tr></tbody></table><p>Many UAV aircraft crashes have been caused by poor human control systems. 
In one case, buttons were placed on the controllers such that it was relatively easy to <a href="https://spectrum.ieee.org/review-djis-new-fpv-drone-is-effortless-exhilarating-fun" target="_self">accidentally shut off the engine</a> instead of firing a missile, a design flaw that led to accidents in which remote operators <a href="https://dspace.mit.edu/handle/1721.1/84129" rel="noopener noreferrer" target="_blank">inadvertently shut the engine down instead of launching a missile</a>.</p><p>The self-driving industry reveals hints of comparable issues. Some autonomous shuttles use off-the-shelf gaming controllers, which—while inexpensive—were never designed for vehicle control. The off-label use of such controllers can lead to mode confusion, which was a factor in a <a href="https://www.govtech.com/transportation/after-crash-orlandos-self-driving-bus-back-on-the-road" rel="noopener noreferrer" target="_blank">recent shuttle crash</a>. Significant human-in-the-loop testing is needed to avoid such problems, not only prior to system deployment but also after major software upgrades.</p><h3>Operator Workload</h3><p>Drone missions typically include long periods of surveillance and information gathering, occasionally ending with a missile strike. These missions can sometimes last for days, for example while the military waits for a person of interest to emerge from a building. As a result, the remote operators experience extreme swings in workload: sometimes overwhelming intensity, sometimes crushing boredom. Both conditions can lead to errors.</p><p>When operators teleoperate drones, workload is high and fatigue can quickly set in. But when onboard autonomy handles most of the work, operators can become bored, complacent, and less alert. 
This pattern is <a href="https://www.airuniversity.af.edu/Wild-Blue-Yonder/Articles/Article-Display/Article/2144225/airmen-and-unmanned-aerial-vehicles-the-danger-of-generalization/" rel="noopener noreferrer" target="_blank">well documented in UAV research</a>.</p><p>Self-driving car operators likely experience similar issues with tasks ranging from interpreting confusing signs to helping cars escape dead ends. In simple scenarios, operators may be bored; in emergencies—like driving into a flood zone or responding during a citywide power outage—they can quickly become overwhelmed.</p><p>The military has tried for years to have one person supervise many drones at once, because it is far more cost-effective. However, cognitive switching costs (regaining awareness of a situation after switching control between drones) result in workload spikes and high stress. That, coupled with increasingly complex interfaces and communication delays, has made this extremely difficult.</p><p>Self-driving car companies likely face the same roadblocks. They will need to model operator workloads and be able to reliably predict appropriate staffing levels and how many vehicles a single person can effectively supervise, especially during emergency operations. If every self-driving car turns out to need a dedicated human paying close attention, such operations would no longer be cost-effective.</p><h3>Training</h3><p>Early drone programs lacked formal training requirements; what training existed was designed by pilots, for pilots. Unfortunately, supervising a drone is more akin to air traffic control than to actually flying an aircraft, so the military often placed drone operators in critical roles with inadequate preparation. This caused many accidents. 
Only years later did the military conduct <a href="https://www.researchgate.net/publication/238795397_Enhancing_Unmanned_Aerial_System_Training_A_Taxonomy_of_Knowledge_Skills_Attitudes_and_Methods" rel="noopener noreferrer" target="_blank">a proper analysis of the knowledge, skills, and abilities needed for safe remote operations</a> and change its training program.</p><p>Self-driving companies do not publicly share their training standards, and no regulations currently govern the qualifications for remote operators. On-road safety depends heavily on these operators, yet very little is known about how they are selected or taught. Commercial aviation dispatchers, whose role closely resembles that of self-driving remote operators, are required to have formal training overseen by the FAA; we should hold commercial self-driving companies to similar standards.</p><h3>Contingency Planning</h3><p>Aviation has strong protocols for emergencies, including predefined procedures for lost communication, backup ground control stations, and highly reliable onboard behaviors when autonomy fails. In the military, drones may fly themselves to safe areas or land autonomously if contact is lost. Systems are designed with cybersecurity threats—like GPS spoofing—in mind.</p><p>Self-driving cars appear far less prepared. The <a href="https://waymo.com/blog/2025/12/autonomously-navigating-the-real-world" rel="noopener noreferrer" target="_blank">2025 San Francisco power outage</a> left Waymo vehicles frozen in traffic lanes, blocking first responders and creating hazards. These vehicles are supposed to perform “minimum-risk maneuvers” such as pulling to the side—but many of them didn’t. This suggests gaps in contingency planning and basic fail-safe design.</p><div class="horizontal-rule"></div><p><span>The history of military drone operations offers crucial lessons for the self-driving car industry. 
Decades of experience show that remote supervision demands extremely low latency, carefully designed control stations, manageable operator workload, rigorous, well-designed training programs, and strong contingency planning.</span></p><p>Self-driving companies appear to be repeating many of the early mistakes made in drone programs. Remote operations are treated as a support feature rather than a mission-critical safety system. But as long as AI struggles with uncertainty, which will be the case for the foreseeable future, remote human supervision will remain essential. The military learned these lessons through painful trial and error, yet the self-driving community appears to be ignoring them. The self-driving industry has the chance—and the responsibility—to learn from our mistakes in combat settings before it harms road users everywhere.</p>]]></description><pubDate>Mon, 02 Mar 2026 12:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/military-drones-self-driving-cars</guid><category>Drones</category><category>Military-robots</category><category>Self-driving-cars</category><dc:creator>Missy Cummings</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/silhouette-from-the-back-of-an-adults-head-as-they-look-at-two-monitors-one-screen-displays-a-drone-and-the-other-shows-self-d.jpg?id=65098234&amp;width=980"></media:content></item><item><title>IEEE President’s Note: Engineering a Modern Renaissance</title><link>https://spectrum.ieee.org/ieee-presidents-note-engineering-renaissance</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/person-wearing-a-scarf-over-a-dark-sweater-with-a-blue-background.png?id=65004859&width=2000&height=1500&coordinates=5%2C0%2C5%2C0"/><br/><br/><p>Consider a powerful parallel between the advancements made during the Renaissance and the developments made by today’s engineers.</p><p>The Renaissance was a uniquely fertile era. Its ethos of curiosity and creativity fostered unprecedented <a data-linked-post="2675310822" href="https://spectrum.ieee.org/cinzia-davia-physics" target="_blank">collaboration across disciplines</a>. Artists, scientists, philosophers, and patrons engaged in a shared pursuit of human potential, beauty, and advancements in art, science, and literature.</p><p>But the Renaissance wasn’t just a cultural awakening. It was a systems-level transformation: a convergence of disciplines, minds, and methods that redefined what humanity could achieve. And in many ways, it mirrors the collaborative spirit we strive for within our IEEE communities.</p><h2>Collaboration Is a Catalyst</h2><p>During the Renaissance, breakthroughs didn’t happen in isolation. They emerged from intersections of different disciplines. Collaboration was the norm: Artists worked with mathematicians to perfect their creations’ accuracy, and architects consulted astronomers to design buildings that reflected celestial order. 
It was interdisciplinary design thinking centuries before the concept was given a name.</p><h3>Who’s up for a Challenge?</h3><br/><p>The IEEE Impact Challenge, which launched in January, aims not only to address real-world problems through purpose-driven engineering but also to attract new members, foster cross‑disciplinary collaboration, and design a better world for all.</p><p>The <a href="https://www.ieee.org/ieee-future-tech-explorers-challenge" target="_blank">IEEE Future Tech Explorers program</a> invites IEEE members to partner with others to inspire tomorrow’s engineers and technologists by creating interactive educational experiences that spark curiosity and open doors for young minds.</p><p>The IEEE Response Quest seeks solutions that enable near-real-time situational awareness for those providing emergency response and relief assistance.</p><p>We welcome educators, designers, engineers, and innovators from every technical discipline to come together, collaborate across communities, and demonstrate the power of IEEE when we unite around a shared purpose.<br/></p>Learn more at the <a href="https://ieee.org/impact-challenge" target="_blank">IEEE Impact Challenge website</a>.<p>It is at the intersections where disciplines and communities meet that the sparks of transformation ignite. The intersection of engineering and medicine gives us lifesaving devices. The intersection of computing and art produces immersive experiences from virtual, augmented, and mixed reality technology that expands human imagination. The intersection of policy and technology ensures ethical innovation. The outcomes of these crossroads remind us that progress is rarely linear. It is woven from threads of varied expertise, perspectives, and values.</p><p>When we collaborate across specialties, from electrical and biomedical to aerospace and software, we unlock new possibilities. 
And when we engage with industry, educators, policymakers, standard developers, and the public, we elevate those possibilities into solutions. We do it together, because no single engineer, technologist, or discipline can solve all the challenges we face.</p><p>The Renaissance teaches us that collaboration is a catalyst for advancing society. And so, I ask: What if we are living in a new, modern renaissance?</p><p>What if our members are today’s da Vincis, designing systems that serve humanity? What if our volunteers are modern-day patrons, investing time, talent, and heart into building a better world? What if our students and young professionals are the architects of tomorrow’s breakthroughs, fluent in computer code, ethics, and global impact, ready to collaborate across borders, sectors, and disciplines?</p><p>What if our conferences, technical standards, and humanitarian technologies are the printing presses of our time, disseminating knowledge, sparking dialogue, and scaling solutions? What if our collective imagination is the canvas upon which the next century of innovation will be painted?</p><p>And what if, like the Renaissance, our era is defined not only by invention but also by intersection, where many voices and perspectives converge to shape technologies that reflect humanity’s full spectrum?</p><p>Imagine engineers working together with ethicists to ensure responsible AI; with environmental scientists to safeguard our planet; and with local communities to design solutions that solve their challenges. Also imagine engineers partnering with <a data-linked-post="2670490806" href="https://spectrum.ieee.org/ieee-move-hurricane-helene-milton" target="_blank">disaster relief agencies</a> to design real-time systems, restore communication networks, and deliver lifesaving technologies when survivors need them most.</p><p>So let us think like Renaissance creators. Let us design with empathy and collaborate across boundaries. 
Let us honor that legacy not just by preserving the past but by building systems that empower the future for everyone.</p><p>When we unite technical excellence with human purpose, we don’t just innovate; we elevate. And in doing so, we carry forward the timeless truth of the Renaissance: Humanity’s greatest achievements are born not from isolation but from intersection and connection.</p><p>—Mary Ellen Randall</p><p>IEEE president and CEO</p><p>Please share your thoughts with me: <a href="mailto:president@ieee.org">president@ieee.org</a>.</p>]]></description><pubDate>Sun, 01 Mar 2026 19:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/ieee-presidents-note-engineering-renaissance</guid><category>Ieee-news</category><category>Ieee-presidents-column</category><category>Type-ti</category><dc:creator>Mary Ellen Randall</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/person-wearing-a-scarf-over-a-dark-sweater-with-a-blue-background.png?id=65004859&amp;width=980"></media:content></item><item><title>Letting Machines Decide What Matters</title><link>https://spectrum.ieee.org/ai-new-physics</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/large-particle-detector-with-circular-structure-person-standing-below.png?id=65005476&width=2000&height=1500&coordinates=26%2C0%2C27%2C0"/><br/><br/><p>In the time it takes you to read this sentence, the <a href="https://spectrum.ieee.org/tag/large-hadron-collider" target="_blank">Large Hadron Collider</a> (LHC) will have smashed billions of particles together. In all likelihood, it will have found exactly what it found yesterday: more evidence to support the <a href="https://home.cern/science/physics/standard-model" rel="noopener noreferrer" target="_blank">Standard Model</a> of particle physics.</p><p>For the engineers who built this 27-kilometer-long ring, this consistency is a triumph. But for theoretical physicists, it has been rather frustrating. As <a href="https://spectrum.ieee.org/u/matthew-hutson" rel="noopener noreferrer" target="_blank">Matthew Hutson</a> reports in “<a data-linked-post="2675068613" href="https://spectrum.ieee.org/particle-physics-ai" target="_blank">AI Hunts for the Next Big Thing in Physics</a>,” the field is currently gripped by a quiet crisis. In an email discussing his reporting, Hutson explains that the Standard Model, which describes the known elementary particles and forces, is not a complete picture. “So theorists have proposed new ideas, and experimentalists have built giant facilities to test them, but despite the gobs of data, there have been no big breakthroughs,” Hutson says. “There are key components of reality we’re completely missing.”</p><p>That’s why researchers are turning artificial intelligence loose on particle physics. They aren’t simply asking AI to comb through accelerator data to confirm existing theories, Hutson explains. They’re asking AI to point the way toward theories that they’ve never imagined. 
“Instead of looking to support theories that humans have generated,” he says, “unsupervised AI can highlight anything out of the ordinary, expanding our reach into unknown unknowns.” By asking AI to flag anomalies in the data, researchers hope to find their way to “<a href="https://en.wikipedia.org/wiki/Physics_beyond_the_Standard_Model" target="_blank">new physics</a>” that extends the Standard Model. </p><p>On the surface, this article might sound like another “AI for <em><em>X</em></em>” story. As <em><em>IEEE</em></em> <em><em>Spectrum</em></em>’s AI editor, I get a steady stream of pitches for such stories: AI for drug discovery, AI for farming, AI for wildlife tracking. Often what that really means is faster data processing or automation around the edges. Useful, sure, but incremental.</p><p>What struck me in Hutson’s reporting is that this effort feels different. Instead of analyzing experimental data after the fact, the AI essentially becomes part of the instrument, scanning for subtle patterns and deciding in real time what’s interesting. At the LHC, detectors record 40 million collisions per second. There’s simply no way to preserve all that data, so engineers have always had to build filters to decide which events get saved for analysis and which are discarded; nearly everything is thrown away. </p><p>Now those split-second decisions are increasingly handed to machine learning systems running on field-programmable gate arrays (FPGAs) connected to the detectors. The code must run on the chip’s limited logic and memory, and compressing a neural network into that hardware isn’t easy. Hutson describes one theorist pleading with an engineer, “Which of my algorithms fits on your bloody FPGA?”</p><p>This moment is part of a much older pattern. As Hutson writes in the article, new instruments have opened doors to the unexpected throughout the history of science. 
Galileo’s telescope <a href="https://www.nasa.gov/general/415-years-ago-astronomer-galileo-discovers-jupiters-moons/" target="_blank">revealed moons circling Jupiter</a>. Early microscopes exposed entire worlds of “<a href="https://hekint.org/2018/10/23/van-leeuwenhoeks-discovery-of-animalcules/" target="_blank">animalcules</a>” swimming around. These better tools didn’t just answer existing questions; they made it possible to ask new ones.</p><p>If there’s a crisis in particle physics, in other words, it may not just be about missing particles. It’s about how to look beyond the limits of the human imagination. Hutson’s story suggests that AI might not solve the mysteries of the universe outright, but it could change how we search for answers.</p>]]></description><pubDate>Sun, 01 Mar 2026 11:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/ai-new-physics</guid><category>Large-hadron-collider</category><category>Lhc</category><category>Particle-physics</category><category>Fpga</category><category>Machine-learning</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/large-particle-detector-with-circular-structure-person-standing-below.png?id=65005476&amp;width=980"></media:content></item><item><title>Xiangyi Cheng Is Bringing AR to Classrooms and Hospitals</title><link>https://spectrum.ieee.org/xiangyi-cheng-ar-classrooms-hospitals</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/xiangyi-cheng-pointing-at-a-robotic-arm-in-a-lab-setting-next-to-her-is-a-young-adult-woman-wearing-a-virtual-reality-headset.jpg?id=65096065&width=2000&height=1500&coordinates=416%2C0%2C417%2C0"/><br/><br/><p>When<a href="https://cse.lmu.edu/department/mechanicalengineering/faculty/?expert=xiangyi.cheng" rel="noopener noreferrer" target="_blank"> Xiangyi Cheng</a> published her first journal paper as a principal investigator in<a href="https://ieeeaccess.ieee.org/?gad_source=1&gad_campaignid=19948279603&gbraid=0AAAAApgaRM9zNBBlw-jYd7UE0gSXKor4y&gclid=Cj0KCQiA-YvMBhDtARIsAHZuUzLiREaQgsad40vwttsLGsVt00CzNOVcrZY4taO2lvzsqnbC8Q7hvBQaAuHqEALw_wcB" rel="noopener noreferrer" target="_blank"> <em><em>IEEE Access</em></em></a> in 2024, it marked more than a professional milestone. For Cheng, an IEEE member and an assistant professor of mechanical engineering at<a href="https://www.lmu.edu/" rel="noopener noreferrer" target="_blank"> Loyola Marymount University</a>, in Los Angeles, it was the latest waypoint in a career shaped by curiosity, persistence, and a belief that technology should serve people—not the other way around.</p><p>The paper’s title was “<a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10419075" rel="noopener noreferrer" target="_blank">Mobile Devices or Head-Mounted Displays: A Comparative Review and Analysis of Augmented Reality in Healthcare</a>.”</p><h3>XIANGYI CHENG</h3><br/><p><strong>Employer </strong></p><p><strong></strong>Loyola Marymount University, in Los Angeles</p><p><strong>Title </strong></p><p><strong></strong>Assistant professor of mechanical engineering</p><p><strong>Member grade </strong></p><p><strong></strong>Member</p><p><strong>Alma maters </strong></p><p><strong></strong>China University of Mining and Technology; Texas A&M University</p><p>Cheng’s work spans<a href="https://spectrum.ieee.org/topic/robotics/" target="_self"> robotics</a>, intelligent systems, 
<a href="https://spectrum.ieee.org/tag/human-machine-interaction" target="_self">human-machine interaction</a>, and<a href="https://spectrum.ieee.org/topic/artificial-intelligence/" target="_self"> artificial intelligence</a>. It has applications in patient-specific surgical planning, an approach whereby treatment is customized to the anatomy and clinical needs of each individual.</p><p>Her research also covers<a href="https://spectrum.ieee.org/tag/wearables" target="_self"> wearables</a> for rehabilitation and<a href="https://www.ibm.com/think/topics/augmented-reality" rel="noopener noreferrer" target="_blank"> augmented-reality</a>-enhanced engineering education.</p><p>The throughline of her career is sound judgment based on critical thinking. She urges her students to avoid the temptation to accept the answers they’re given by AI without cross-checking them against their own foundational understanding of the subject matter.</p><p>“AI can give you ideas,” Cheng says, “but it should never lead your thinking.”</p><p>That principle—honed through uncertainty, disciplinary shifts, and hard-earned confidence—has made Cheng an emerging voice in applied intelligent systems and a thoughtful educator preparing students for an AI-saturated world.</p><h2>From Xi’an to Beijing: A mind drawn to mathematics</h2><p>Cheng, born in <a href="https://www.britannica.com/place/Xian-China" rel="noopener noreferrer" target="_blank">Xi’an, China</a>, grew up in a household shaped by her parents’ disparate careers. Her father was a mining engineer, and her mother taught Chinese and literature at a high school.</p><p>“That contrast between logical and literary thinking helped me understand myself early,” Cheng says. 
“I liked math, and STEM felt natural to me.”</p><p>Several teachers reinforced her inclination, she says, particularly a math teacher whose calm, fair approach emphasized reasoning over punishments such as detention for misbehavior or failure to complete assignments.</p><p>“It wasn’t about being right,” Cheng says. “It was about thinking clearly.”</p><p>In 2011 she enrolled at the <a href="https://english.cumtb.edu.cn/" rel="noopener noreferrer" target="_blank">China University of Mining and Technology (Beijing)</a>, where she studied mechanical engineering. After graduating with a bachelor’s degree in 2015, she was unsure where the field would take her.</p><h2>An IEEE paper changed her trajectory</h2><p>Later in 2015, she traveled to the United States to study at<a href="https://case.edu/?campaignid=20602013936&adgroupid=154678129432&adid=675596328898&gad_source=1&gad_campaignid=20602013936&gbraid=0AAAAADHbx0VJm2eRyZsMlLOp8nqtMVwNX&gclid=Cj0KCQiA-YvMBhDtARIsAHZuUzLeRv-IjpkjT25nzbJLmuPBgndVAcirkurp9VNZxYujWgU2vMAOML8aAnHyEALw_wcB" rel="noopener noreferrer" target="_blank"> Case Western Reserve University</a>, in Cleveland.</p><p>She initially viewed the move as exploratory rather than a long-term commitment.</p><p>“I wasn’t thinking about a Ph.D.,” she says. “I wasn’t even sure research was for me.”</p><p>That uncertainty shifted in 2017, when Cheng submitted her <a href="https://ieeexplore.ieee.org/document/8460779" rel="noopener noreferrer" target="_blank">“IntuBot: Design and Prototyping of a Robotic Intubation Device</a>” paper to the<a href="https://2025.ieee-icra.org/" rel="noopener noreferrer" target="_blank"> IEEE International Conference on Robotics and Automation</a> (ICRA)—which was accepted.</p><p class="pull-quote"><span>“AI can give you more possibilities, but thinking is still our responsibility.”</span></p><p>Intubation is a procedure in which an endotracheal tube is inserted into a patient’s airway—usually through the mouth—to help them breathe. 
Because placing the tube correctly is not simple and usually must be done quickly, it requires training. That’s why research into robotic or assisted intubation systems focuses on improving speed, accuracy, and safety.</p><p>She presented her findings at ICRA in 2018, giving her early exposure to a global research community.</p><p>“That acceptance gave me confidence,” she recalls. “It showed me I could contribute to the field.”</p><p>Her advisor at Case Western encouraged her to switch from the mechanical engineering master’s program to the Ph.D. track. When the advisor moved to<a href="https://www.tamu.edu/index.html" target="_blank"> Texas A&M University</a>, in College Station, in 2019, Cheng decided to transfer. She completed her Ph.D. in mechanical engineering at Texas A&M in 2022.</p><p>Although she didn’t earn a degree from Case Western, she credits her experience there with clarifying her professional direction.</p><p>Shortly after graduating with her Ph.D., Cheng was hired as an assistant professor of mechanical engineering at <a href="https://www.onu.edu/" target="_blank">Ohio Northern University</a>, in Ada. She left in 2024 to become an assistant professor at Loyola Marymount.</p><h2>Engineering for the body—and the classroom</h2><p>Cheng’s research focuses on human-centered engineering, particularly in health care. One of her major projects addresses<a href="https://my.clevelandclinic.org/health/diseases/23521-syndactyly-webbed-digits" target="_blank"> syndactyly</a>, a congenital condition in which a newborn’s fingers are fused at birth. 
Surgeons rely on their experience to estimate the size and shape of skin grafts to be taken from another part of the body for the corrective surgery.</p><p>She is developing technology to scan the patient’s hand, extract anatomical landmarks, and use finite element analysis—a computer-based method for predicting how a physical object will behave under real-world conditions—to determine the optimal graft size and shape.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Smiling portrait of Xiangyi Cheng." class="rm-shortcode" data-rm-shortcode-id="526dfbc3e04a391b6b58fa177291d09d" data-rm-shortcode-name="rebelmouse-image" id="399a0" loading="lazy" src="https://spectrum.ieee.org/media-library/smiling-portrait-of-xiangyi-cheng.jpg?id=65096141&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Xiangyi Cheng designs human-centered intelligent systems with applications in health care and education.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Xiangyi Cheng</small></p><p>“Everyone’s hand is different,” Cheng says. “So the surgery should be personalized.”</p><p>Another project centers on developing smart gloves to assist with hand rehabilitation, pairing the unaffected hand with the injured one so the person’s natural motion can help guide therapy.</p><p>She also is exploring augmented reality in engineering education, using immersive visualization and AI tools to help students grasp three-dimensional concepts that are difficult to convey through traditional learning tools. 
Such visualization lets students see and interact with a digital world as if they’re inside it instead of viewing it on a flat screen.</p><h2>Teaching balance in an AI-driven world</h2><p>Despite working at the forefront of AI-enabled systems, Cheng cautions her students to be judicious in their use of the technology so that they don’t rely on it too heavily.</p><p>“AI is not always right and perfect,” she says. “You still need to be able to judge whether the answers it provides are correct.”</p><p>As AI continues to reshape engineering, Cheng remains grounded in a simple principle, she says: “We should use these tools. But we should never let them replace our judgment. AI can give you more possibilities, but thinking is still our responsibility.”</p><p>In her lab and classroom, Cheng prioritizes independent thinking, critical evaluation, and persistence. Many of her research students are undergraduates, and she encourages them to take ownership of their work—planning ahead, testing ideas, and learning from failure.</p><p>“The students who succeed don’t give up easily,” she says.</p><p>What she finds most rewarding, she says, is watching students mature. Reserved first-year students often become confident seniors who can present complex work and manage demanding projects.</p><p>“Getting to witness that transformation is why I teach,” she says.</p><p>For students considering engineering, Cheng offers straightforward advice: “Focus on mathematics. Engineering looks hands-on, but math is the foundation behind everything.”</p><p>With practice and persistence, she says, students can succeed and find meaning in the field.</p><h2>Why IEEE continues to matter</h2><p>Cheng joined IEEE in 2017, the year she submitted her first paper to ICRA. 
The organization has remained central to her professional development, she says.</p><p>She has served as a reviewer for IEEE journals and conferences including<a href="https://ieeexplore.ieee.org/document/10368213" target="_blank"> <em><em>Robotics and Automation Letters</em></em></a>,<a href="https://www.ieee-tmrb.org/new/" target="_blank"> <em><em>Transactions on Medical Robotics and Bionics</em></em></a>,<a href="https://ieeexplore.ieee.org/document/6894708" target="_blank"> <em><em>Transactions on Robotics</em></em></a>, the<a href="https://www.iros25.org/" target="_blank"> International Conference on Intelligent Robots and Systems</a>, and ICRA.</p><p>IEEE’s interdisciplinary scope aligns naturally with her work, she says, adding that the organization is “one of the few places that truly welcomes research across boundaries.”</p><p>More personally, IEEE helped her see a future she had not initially imagined.</p><p>“That first conference was a turning point,” she says. “It helped me realize I belonged.”</p>]]></description><pubDate>Sat, 28 Feb 2026 19:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/xiangyi-cheng-ar-classrooms-hospitals</guid><category>Robotics</category><category>Ai</category><category>Ieee-member-news</category><category>Type-ti</category><category>Careers</category><category>Biomedical</category><dc:creator>Willie D. Jones</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/xiangyi-cheng-pointing-at-a-robotic-arm-in-a-lab-setting-next-to-her-is-a-young-adult-woman-wearing-a-virtual-reality-headset.jpg?id=65096065&amp;width=980"></media:content></item><item><title>This Power Grid Pioneer’s EV Prediction Came 100 Years Too Soon</title><link>https://spectrum.ieee.org/charles-proteus-steinmetz</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/black-and-white-photo-of-people-posed-with-an-early-20th-century-car-one-man-leans-out-the-window-of-the-car-three-children-an.jpg?id=65005163&width=2000&height=1500&coordinates=68%2C0%2C68%2C0"/><br/><br/><p>Charles Proteus Steinmetz was a towering figure in the early decades of electrical engineering, easily the intellectual equal of Thomas Edison and Nikola Tesla—men he considered his friends. One of Steinmetz’s most significant achievements was to quantify and characterize the phenomenon of magnetic hysteresis—the behavior of magnetism in materials—and then devise a simple law that allowed for predictable transformer and motor design. He also established a revolutionary framework for analyzing AC circuits, which is still taught today in power engineering. And from 1893, he worked at General Electric, eventually becoming its chief consulting engineer, at a pivotal moment for the young company and for the U.S. effort to expand its power grid. For these and other accomplishments, he was well known in his time, even if he’s not exactly a household name today.</p><p>Steinmetz was also an evangelist for electric vehicles. In March 1920, he typed out his thoughts, comparing the pros and cons of EVs to the gasoline-propelled alternative. Among the advantages: low cost of maintenance, reliability, simplicity of operation, and lower cost of operation. The disadvantages: dependence on charging stations, limited range on a single charge, and lower speeds. More than a century later, his list remains remarkably pertinent.</p><p>Steinmetz could often be seen decked out in a suit and top hat, smoking his trademark Blackstone panatela cigar while riding around Schenectady, N.Y., in his 1914 Detroit Electric sedan. 
According to John Spinelli, emeritus professor of electrical and computer engineering at <a href="https://www.union.edu/" rel="noopener noreferrer" target="_blank">Union College</a>, in Schenectady, sometimes both Steinmetz <em><em>and</em></em> his chauffeur sat in the backseat—you could control the car from both the front and the rear—so that it would appear to be a driverless car. With a top speed of 40 kilometers per hour (25 miles per hour), the car ran on 14 six-volt batteries and could go about 48 km between charges.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Photo of a black car from the early 20th century." class="rm-shortcode" data-rm-shortcode-id="c8a9bd25e52e9f0ad0014dac6815368e" data-rm-shortcode-name="rebelmouse-image" id="d4b80" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-a-black-car-from-the-early-20th-century.jpg?id=65005180&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Steinmetz’s 1914 Detroit Electric car is now at Union College in Schenectady, N.Y., where Steinmetz had founded, chaired, and taught in the department of electrical engineering.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Paul Buckowski/Union College</small></p><p>In 1971, the car was purchased by Union College, where Steinmetz had founded, chaired, and taught in the department of electrical engineering. The car had been discovered rotting in a field, so it needed some work. Over the next decade, faculty and engineering students <a href="https://www.union.edu/news/stories/201404/Shifting-gears-A-new-home-for-Steinmetz-car" target="_blank">restored it</a> to its former glory. Still in running condition, it’s now on permanent display at the college.</p><h2>Steinmetz’s Contributions to Electrical Engineering</h2><p>Karl August Rudolf Steinmetz was born in 1865 in Breslau, Prussia (now known as Wrocław, Poland). 
He studied mathematics, physics, and the burgeoning field of electricity at the University of Breslau. He also joined a student socialist club and edited the party newspaper, <em><em>The People’s Voice</em></em>. He completed his doctoral studies, but before receiving his degree, Steinmetz fled to Switzerland in 1888, after his socialist writings came under the scrutiny of the Bismarck government.</p><p>Steinmetz immigrated to New York the following year, anglicized his first name, dropped his two middle names, and added Proteus, a nickname he had picked up at university (after the shape-shifting sea god of Greek mythology). Eventually, he became a U.S. citizen.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Black and white photo of a man with wire-rim spectacles smoking a cigar and writing at his desk." class="rm-shortcode" data-rm-shortcode-id="dac4dd8876b292524ca95255ae991938" data-rm-shortcode-name="rebelmouse-image" id="239bd" loading="lazy" src="https://spectrum.ieee.org/media-library/black-and-white-photo-of-a-man-with-wire-rim-spectacles-smoking-a-cigar-and-writing-at-his-desk.jpg?id=65005184&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Charles Proteus Steinmetz solved a number of important problems that helped the power grid expand.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Bettmann/Getty Images</small></p><p>In January 1892, Steinmetz burst onto the engineering scene when he read his paper “<a href="https://ia800805.us.archive.org/view_archive.php?archive=/13/items/crossref-pre-1909-scholarly-works/10.1109%252Fpaiee.1909.6660362.zip&file=10.1109%252Ft-aiee.1892.5570437.pdf" target="_blank">On the Law of Hysteresis</a>” before the American Institute of Electrical Engineers, a forerunner of today’s IEEE. 
I can’t quite imagine sitting through the delivery of its 62 pages, but those assembled recognized its groundbreaking nature. The ideas Steinmetz outlined allowed engineers to calculate power losses in the magnetic components of electrical machinery during the design phase. Prior to this, the design process for transformers and electric motors was largely trial and error, and power losses could be measured only after the machine was built, which greatly added to the cost.</p><p>Steinmetz was not just an equations and theory guy, though. He loved working in the lab and building things. In 1893, General Electric acquired the small manufacturing firm of Eickemeyer & Osterheld, in Yonkers, N.Y., where Steinmetz had worked since shortly after his arrival in the United States. So Steinmetz began his new life as a corporate engineer, an interesting turn for the socialist. During his first few years with GE, he mostly designed generators and transformers. But he also created an informal position for himself as a consultant, giving expert opinions on various problems across divisions. He eventually formalized this role, becoming GE’s chief consulting engineer, and he maintained a relationship with the company for the rest of his life, even after joining the faculty of Union College in 1902.</p><p>By the time Steinmetz died in 1923 at the age of 58, he had been granted more than 200 patents and had made major contributions to various subfields in electrical engineering, including phasors and complex numbers (for steady-state AC analysis); electrical transients, switching surges, and surge protection (based on his research on lightning); industrial research (including how to run a corporate lab); and engineering methods (by writing textbooks that standardized practice).</p><h2>Why Steinmetz Believed in Electric Cars</h2><p>By 1914, Steinmetz was convinced that the future of transportation was electric. 
In June, he <a href="https://ia600203.us.archive.org/22/items/electricvehicles51914chic/electricvehicles51914chic.pdf#page=17" target="_blank">addressed</a> the National Electric Light Association convention in Philadelphia with a bold prediction: <em><em>“</em></em>I have no doubt that in 10 years, more or less—rather less than more—we will see the field of the pleasure and business vehicle covered by such an electric car in large numbers. And I believe I underestimate when I say that 1,000,000 or more will be used.”</p><p>As we now know, Steinmetz was overly optimistic. At the time, there were about 1.2 million gasoline-powered cars in use in the United States, and only about 35,000 EVs. It would take until 2018 for the number of EVs (including plug-in hybrids) on U.S. roads to surpass a million. Worldwide, there are now about <a href="https://ourworldindata.org/electric-car-sales" rel="noopener noreferrer" target="_blank">60 million electric vehicles</a> in use.</p><p>But Steinmetz had his reasons. He firmly believed that electric vehicles would flourish in urban areas, where most rides involved short distances at low speed. He also thought EVs would be a boon for power companies, which were eager to drum up more business, especially at night. With 1 million electric cars each drawing about 5 kilowatt-hours on most nights, at a rate of 5 cents per kilowatt-hour, Steinmetz predicted US $75 million (about $2.5 billion today) of new business for central power stations each year—roughly $250,000 a night, over some 300 nights.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Black and white photo of a professor and students doing work on a disassembled old car." 
class="rm-shortcode" data-rm-shortcode-id="75d8b933f1fcc46f556dad18a624e199" data-rm-shortcode-name="rebelmouse-image" id="045f9" loading="lazy" src="https://spectrum.ieee.org/media-library/black-and-white-photo-of-a-professor-and-students-doing-work-on-a-disassembled-old-car.jpg?id=65005205&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">In 1971, Union College purchased Steinmetz’s car, which had been found rotting in a field, and faculty and students restored it to working condition.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Special Collections & Archives/Schaffer Library/Union College</small></p><p>Steinmetz went to work to improve the electric car. He developed a double-rotor motor integrated into the rear axle, which did away with the need for a mechanical differential or drive shaft and drastically reduced the overall weight, improving the car’s range. Dey Electric Corp. incorporated Steinmetz’s design into its electric roadster and priced it under $1,000. Unfortunately, a gasoline-powered Ford Model T cost about half as much, and the Dey roadster flopped, ending production within a year.</p><p>Undeterred, Steinmetz formed the Steinmetz Electric Motor Car Corp. in 1920 with the initial goal of bringing to market an electric truck for deliveries and light industrial use. The first truck debuted on a cold February day in 1922 with a publicity stunt: climbing the steep Miller Avenue hill in Brooklyn, N.Y. According to a report in <em>The New York Times</em>, the vehicle went up the 14.5 percent grade between Jamaica Avenue and Highland Boulevard in 51 seconds. During a second climb, it stopped a number of times to show how easily it restarted. 
The truck had a range of 84 km (52 miles).</p><p>The company planned to manufacture 1,000 trucks per year and 300 lightweight delivery cars, plus a five-passenger coupe, but it made a total of only 48 vehicles. After Steinmetz died in 1923, the company soon ceased operation.</p><p>Steinmetz was bullish not only on the electric car but on electricity in general. A <a href="https://www.nytimes.com/1923/08/20/archives/steinmetz-predicts-fourhour-workday-electricity-in-a-hundred-years.html" target="_blank"><em>New York Times</em> article</a> recorded his belief that by 2023, we would work no more than 4 hours a day, 200 days a year because electricity would have eliminated the drudgery and unpleasantness of labor. He also predicted that electricity would bring about an end to urban pollution: “Every city would be a spotless town.” With an expansion of leisure time, people would be healthier, engaging in gardening (especially growing their own food) and pursuing educational interests to become “much more intelligent and self-expressive creature[s].”</p><h2>Steinmetz’s Chosen Family</h2><p>I decided to write about Steinmetz last year, after <em>IEEE Spectrum</em> published an essay I wrote about <a href="https://spectrum.ieee.org/engineering-and-humanities" target="_self">why engineering needs the humanities</a>. The article contains this line: “In 1909, none other than Charles Proteus Steinmetz advocated for including the classics in engineering education.” I had been impressed to learn of Steinmetz’s recognition of the value of a liberal arts education. But my copy editor didn’t know who Steinmetz was or why he merited the qualifier “none other.” More people should know about this remarkable man, I decided. 
And so I went looking for a museum object associated with him, so I could include him in a <a href="https://spectrum.ieee.org/collections/past-forward/" target="_self">Past Forward</a> column.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Black and white photo of two men in suits, sitting close to each other on a porch." class="rm-shortcode" data-rm-shortcode-id="7ec840f328e2a51f366264ec666d9ee2" data-rm-shortcode-name="rebelmouse-image" id="6ef4b" loading="lazy" src="https://spectrum.ieee.org/media-library/black-and-white-photo-of-two-men-in-suits-sitting-close-to-each-other-on-a-porch.jpg?id=65005209&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Steinmetz [left] was easily the intellectual equal of Thomas Edison [right], whom he considered a friend.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Corbis/Getty Images</small></p><p>The electric car is only one avenue into Steinmetz’s life. I could instead have looked into Steinmetz solids (the geometric shapes that form when two or three identical cylinders intersect at right angles), Steinmetz curves (the edges of a Steinmetz solid), or the Steinmetz equivalent circuit (a mathematical model that describes a transformer using resistors and inductors). But none of those concepts could be easily captured in a picture-worthy object. His love of his electric car, on the other hand, was a fun and fitting entry point for this most unusual engineer.</p><p>I also saw an opportunity to highlight how Steinmetz became a family man. Steinmetz had dwarfism—he stood just 122 centimeters tall—as well as <a href="https://my.clevelandclinic.org/health/diseases/17671-kyphosis" target="_blank">kyphosis</a>, a severe curvature of the spine, as did his father and grandfather. 
He didn’t wish to pass along those traits, and so he never married or had children of his own. But that didn’t mean he didn’t want a family.</p><p>In 1903, Steinmetz’s favorite lab assistant, Joseph LeRoy Hayden, told his boss that he was getting married. Steinmetz invited the couple to dinner, and then invited them to live in his large home. They agreed to this unusual living arrangement, with Corinne Rost Hayden running the household and cooking for her husband and Steinmetz. She forced the men to set aside their work for regular family meals.</p><p>Eventually, the Hayden family expanded, welcoming Joe, Midge, and Billy. Steinmetz legally adopted the elder Hayden, thereby gaining three grandchildren. Steinmetz, whom <em>The New York Times</em> had <a href="https://timesmachine.nytimes.com/timesmachine/1922/03/03/98993187.pdf" rel="noopener noreferrer" target="_blank">named</a> a “modern Jove” who “hurls thunderbolts at will” (from a high-voltage lightning generator), delighted in entertaining the grandkids with wondrous tricks of electricity and chemistry.</p><p>In writing about the history of electrical engineering, I sometimes fall into the trap of focusing too much on the technology. But it’s just as important to recognize the people behind the technology—their personalities, their frailties, their feelings, their challenges. Steinmetz faced adversity for his political beliefs, for being an immigrant, and for his physical stature, yet none of that ever stopped him. 
In word and deed, he showed that he had a generous heart as mighty as his intellect.</p><p><em>Part of a <a href="https://spectrum.ieee.org/collections/past-forward/" target="_self">continuing series</a> looking at historical artifacts that embrace the boundless potential of technology.</em></p><p><em>An abridged version of this article appears in the March 2026 print issue as “Charles Proteus Steinmetz Loved His Electric Car.”</em></p><h3>References</h3><br/><p><em>IEEE Power & Energy Magazine </em>published Steinmetz’s pro/con list comparing electric cars to those with internal combustion engines in the September/October 2005 issue, along with a good<a href="https://ieeexplore.ieee.org/document/1507031" target="_blank"> biographical overview of Steinmetz</a> by Carl Sulzberger.</p><p>Union College published a <a href="https://www.union.edu/news/stories/201404/Shifting-gears-A-new-home-for-Steinmetz-car" target="_blank">nice story</a> about the restoration of Steinmetz’s electric car in 2014, when it received its permanent home on campus.</p><p>There are many biographies of Steinmetz, one published as early as <a href="https://babel.hathitrust.org/cgi/pt?id=mdp.39015003730945&seq=21" rel="noopener noreferrer" target="_blank">1924</a>, but I am particularly fond of <a href="https://www.amazon.com/Steinmetz-Engineer-Socialist-Hopkins-Technology/dp/0801842980" rel="noopener noreferrer" target="_blank"><em>Steinmetz: Engineer and Socialist</em></a><em> </em>by Ronald Kline (Johns Hopkins University Press, 1992).</p><p>Gilbert King’s 2011 article “<a href="https://www.smithsonianmag.com/history/charles-proteus-steinmetz-the-wizard-of-schenectady-51912022" rel="noopener noreferrer" target="_blank">Charles Proteus Steinmetz, the Wizard of Schenectady</a>” for <em>Smithsonian </em>magazine describes Steinmetz’s chosen family and includes several fun anecdotes not mentioned above.</p>]]></description><pubDate>Sat, 28 Feb 2026 14:00:02 
+0000</pubDate><guid>https://spectrum.ieee.org/charles-proteus-steinmetz</guid><category>Electric-vehicles</category><category>Past-forward</category><category>Electrification</category><category>General-electric</category><category>Typedepartments</category><category>History-of-evs</category><dc:creator>Allison Marsh</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/black-and-white-photo-of-people-posed-with-an-early-20th-century-car-one-man-leans-out-the-window-of-the-car-three-children-an.jpg?id=65005163&amp;width=980"></media:content></item><item><title>Video Friday: Robot Dogs Haul Produce From the Field</title><link>https://spectrum.ieee.org/quadruped-farming-robots</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/quadruped-robots-with-crates-on-their-backs-carry-produce-on-a-path-amidst-lush-leafy-green-crops.png?id=65095903&width=2000&height=1500&coordinates=144%2C0%2C144%2C0"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://2026.ieee-icra.org/">ICRA 2026</a>: 1–5 June 2026, VIENNA</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="vzjcvzyi2wq"><em>Our robots Lynx M20 help transport harvested crops in mountainous farmland—tackling the rural “last mile” logistics challenge.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ba185ed737063e503a9255c4a1cfd96d" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/VzjcvzYi2WQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.deeprobotics.cn/en">Deep Robotics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="eqpyvr-b7hc">Once again, I would point out that now that we are reaching peak humanoid robots doing humanoid things, we are inevitably about to see humanoid robots doing nonhumanoid things.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="07f6309039fd04258b0e4abdcfae0617" style="display:block;position:relative;padding-top:56.25%;"><iframe 
frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/eQpyvR-B7hc?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.unitree.com/">Unitree</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="g9oyrplrig8"><em>In a study, a team of researchers from the Max Planck Institute for Intelligent Systems, the University of Michigan, and Cornell University show that groups of <a data-linked-post="2650267091" href="https://spectrum.ieee.org/magnetic-microbots-to-fight-cancer" target="_blank">magnetic microrobots</a> can generate fluidic forces strong enough to rotate objects in different directions without touching them. These microrobot swarms can turn gear systems, rotate objects much larger than the robots themselves, assemble structures on their own, and even pull in or push away many small objects.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="316646fb51cfe112cec7b3c3839dec72" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/G9oYrPLRIG8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.science.org/doi/10.1126/sciadv.aea9947">Science</a> ] via [ <a href="https://is.mpg.de/en/news/magnetic-microrobot-swarms-enable-contactless-manipulation-of-objects-through-fluidic-torque">Max Planck Institute</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="klhx6qfrzes"><em>Bipedal—or two-legged—autonomous robots can be quite agile. This makes them useful for performing tasks on uneven terrain, such as carrying equipment through outdoor environments or performing maintenance on an oceangoing ship. 
However, unstable or unpredictable conditions also increase the possibility of a robot wipeout. Until now, there’s been a significant lack of research into how a robot recovers when its direction shifts—for example, a robot losing balance when a truck makes a quick turn. The team aims to fix this research gap.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="de95bdf7f06151477581b83f9cd1d146" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/klhX6qFRZEs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://news.gatech.edu/news/2026/02/18/humanoid-robots-make-confident-strides-toward-walking-stability">Georgia Tech</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="u7tsdb4nuge"><em>Robotics is about controlling energy, motion, and uncertainty in the real world.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="d6071f0de516f1571a9e1cc180c0f753" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/U7TSDb4NugE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.cs.cmu.edu/~16311/current/labs/lab01/index.html">Carnegie Mellon University</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="fftskxohrxm"><em>Delicious dinner cooked by our robot Robody. 
We’ve asked our investors to speak about why they’re along for the ride.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="e7e0e84ad77fb254d7071021f9b5677e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/FfTSKxOhrxM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.devanthro.com/">Devanthro</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="7oc55almc4u"><em>Tilt-rotor aerial robots enable omnidirectional maneuvering through thrust vectoring, but introduce significant control challenges due to the strong coupling between joint and rotor dynamics. This work investigates reinforcement learning for omnidirectional aerial motion control on overactuated tiltable quadrotors that prioritizes robustness and agility.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="4141e9b6c30d50877b06af77023c3bab" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/7oc55aLMC4U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://zwt006.github.io/posts/BeetleOmni/">Dragon Lab</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="3fzhnaoajae"><em>At the [Carnegie Mellon University] Robotic Innovation Center’s 75,000-gallon water tank, members of the TartanAUV student group worked to further develop their autonomous underwater vehicle (AUV) called Osprey. 
The team, which takes part in the annual <a data-linked-post="2650255401" href="https://spectrum.ieee.org/build-your-own-undersea-robot" target="_blank">RoboSub</a> competition sponsored by the U.S. Office of Naval Research, is comprised primarily of undergraduate engineering and robotics students.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="d138043667221480117b9978bd790ef3" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/3fzhNAoAjaE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.cmu.edu/news/stories/archives/2026/february/cmus-robotics-innovation-center-propels-research-from-deep-sea-to-space">Carnegie Mellon University</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="1one4l_pghw">Sure seems like the only person who would want a robot dog is a person who does not in fact want a dog.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="c384213c6605abee52a4b285faed7d20" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/1ONE4l_pgHw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><blockquote><em>Compact size, industrial capability. Maximum torque of 90N·m, over 4 hours of no-load runtime, IP54 rainproof design. With a 15-kg payload, range exceeds 13 km. 
Open secondary development, empowering industry applications.</em></blockquote><p>[ <a href="https://www.unitree.com/">Unitree</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="flmtgad-snu">If your robot video includes tasty baked goods it <strong><em>will</em></strong> be included in Video Friday.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bc37f01ece7f72aae9e972b63ed5f39d" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/fLMTgAD-SNU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://qbrobotics.com/">QB Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="hdktpcuzwli"><em>Astorino is a 6-axis educational robot created for practical and affordable teaching of robotics in schools and beyond. It has been created with 3D printing, so it allows for experimentation and the possible addition of parts. 
With its design and programming, it replicates the actions of industrial robots giving students the necessary skills for future work.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="9175161ed153e8deaaa1697322641517" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/HDKtpcUzwLI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://astorino.com.pl/en/">Astorino by Kawasaki</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="zdqvhaoagcu">We need more <a data-linked-post="2668536834" href="https://spectrum.ieee.org/autonomous-vehicles-great-at-straights" target="_blank">autonomous driving datasets</a> that accurately reflect how sucky driving can be a lot of the time.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="32bf318792e72e34dbc1d1ef25b3d572" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/zDQVhAOagcU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://asrl.utias.utoronto.ca/">ASRL</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="g-g0wl_tqw4">This Carnegie Mellon University Robotics Institute Seminar is by CMU’s own Victoria Webster-Wood, on “Robots as Models for Biology and Biology and Materials for Robots.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="5ad1f7c8448c962c5dc5ae76dffc81e9" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" 
src="https://www.youtube.com/embed/G-G0wL_TqW4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><blockquote><em>In the last century, it was common to envision robots as shining metal structures with rigid and halting motion. This imagery is in contrast to the fluid and organic motion of living organisms that inhabit our natural world. The adaptability, complex control, and advanced learning capabilities observed in animals are not yet fully understood, and therefore have not been fully captured by current robotic systems. Furthermore, many of the mechanical properties and control capabilities seen in animals have yet to be achieved in robotic platforms. In this talk, I will share an interdisciplinary research vision for robots as models for neuroscience and biology as materials for robots.</em></blockquote><p>[ <a href="https://www.ri.cmu.edu/event/robots-as-models-for-biology-and-biology-and-materials-for-robots/">CMU RI</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 27 Feb 2026 18:00:55 +0000</pubDate><guid>https://spectrum.ieee.org/quadruped-farming-robots</guid><category>Humanoid-robots</category><category>Video-friday</category><category>Swarm-robotics</category><category>Quadruped-robots</category><category>Farm-robots</category><category>Bipedal-robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/quadruped-robots-with-crates-on-their-backs-carry-produce-on-a-path-amidst-lush-leafy-green-crops.png?id=65095903&amp;width=980"></media:content></item><item><title>Bond Strength, Biocompatibility, and Beyond</title><link>https://content.knowledgehub.wiley.com/a-guide-to-selecting-adhesives-for-medical-device-applications/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/master-bond-logo.png?id=28859628&width=980"/><br/><br/><p>Designing a medical device? This whitepaper helps you evaluate adhesive options for biocompatibility, sterilization resistance, and manufacturability — so you can make the right material decision early.</p><p><strong>What You Will Learn</strong></p><ol><li>How to select among epoxy, silicone, cyanoacrylate, and UV/LED curable adhesives based on your device requirements</li><li>Which adhesive systems meet USP Class VI and ISO 10993-5 biocompatibility standards</li><li>How different sterilization methods, such as autoclaving, EtO, gamma, and chemical immersion, affect adhesive performance over repeated cycles</li><li>Why integrating adhesive selection early in the design process reduces costly trade-offs between performance and manufacturability</li></ol><p><span><a href="https://content.knowledgehub.wiley.com/a-guide-to-selecting-adhesives-for-medical-device-applications/" target="_blank">Download this free whitepaper now!</a></span></p>]]></description><pubDate>Fri, 27 Feb 2026 11:00:02 +0000</pubDate><guid>https://content.knowledgehub.wiley.com/a-guide-to-selecting-adhesives-for-medical-device-applications/</guid><category>Type-whitepaper</category><category>Adhesive</category><category>Medical-devices</category><category>Biocompatibility</category><dc:creator>Master Bond</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/28859628/origin.png"></media:content></item><item><title>This Startup Makes Access to Rehabilitation Facilities Easier</title><link>https://spectrum.ieee.org/carenector-health-care-startup</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/naheem-noah-speaking-to-a-small-crowd-while-giving-a-presentation.jpg?id=65026929&width=2000&height=1500&coordinates=166%2C0%2C167%2C0"/><br/><br/><p>When doctors in the United States refer patients to specialty or post-acute medical care such as physical therapy or long-term nursing care, <a href="https://medcitynews.com/2025/12/the-referral-is-broken-why-healthcares-last-bottleneck-still-lacks-innovation/" rel="noopener noreferrer" target="_blank">nearly half never complete the process</a> of finding help. Referrals stall in part because provider directories are outdated, insurance coverage is unclear, and much coordination still relies on phone calls and faxes.</p><p><a href="https://carenectorhealth.com/" rel="noopener noreferrer" target="_blank">Carenector</a>, a Denver-based startup launched in 2024, is working to improve the process with software that quickly connects patients with appropriate care providers while protecting their personal data. Instead of presenting a long list of providers, many of whom would not be a good match, the company’s referral platform uses AI to eliminate facilities that don’t meet the patient’s rehabilitation needs, don’t accept the patient’s insurance, or are not conveniently located.</p><h3>Carenector</h3><br/><p><strong>Cofounders:</strong></p><p>Naheem Noah, Aminata Diarra</p><p><strong>Founded:</strong></p><p>2024</p><p><strong>Headquarters:</strong></p><p>Denver</p><p><strong>Employees:</strong></p><p>5</p><p>The startup’s platform serves individuals seeking care as well as health care organizations and care coordination teams that manage patient referrals. The company aims to help patients while reducing the administrative burden on clinicians and discharge planners, says cofounder <a href="https://www.linkedin.com/in/naheemn/" rel="noopener noreferrer" target="_blank">Naheem Noah</a>. 
As of now, Carenector works with patients and facilities only in Colorado, but it plans to expand coverage nationwide.</p><p>Noah, a Ph.D. candidate who joined <a href="https://www.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE</a> in 2022 as a student member, encountered the referral problem firsthand after tearing an <a href="https://medlineplus.gov/ency/article/001074.htm" rel="noopener noreferrer" target="_blank">anterior cruciate ligament</a> in his knee while playing soccer. Finding a physical therapist who accepted his insurance, specialized in ACL rehabilitation, had appointments available, and was near his home required hours of phone calls and searches through inaccurate provider lists, he says.</p><p>That experience helped shape the company’s direction, but Carenector is aimed at a broader, persistent failure in U.S. health care coordination.</p><h2>A broken referral system</h2><p>The company took shape when Noah connected with his cofounder, licensed social worker <a href="https://daniels.du.edu/blog/base-camp-cohort-2025/" rel="noopener noreferrer" target="_blank">Aminata Diarra</a>, a social director at a nursing facility. Her role included discharge planning: placing patients in post-acute-care facilities that bridge the gap between hospital discharge and the patient’s ability to independently manage life’s daily activities.</p><p>For a single patient, Diarra says, that often meant she made 10 to 15 phone calls over the course of a week to find a facility that had a bed available, accepted the patient’s insurance, and could meet the care requirements.</p><p>She and Noah soon realized they were dealing with the same broken system from opposite sides. Existing research on referral lapses supported their experience. 
Primary care physicians often send referral notes—analogous to prescriptions—that list the patient’s medical history and describe the needed treatment.</p><p>Noah discovered that only about one-third of the notes are transmitted in a way that allows providers at nursing homes and rehab facilities to access the information.</p><p>Physicians often post their suggestions for ongoing treatment in sections of a patient’s electronic health records, but providers at post-acute facilities don’t have access to those because of medical privacy laws. What gets shared is a pared-down document that omits progress notes and discharge summaries.</p><h2>Engineering a research-driven startup</h2><p>Noah is currently a researcher in the <a href="https://www.du.edu/" rel="noopener noreferrer" target="_blank">University of Denver</a> computer science department, where his academic work focuses on privacy and security in digital systems.</p><p>He is Carenector’s chief executive and technical lead, overseeing the system’s design, making technical decisions, and meeting with investors.</p><p>Although the startup is separate from his dissertation research, the company reflects his broader interest in building secure systems that work in real-world conditions.</p><p>Because he started the company while a student, he has access to university resources that many early-stage startups lack. He has participated in the university’s <a href="https://crimsonconnect.du.edu/web/rsvp_boot?id=2274507" rel="noopener noreferrer" target="_blank">BaseCamp accelerator</a> and received mentorship and business planning support.</p><p>The Carenector team was assembled with future growth and health care compliance in mind. 
The group includes professionals from regulatory, legal, and data engineering fields.</p><h2>Replacing phone calls with digital matching</h2><p>By using standardized digital information shared among medical facilities, Carenector eliminates the need for staff to make phone calls or send faxes. At the core of the platform is a structured database that links care providers—including post-acute, specialty, and rehabilitation facilities—with insurance plan criteria and facility attributes such as accessibility and service capabilities.</p><p>One of the biggest challenges for Noah is getting accurate data on which services facilities offer, which insurance they accept, and whether a patient’s insurance plan covers the treatment proposed by the referring physician.</p><p>“Health care information in the United States is not centralized,” he says, “and insurance provider directories are often wrong or out of date.”</p><p>To address that, Carenector incorporates publicly available datasets from the <a href="https://www.cms.gov/" rel="noopener noreferrer" target="_blank">U.S. Centers for Medicare & Medicaid Services</a> (CMS), including plan attributes, service areas, quality ratings, and issuer-level transparency data. These public-use files provide plan-level and provider-level information that help standardize coverage criteria, geographic availability, and performance indicators. Carenector integrates this structured public data with facility-supplied information and referral outcome analytics to improve matching accuracy.</p><p class="pull-quote">“By replacing manual coordination with clear rules, accurate data, and built-in privacy protections, we hope to make accessing care a routine step in recovery—not another obstacle.”</p><p>This structured data helps Carenector evaluate plan criteria, provider capabilities, geographic availability, and quality indicators to support referral decision-making. 
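The article describes Carenector's matching only at a high level, and the company has not published its algorithm. As a rough illustration of the filter-then-rank behavior described above, here is a hypothetical Python sketch; the `Facility` fields, the ZIP-prefix locality check, and the smoothed completion-rate score are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Facility:
    name: str
    services: set        # e.g. {"physical-therapy"}
    insurers: set        # insurance plans the facility accepts
    zip_code: str
    completed: int = 0   # referrals the facility completed
    stalled: int = 0     # referrals that went uncompleted

def match(facilities, service, insurer, zip_prefix):
    """Hard-filter facilities that cannot serve the referral at all,
    then rank the rest by historical completion rate (Laplace-smoothed
    so facilities with no history are not pinned to zero)."""
    eligible = [
        f for f in facilities
        if service in f.services
        and insurer in f.insurers
        and f.zip_code.startswith(zip_prefix)  # crude locality proxy
    ]
    def score(f):
        return (f.completed + 1) / (f.completed + f.stalled + 2)
    return sorted(eligible, key=score, reverse=True)
```

Under a scheme like this, a facility whose referrals repeatedly stall drifts down the ranking automatically, mirroring the outcome-feedback loop the article describes, while ineligible facilities never appear at all.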
The company standardizes and organizes the information within its own system architecture and uses mapping and geolocation APIs to integrate location-based filtering and workflow functionality for patients, providers, and care coordinators.</p><p>Because CMS data is updated periodically, Carenector supplements it with additional structured data sources and referral outcome analytics to better understand plan acceptance patterns. Room availability information comes directly from participating facilities, which are responsible for updating their status within Carenector’s system.</p><p>Whether referrals succeed or fail provides critical feedback, Noah says. When referrals to specific facilities repeatedly go uncompleted—meaning the patient does not receive the recommended care from the provider—Carenector’s AI-driven matching algorithm adjusts to that pattern and reduces the likelihood of that facility being considered for similar cases. Facilities that consistently accept and complete referrals are ranked preferentially.</p><h2>Apps for patients and facilities</h2><p>The company has poured its data management wizardry and AI smarts into apps for patients and clinicians.</p><p>The patient app helps users locate appropriate health care services at no cost. Users can search for care by service type, <a href="https://faq.usps.com/s/article/ZIP-Code-The-Basics" target="_blank">ZIP code</a>, or insurance company without creating an account. They receive a list of matching facilities that can be shared via clipboard or sent by email to themselves or family members.</p><p>In the facility app, clinicians enter the diagnosis, rehabilitation needs, equipment requirements, insurance type, and location without sharing personally identifiable patient information. Organizations can communicate using secure messages that disappear after a set period. 
Files and images are shown only once and deleted after viewing.</p><p>Facilities that use the app pay Carenector a flat fee for each successful referral. The patient app is free.</p><p>The startup does not sell or share data with third parties, Noah says.</p><p>“<a href="https://spectrum.ieee.org/us-states-selling-hospital-data-that-puts-patients-privacy-at-risk" target="_self">Privacy</a> is a central design requirement for Carenector’s system, not a last-minute add-on to the finished product,” he says.</p><p>The company minimizes the collection of personal data to avoid becoming a data repository. Although its role is limited to coordinating referrals, Carenector is working with independent security auditors to validate that its operational and data-handling practices align with <a href="https://www.google.com/search?q=Health+Insurance+Portability+and+Accountability+Act&rlz=1C5GCCM_en&oq=HIPAA&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIHCAEQABiABDIKCAIQABixAxiABDIHCAMQABiABDIHCAQQABiABDIKCAUQABixAxiABDIHCAYQABiABDIHCAcQABiABDIHCAgQABiABDIHCAkQABiPAtIBBzQ5MGowajSoAgCwAgA&sourceid=chrome&ie=UTF-8&ved=2ahUKEwjfuPPY77mSAxWaGFkFHQuoEKMQgK4QegQIARAB" rel="noopener noreferrer" target="_blank">Health Insurance Portability and Accountability Act</a> (HIPAA) requirements. The law sets standards meant to protect sensitive patient information from unauthorized disclosure.</p><p>Noah says he is confident that Carenector will pass those audits because the app is designed to reduce the collection and exposure of sensitive information wherever possible.</p><h2>Business model and measured expansion</h2><p>Carenector’s growth plan, Noah says, is strategic. 
Rather than scaling rapidly, he says, he is looking to enter one region at a time, incorporating feedback from each local deployment before expanding the company further.</p><p>He envisions that in five years, Carenector will serve as a core piece of health care referral infrastructure—embedded in the workflows of hospitals, post-acute facilities, insurers, employers, and major <a href="https://spectrum.ieee.org/tag/electronic-medical-records" target="_self">electronic health record</a> systems such as <a href="https://www.ehrinpractice.com/epic-ehr-software-profile-119.html?campaignid=904189137&adgroupid=45242491815&creative=674018285024&keyword=epic%20ehr&device=c&matchtype=e&campaignid=904189137&adgroupid=45242491815&creative=674018285024&keyword=epic%20ehr&gad_source=1&gad_campaignid=904189137&gbraid=0AAAAADl0cfaE2nUYqhzYbJvWbivwrLt0Y&gclid=Cj0KCQiAtfXMBhDzARIsAJ0jp3CwHIKU3juXZPWICzTaHo9-klTUgKu3TNeF1zI6nLY-9hB7C0JGVGwaAqt5EALw_wcB" rel="noopener noreferrer" target="_blank">Epic</a> and <a href="https://cernerhealth.com/" rel="noopener noreferrer" target="_blank">Cerner</a>—while also increasing visibility for care facilities in underserved and remote areas. The plan, he says, is to support thousands of facility recommendations per day, compared with the approximately 200 daily facility recommendations it currently generates. 
Noah also looks forward to the broader adoption of APIs that allow care coordination and facility discovery to occur directly within clinical workflows.</p><p>He says he sees his startup as a way to reduce unnecessary stress from moments when patients are vulnerable.</p><p>“By replacing manual coordination with clear rules, accurate data, and built-in privacy protections,” he says, “we hope to make accessing care a routine step in recovery—not another obstacle.”</p>]]></description><pubDate>Thu, 26 Feb 2026 19:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/carenector-health-care-startup</guid><category>Ieee-member-news</category><category>Carenector</category><category>Health-care</category><category>Ai-app</category><category>Careers</category><category>Startup</category><category>Type-ti</category><dc:creator>Willie D. Jones</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/naheem-noah-speaking-to-a-small-crowd-while-giving-a-presentation.jpg?id=65026929&amp;width=980"></media:content></item><item><title>New Path to Battery-Grade Lithium Uses Electrochemistry</title><link>https://spectrum.ieee.org/mangrove-lithium-refining-ev-bottleneck</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-young-adult-male-in-a-lab-coat-holds-a-box-in-an-industrial-yet-scientific-warehouse.jpg?id=65024666&width=2000&height=1500&coordinates=416%2C0%2C417%2C0"/><br/><br/><p><span>As electric vehicles roll off assembly lines, a bottleneck sits upstream: lithium refinement. T</span><span>urning raw lithium into the compounds needed for batteries is expensive, messy, and energy intensive, but </span><span><a href="https://www.mangrovelithium.com/" target="_blank">Mangrove Lithium</a>, a Vancouver-based startup, has a better way. The company has developed an electrochemical refining process that converts lithium feedstocks into battery-grade lithium hydroxide.</span></p><p>Converting raw lithium to lithium hydroxide typically requires roasting spodumene—a mineral from which lithium is derived—at high temperatures, and then leaching it with acid to convert it to lithium sulfate. That compound <span>then needs to be converted to lithium hydroxide. “It’s a thermochemical reaction that uses heavy amounts of reagent chemicals, and generates a sodium sulfate waste stream,” says </span><a href="https://www.linkedin.com/in/rpday/" target="_blank">Ryan Day</a><span>, Mangrove Lithium’s director of operations.</span></p><p>Further tightening the bottleneck, the majority of the world’s lithium—<a href="https://www.iea.org/reports/energy-technology-perspectives-2023/clean-energy-supply-chains-vulnerabilities" target="_blank">60 to 70 percent</a>—is now refined in China, and export restrictions and geopolitical tensions have disrupted <a href="https://spectrum.ieee.org/evs-to-drive-a-lithium-supply-crunch" target="_blank">supply chains</a> in recent years. Shipping raw lithium overseas to be refined also adds to batteries’ total carbon footprint. 
A new model for lithium refining could reshape not just the <a href="https://spectrum.ieee.org/the-ev-transition-explained-2658463682" target="_blank">economics of electric vehicles</a> but also the geography and environmental footprint of the global battery supply chain. </p><p>Mangrove’s demo plant in British Columbia is scheduled to start production in the second half of 2026. </p><h2>How Does Mangrove’s Refinement Work?</h2><p>Mangrove replaces the conventional, resource-intensive reaction with a process that uses electricity, water, and oxygen. The company flows brine through an electrolyzer, an electrochemical cell consisting of a metal box with three compartments between the cathode and anode. The compartments are separated by ion exchange membranes, semipermeable barriers that allow only certain ions to pass. Lithium sulfate flows through <span>the central compartment, and the cell’s electric field splits the salt apart. “Lithium, which is a positive ion, will move across a membrane toward the cathode,” says Day. There, “we are reacting oxygen and water to create hydroxide ions, which join with the lithium from the salt to make lithium hydroxide.”</span></p><p>Meanwhile, on the opposite side of the cell, the sulfate—a negative ion—moves toward the anode, where water is being split to produce protons and oxygen gas. The protons combine with sulfate ions to make sulfuric acid. </p><p>“You run that process continuously, and over time you’re generating lithium hydroxide, which you can send to a crystallizer,” Day says. “There’s no significant waste product, and all you’re feeding in is brine, water, oxygen, and electricity.” The sulfuric acid is recovered and can be circulated back upstream to leach more brine from the raw feed material. 
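Based on Day’s description, the chemistry at the two electrodes plausibly follows the textbook oxygen-reduction and water-oxidation pairs. This is a reading of his account, not reactions published by Mangrove:

```latex
% Cathode: oxygen reduction produces hydroxide ions
\mathrm{O_2 + 2\,H_2O + 4\,e^- \rightarrow 4\,OH^-}
% Anode: water oxidation produces protons and oxygen
\mathrm{2\,H_2O \rightarrow O_2 + 4\,H^+ + 4\,e^-}
% Net effect on the lithium sulfate feed
\mathrm{Li_2SO_4 + 2\,H_2O \rightarrow 2\,LiOH + H_2SO_4}
```

Lithium ions crossing toward the cathode meet the hydroxide to form lithium hydroxide, while sulfate meeting protons at the anode regenerates the sulfuric acid that, per Day, is recycled upstream.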
</p><p>In general, keeping the ion exchange membrane intact is one of the biggest challenges for scaling this type of process, says <a href="https://www.eme.psu.edu/directory/feifei-shi" target="_blank">Feifei Shi</a>, assistant professor of energy engineering at Penn State. Shi, who researches electrochemical-based refinement methods, notes that the approach can more easily activate the necessary reactions, but faces limitations for large-scale applications. </p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A young adult male in a lab coat using a touch-screen interface in an industrial setting." class="rm-shortcode" data-rm-shortcode-id="be4d574d9cfa28a23b65e833bc77fd11" data-rm-shortcode-name="rebelmouse-image" id="826a8" loading="lazy" src="https://spectrum.ieee.org/media-library/a-young-adult-male-in-a-lab-coat-using-a-touch-screen-interface-in-an-industrial-setting.jpg?id=65024669&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The electrochemical process separates out lithium by passing it through three compartments separated by semipermeable barriers. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Mangrove Lithium</small></p><h2>Mangrove’s Oxygen-Based Cathode</h2><p>Mangrove’s key enabling innovation is an oxygen-based cathode. “Driving the reaction requires detailed engineering,” says Day. The company designed an electrode that lets a gas and a liquid react together, using just enough water to make the oxygen reaction work—without adding so much that it floods the system and creates hydrogen gas instead.</p><p>The electrodes are made with a proprietary process that combines several dedicated layers that allow for a balanced flow of water and oxygen to access the active catalyst sites. This design favors the oxygen-reduction reaction for over 99.5 percent <span>of the total cathode activity. 
It also reduces the amount of electricity needed to drive the process, because “oxygen reduction requires less voltage than water reduction,” Day says. </span><span>Demand for battery minerals is surging beyond just lithium, with automakers competing for supplies of nickel, cobalt, graphite, and manganese. Simultaneously, utilities are deploying grid-scale batteries that use the same materials in even larger volumes. Refining capacity—not just mining—could become the critical choke point in this buildout, because battery makers require highly specified, ultrapure compounds.</span><br/></p><p>While Mangrove is initially targeting lithium, its electrochemical architecture is not inherently lithium-specific and could be adapted to other battery materials that face similar purification bottlenecks. Nickel and cobalt sulfate production, for example, still relies on multistep precipitation and solvent-extraction processes that generate significant waste and require large reagent inputs. “It would work immediately in application to other alkali-metal salts,” Day says. </p><p>Mangrove’s demo plant in British Columbia will make 1,000 tonnes per year of lithium hydroxide. If the company can scale its technology as it hopes, it could begin to reshape not just the battery supply chain but also the geopolitics of the energy transition. 
</p>]]></description><pubDate>Thu, 26 Feb 2026 17:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/mangrove-lithium-refining-ev-bottleneck</guid><category>Electric-vehicles</category><category>Lithium</category><category>Electrochemistry</category><category>Lithium-battery</category><category>Ev-batteries</category><dc:creator>Vanessa Bates Ramirez</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-young-adult-male-in-a-lab-coat-holds-a-box-in-an-industrial-yet-scientific-warehouse.jpg?id=65024666&amp;width=980"></media:content></item><item><title>From Headsets to Hearing Aids</title><link>https://spectrum.ieee.org/bluetooth-low-energy-audio</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/electronic-audio-equipment-with-connected-cables-and-large-black-headphones-on-a-mannequin-head.png?id=63280000&width=2000&height=1500&coordinates=250%2C0%2C250%2C0"/><br/><br/><p><em>This is a sponsored article brought to you by <a href="https://www.ap.com/?utm_source=ieee&utm_medium=sponsored_article&utm_campaign=bt5_q12026&utm_content=byline" target="_blank">Audio Precision</a>.</em></p><p>Bluetooth started as a simple wireless connection between a phone and a headset. Since its inception, it has become the invisible scaffolding for music, calls, gaming, and hearing assistance across consumer and professional devices alike. Bluetooth’s evolution to support more use cases has been driven not by a single breakthrough but by a steady accumulation of radio innovations, codecs, transport schemes, and power management strategies that together enhance the user experience with wireless audio. Today, a new architectural baseline—Bluetooth Low Energy (LE) Audio—promises low-power, high-quality, and scalable audio delivery to open up the standard for an even wider range of applications [1][2].</p><h2>Evolution of Bluetooth Radio Technologies</h2><p>The original Basic Rate (BR) radio introduced with Bluetooth 1.0 in 1999 used Gaussian frequency-shift keying (GFSK) at 1 Msym/s, hopping through 79 channels in the 2.4 GHz band with alternating transmission directions in a tight time-division duplex rhythm. The short-range robustness and reliability afforded by this technology helped it achieve performance on par with traditional cable-based devices.</p><p>In 2003, the Advanced Audio Distribution Profile (A2DP) arrived as the enabling standard for stereo audio streaming over Bluetooth Classic, marking the technology’s expansion beyond voice into music playback. A2DP uses the Audio/Video Distribution Transport Protocol (AVDTP) for stream management and mandates the Sub-Band Codec (SBC) as its baseline audio compression format. 
The SBC codec employs 4- or 8-band analysis/synthesis filter banks with adaptive bit allocation, spanning bitrates from 128 to 345 kbps for stereo content. Embedded DSP work showed how to optimize SBC implementation—Weighted Overlap Add (WOLA) filter banks, fixed-point pipelines, and real-time decoding that is audibly indistinguishable from floating point reference implementations while consuming fewer MIPS and milliwatts [3].</p><p>In 2004, Bluetooth 2.0 introduced Enhanced Data Rate (EDR), which moved payloads to π/4 DQPSK or 8 DPSK modulation to boost gross throughput to 2–3 Mb/s, while retaining GFSK for packet headers. This innovation boosted stereo streaming quality and adoption during the decade.</p><p>Around 2010, the Bluetooth Low Energy (BLE) 1M PHY was introduced with Bluetooth 4.0. This new radio technology continued to use GFSK but was tuned for low duty cycles and intermittent bursts. This fundamental difference from BR/EDR (Basic Rate/Enhanced Data Rate) led to common usage of the term “Bluetooth Classic” for the BR/EDR lineage, to distinguish it from BLE.</p><h2>Isochronous Transport Architecture</h2><p>In late 2016, Bluetooth 5.0 introduced the LE 2M PHY, doubling the symbol rate to 2 Msym/s. Given a healthy link margin, halving a packet’s airtime reduces collision exposure and lowers the energy spent per delivered bit. By 2020, Bluetooth 5.2—the basis of Bluetooth LE Audio—radically shifted the focus from continuous streaming to a transport designed explicitly around deadlines. LE (Low Energy) Audio leverages the existing LE 1M and LE 2M PHYs but carries audio over isochronous channels—slots with timing commitments. The isochronous channel architecture comes in two forms. Connected Isochronous Streams (CIS) are unicast flows whose parameters (intervals, subevents, retransmissions) can be tuned to meet frame deadlines with bounded jitter, enabling the radio to sleep predictably between bursts while the application knows precisely when a frame will arrive. 
A systematic review of BLE performance corroborates that throughput and latency in the real world are bounded as much by connection interval, event length, and retransmissions as by the raw symbol rate; under the right parameters, faster PHYs reduce radio active time and improve energy efficiency, while coded long-range modes trade airtime for robustness in harsher channels [1].</p><p>Broadcast Isochronous Streams (BIS)—commercially branded as Auracast—extend that scheduling to one-to-many transmissions, enabling connectionless audio delivery to unlimited receivers [2][7].</p><p>This architectural shift away from continuous streams requires careful selection of intervals, packetization, and codec framing, along with appropriate models to determine parameters that meet deadlines without wasting airtime. Markov chain analyses of CIS—validated via simulation—translate developer choices (intervals, subevents, retransmission counts) into quantitative predictions for packet loss rate (PLR), backlog, delay, throughput, and average power consumption [7].</p><h2>The LC3 Codec Advantage</h2><p>LE Audio’s Low Complexity Communication Codec (LC3) fundamentally shifts the bitrate-quality-complexity balance. Peer-reviewed listening tests across speech and music demonstrate that LC3 delivers superior perceived quality compared with SBC and mSBC at roughly half the bitrate; it also provides robust packet loss concealment and flexible frame sizes, including low-latency modes that make the encoding delay a smaller slice of the end‑to-end budget [2]. 
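A back-of-envelope estimate shows how bitrate and frame length translate into airtime and radio duty cycle. The 96 kb/s stream, 10 ms frame interval, and zero-overhead idealization below are illustrative assumptions, not LE Audio requirements:

```python
# Rough airtime estimate for one codec frame over the LE 2M PHY.
# The bitrate, frame interval, and zero protocol overhead are
# illustrative assumptions for a single unicast stream.
BITRATE_BPS = 96_000        # assumed LC3 stream bitrate
FRAME_S = 0.010             # one codec frame per 10 ms interval
PHY_BPS = 2_000_000         # LE 2M PHY raw rate

payload_bits = BITRATE_BPS * FRAME_S    # bits carried per frame
airtime_s = payload_bits / PHY_BPS      # idealized time on air per frame
duty_cycle = airtime_s / FRAME_S        # fraction of interval radio is active

print(f"{payload_bits/8:.0f} B/frame, {airtime_s*1e6:.0f} us airtime, "
      f"duty cycle {duty_cycle:.1%}")
```

Under these assumptions a frame occupies the air for well under a millisecond of each 10 ms interval, which is why tightly scheduled isochronous slots let the radio sleep most of the time; real links add header, retransmission, and inter-frame overhead on top.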
The benefits are practical: lower bitrate shrinks airtime, which reduces collision risk; shorter frames pair cleanly with CIS scheduling so deadlines are easier to meet; the codec’s computational footprint is modest enough for miniature devices [2].</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" rel="float: left;" style="float: left;"> <img alt='AP logo with blue swoosh, text reads "An Axiometrics Solutions Brand."' class="rm-shortcode" data-rm-shortcode-id="cb909d3eec20c4f191a479fe8407f82f" data-rm-shortcode-name="rebelmouse-image" id="51199" loading="lazy" src="https://spectrum.ieee.org/media-library/ap-logo-with-blue-swoosh-text-reads-an-axiometrics-solutions-brand.png?id=63280879&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Audio Precision provides high-performance audio analyzers, accessories, and applications that have helped engineers worldwide design, validate, characterize, and manufacture audio products for over 40 years. </small></p><h2>Hearing Aids: Power-Constrained Wireless Audio</h2><p>Modern hearing devices are a complex assembly of multiple microphones, digital signal processors, and miniature power sources. Except for Completely-in-Canal (CIC) and Invisible-in-Canal (IIC) designs, which are so small they fit entirely within the ear canal, most hearing aids incorporate two or more microphones to support directional processing, beamforming, and noise reduction. Audio output is provided by a single electro-acoustic transducer. 
The compact form factor severely limits battery capacity, making energy efficiency critical.</p><p>Compared with Bluetooth Classic (A2DP/HFP), LE Audio improves energy efficiency through three broad mechanisms: the LC3 codec achieves equivalent perceived audio quality at significantly lower bitrates than the SBC codec used in Bluetooth Classic; the LE 1M and 2M PHYs reduce on-air time per packet relative to BR/EDR; and Connected Isochronous Streams (CIS) enable precise scheduling, allowing the radio to sleep between transmissions, whereas BR/EDR audio requires longer active radio periods.</p><p>BLE‑compliant wake‑up receivers (WuRx) monitor the air while drawing only microwatts to nanowatts and trigger the main radio upon detecting packet preambles. Reported designs demonstrate sensitivity to extremely weak radio signals (down to −80 dBm), with within‑bit duty cycling that trades latency, ranging from hundreds of microseconds to seconds, for lower power [4]. Sleep scheduling techniques primarily apply heuristics for periodic check‑ins, event‑driven wake-ups, clustering, and time division to stretch lifetime while meeting QoS targets [5][6].</p><h2>From True Wireless Stereo to Coordinated Sets</h2><p>Bluetooth Classic’s A2DP supports only a single audio stream. In Bluetooth Classic’s True Wireless Stereo (TWS) devices, one earbud acts as the primary, receiving the stereo stream from the phone and relaying audio to the secondary earbud—a forwarding or relay architecture. The additional transmission hop adds latency to the secondary earbud, while increasing power consumption in the primary.</p><p>LE Audio eliminates this limitation entirely. The technology’s dual CIS capability lets the phone send synchronized left and right streams directly to both earbuds. This architectural shift enables independent CIS connections from the phone to the left and right earbuds or hearing aids, enabling synchronized stereo delivery without relaying.</p><p>Discovery and pairing have evolved to match multi‑device use. 
The Coordinated Set Identification Service (CSIS) allows two earbuds—or two hearing aids—to be discovered and managed as a coordinated set rather than independently, with resolvable identifiers and set‑level locks. While peer‑reviewed empirical literature on CSIS is thin, timing and carrier synchronization theory is mature: clock‑offset estimation, jitter control, phase‑locked loops, buffer alignment, and recovery strategies hold binaural timing within tens of milliseconds for lip‑sync and spatial imaging [9].</p><h2>Gaming Headsets: Low Latency With Bidirectional Stereo</h2><p>Gaming represents a demanding stress test for wireless audio. Bluetooth Classic’s Headset Profile (HSP) and Hands-Free Profile (HFP) support bidirectional audio for voice communication but are fundamentally limited: they transmit only in mono with a maximum sampling rate of 16 kHz, restricting both spatial audio quality and voice fidelity.</p><p><span>LE Audio Unicast Voice transforms this scenario by supporting stereo audio with sampling rates up to 32 kHz, significantly improving spatial audio and speech quality for gaming while maintaining voice communication with other players. End‑to‑end latency often must stay under a few tens of milliseconds for responsive play and coherent spatial sound. LC3’s shorter frames and lower bitrates shrink codec delay; tuned CIS parameters preserve deadlines while limiting retransmissions to useful values; beamforming improves capture quality for bidirectional voice without ballooning computational cost [2][7].</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Close-up of smartphone screen showing Bluetooth icon in blue with other icons around it." 
class="rm-shortcode" data-rm-shortcode-id="5f8ff32ae91d2f1eb32bfd0764a647ae" data-rm-shortcode-name="rebelmouse-image" id="1a8d7" loading="lazy" src="https://spectrum.ieee.org/media-library/close-up-of-smartphone-screen-showing-bluetooth-icon-in-blue-with-other-icons-around-it.png?id=63280594&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Audio Precision’s new Bluetooth® 5 module provides an interface to audio devices using the latest version of the Bluetooth specification, including LE Audio devices utilizing Unicast and Auracast™. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Adobe Stock</small></p><h2>Public Broadcast Audio: Auracast</h2><p>Bluetooth Classic supports only one active audio connection and typically provides a range of approximately 10 meters, making it fundamentally unsuitable for broadcast scenarios such as lecture halls, churches, gyms, and airports.</p><p>LE Audio introduces the Broadcast Isochronous Stream (BIS), commercially branded as Auracast, enabling true one-to-many audio transmission. Multiple hearing aids, headphones, and earbuds can receive the same broadcast, which may be public (e.g., airport announcements) or private (encrypted, non-discoverable, optional password protection). Typical Auracast ranges extend up to 30 meters indoors and 100 meters outdoors, depending on environment and configuration.</p><p>BIS’s connectionless nature scales easily to unlimited receivers without pairing overhead; isochronous delivery tolerates packet loss well through forward error correction and interleaving; and the unidirectional transmission eliminates return traffic, reducing radio congestion. 
Assistive listening studies report that bypassing room acoustics and delivering audio directly can improve signal‑to‑noise ratios by 15–20 dB, making announcements comprehensible and lectures clearer [8].</p><h2>Ensuring It Sounds Good In, On, or Over the Listener’s Ear</h2><p>LE Audio delivers the music or voice signal more efficiently than its predecessor, Bluetooth Classic. Audio engineers still need to verify their devices’ audio performance as experienced by the end user.</p><p>The listener’s pinna (the external part of the ear) and ear canal are a critical part of the playback system. For example, the low-frequency response and the effectiveness of active noise-cancellation are highly dependent on the seal between the device and the listener’s ear canal. Similarly, on-ear and over-ear headphones interact with the listener’s pinnas.</p><p>Anthropomorphic test fixtures—most notably <a href="https://www.grasacoustics.com/products/head-torso-simulators-kemar?utm_source=ieee&utm_medium=sponsored_article&utm_campaign=bt5_q12026&utm_content=kemar" target="_blank">GRAS KEMAR</a> (Knowles Electronics Manikin for Acoustic Research) head and torso simulators—incorporate soft, deformable anthropomorphic pinnas that replicate realistic insertion and sealing conditions. These allow accurate replication of insertion depth, sealing, low-frequency response, and ANC performance [10][12].</p><p>Gaming headsets both receive and send audio. Just like music headphones, gaming headset testing benefits from fixtures with a human-like pinna to ensure repeatable measurement of ear-pad interaction. The headset’s microphone can be either a traditional boom microphone positioned close to the mouth or an array of microphones located farther away on the ear cups incorporating beamforming to isolate the wearer’s voice from any background noise. 
Test fixtures use an artificial mouth and a microphone positioned at the Mouth Reference Point (MRP) according to ITU-T standards to evaluate microphone performance under realistic speech and background noise conditions [10].</p><p><span><span>For </span>testing<span> devices intended as broadcast receivers, an integrated test system with Auracast broadcast capability—like the </span><a href="https://www.ap.com/analyzers-accessories/interfaces-modules/bluetooth-5-le-audio-module?utm_source=ieee&utm_medium=sponsored_article&utm_campaign=bt5_q12026&utm_content=bt5_module_1" target="_blank">Audio Precision Bluetooth 5 module</a><span>—proves invaluable.</span></span></p><h2>Conclusion</h2><p>Bluetooth audio is no longer defined by a single radio or a single profile. It is defined by a timed pipeline—a codec that makes better sound with fewer bits, a transport that guarantees when those bits arrive, a radio that can sleep most of the time, and front‑end processing that gives the codec an easier job.</p><p>Hearing aids illustrate the payoff: arrays and beamformers improve intelligibility first; LC3 compresses with low delay; CIS schedules delivery; the radio sleeps; batteries last. Enhancements in other applications, such as gaming and public broadcast, further strengthen the case for adoption of this cutting-edge technology.</p><p><span><span>While Bluetooth audio began as a low-bandwidth, mono voice technology over Basic Rate (BR) radio in 1999, more than 25 years of evolution has produced a fundamental architectural shift. LE Audio replaces continuous point-to-point streams with scheduled, low-power, scalable audio delivery, enabling new classes of devices and use cases. 
The standards are ready, and audio test systems like </span><a href="https://www.ap.com/analyzers-accessories/interfaces-modules/bluetooth-5-le-audio-module?utm_source=ieee&utm_medium=sponsored_article&utm_campaign=bt5_q12026&utm_content=bt5_module_2" target="_blank">Audio Precision’s Bluetooth 5 module</a><span> are updated to incorporate the new transmission technology; the rest is execution—deploying LE Audio broadly so audio becomes instant, clear, and inclusive [2][7].</span></span></p><h3>References</h3><p>[1] Tosi, J., Taffoni, F., Santacatterina, M., Sannino, R., & Formica, D. (2017). Performance evaluation of Bluetooth Low Energy: A systematic review. <em>Sensors</em>, <em>17</em>(12), Article 2898. <a href="https://doi.org/10.3390/s17122898" target="_blank">https://doi.org/10.3390/s17122898</a></p><p>[2] Schnell, M., Riedl, M., Löllmann, H., & Multrus, M. (2021). LC3 and LC3plus: The new audio transmission standards for wireless communication. <em>Proceedings of the AES 150th Convention</em>, Online.</p><p>[3] Hermann, D., Herre, J., & Teichmann, R. (2004). Low-power implementation of the Bluetooth subband audio codec. <em>Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)</em>, Montreal, QC, Canada.</p><p>[4] Abdelhamid, M. R., Chen, R., Cho, J., Chandrakasan, A. P., & Wentzloff, D. D. (2018). A −80 dBm BLE-compliant, FSK wake-up receiver with system and within-bit duty-cycling for scalable power and latency. <em>Proceedings of the IEEE Custom Integrated Circuits Conference (CICC)</em>, San Diego, CA, USA.</p><p>[5] Mutar, M. S., Mohammed, A. H., & Abdulkareem, M. B. (2024). A survey of sleep scheduling techniques in wireless sensor networks for maximizing energy efficiency. <em>AIP Conference Proceedings</em>.</p><p>[6] Mikhaylov, K., & Karvonen, H. (2020). Wake-up radio enabled BLE wearables: Empirical and analytical evaluation of energy efficiency. 
<em>Proceedings of the IEEE International Symposium on Medical Information and Communication Technology (ISMICT)</em>.</p><p>[7] Yan, Z., Xu, H., & Shen, Z. (2024). Modeling and analysis of the performance for CIS-based Bluetooth LE Audio [Preprint].</p><p>[8] Kaufmann, T. B., Weller, T., Stiefelhagen, R., & Adiloglu, K. (2023). Requirements for mass adoption of assistive listening technology by the general public. <em>arXiv</em>. <a href="https://arxiv.org/abs/2303.02523" target="_blank">https://arxiv.org/abs/2303.02523</a></p><p>[9] Nasir, A. A., Durrani, S., Mehrpouyan, H., Blostein, S. D., & Kennedy, R. A. (2015). Timing and carrier synchronization in wireless communication systems: A survey and classification of research in the last five years. <em>arXiv</em>. <a href="https://arxiv.org/abs/1507.02032" target="_blank">https://arxiv.org/abs/1507.02032</a></p><p>[10] Okorn, E., & Wulf-Andersen, P. (2019). Acoustic test fixtures: From KEMAR and beyond! <em>The Journal of the Acoustical Society of America</em>, <em>146</em>(4), 2815. <a href="https://doi.org/10.1121/1.5136656" target="_blank">https://doi.org/10.1121/1.5136656</a></p><p>[11] An analytical model of Bluetooth performance considering physical and link-layer effects. (2021). <em>IEEE Xplore</em>.</p><p><span><span></span><span>[12] IEC/ITU acoustic standards literature for headphone and earbud testing. (n.d.). Indexed in </span><em>The Journal of the Acoustical Society of America</em><span> and </span><em>AIP Conference Proceedings</em><span>.</span></span></p><p><span><span><em>Disclosure: AI tools were used by Wiley, which produced this sponsored article, to skim through research literature for technical insights on the evolution and state of the art of Bluetooth technology. 
AI was also used to polish the text for conciseness and technical accuracy.</em></span></span></p>]]></description><pubDate>Thu, 26 Feb 2026 14:23:17 +0000</pubDate><guid>https://spectrum.ieee.org/bluetooth-low-energy-audio</guid><category>Bluetooth</category><category>Hearing-aids</category><category>Audio-electronics</category><dc:creator>Wiley</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/electronic-audio-equipment-with-connected-cables-and-large-black-headphones-on-a-mannequin-head.png?id=63280000&amp;width=980"></media:content></item><item><title>How Stupid Would It Be to Put Data Centers in Space?</title><link>https://spectrum.ieee.org/orbital-data-centers</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/diagram-of-a-spatial-server-farm-with-solar-panels-and-antennas-earth-indicated-in-orbit-view.png?id=64967042&width=2000&height=1500&coordinates=0%2C112%2C0%2C112"/><br/><br/><p>What’s the difference between a stupid idea and a brilliant one? Sometimes, it just comes down to resources. Practically unlimited funds, like limitless thrust, can get even a mad idea off the ground. </p><p>And so it might be for the concept of putting AI data centers in orbit. In a rare moment of unalloyed agreement, some of the richest and most powerful men in technology are staunchly backing the idea. The group includes <a href="https://spectrum.ieee.org/u/elon-musk" target="_self">Elon Musk</a>, <a href="https://spectrum.ieee.org/tag/jeff-bezos" target="_self">Jeff Bezos</a>, <a href="https://corporate-awards.ieee.org/recipient/jensen-huang/" rel="noopener noreferrer" target="_blank">Jensen Huang</a>, <a href="https://spectrum.ieee.org/search/?q=sam+altman" target="_self">Sam Altman</a>, and <a href="https://about.google/company-info/" rel="noopener noreferrer" target="_blank">Google</a> CEO <a href="https://spectrum.ieee.org/search/?q=sundar+pichai" target="_blank">Sundar Pichai</a>. In all likelihood, hundreds of people are now working on the concept of space data centers at the firms directly or indirectly controlled by these men—SpaceX, Starlink, Tesla, Amazon, Blue Origin, Nvidia, OpenAI, and Google, among others.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Pie charts compare the costs of an orbital data center ($51.1 billion) vs. a terrestrial data center ($16 billion)."
class="rm-shortcode" data-rm-shortcode-id="884247a7e06c8578a54420da9a5519d9" data-rm-shortcode-name="rebelmouse-image" id="ee7ad" loading="lazy" src="https://spectrum.ieee.org/media-library/pie-charts-compare-the-costs-of-orbital-solar-u2014-51-1billion-u2014vs-terrestrial-data-center-u2014-16-billion.png?id=64967044&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Likely costs to design, build, and launch a 1-GW orbital data center, based on a network of some 4,300 satellites and including operating costs over a five-year period, would exceed US $50 billion. That’s about three times the cost of a 1-GW data center on Earth, including five years of operation.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">John MacNeill</small></p>So how much would it cost to start training large language models in space? Probably the best accounting is<a href="https://andrewmccalip.com/space-datacenters" target="_blank"> one created by aerospace engineer Andrew McCalip</a>. McCalip’s exhaustive, detailed analysis includes interactive sliders that let you compare costs for space-based and terrestrial data centers in the range of 1 to 100 gigawatts. <a href="https://spectrum.ieee.org/data-center-growth" target="_self">One-gigawatt data centers</a> are being <a href="https://www.reuters.com/technology/openai-oracle-related-digital-announce-new-stargate-data-center-michigan-2025-10-30/" target="_blank">built now</a> on terra firma, and Meta has announced plans for a <a href="https://www.datacenterdynamics.com/en/news/meta-establishes-meta-compute-plans-multiple-gigawatt-plus-scale-ai-data-centers/" target="_blank">5-GW facility</a>, with anticipated completion some time after 2030.<p>In an interview, McCalip says his initial rough calculations a few years ago suggested that data centers in space would cost in the range of 7 to 10 times more, per gigawatt of capacity, than their terrestrial counterparts. 
“It just wasn’t practical,” he says. “Not even close.” But when Elon Musk began publicly backing the idea, McCalip revisited the numbers using publicly available information about <a href="https://starlink.com/" target="_blank">Starlink</a>’s and <a href="https://www.tesla.com/" target="_blank">Tesla</a>’s technologies and capabilities.</p><p>That changed the picture substantially. The figures in his online analysis assume an orbital network of data-center satellites that borrows heavily from Musk’s tech treasure chest—“essentially…you just start putting some radiation-resistant ASIC chips on the Starlink fleet and you start growing edge capacity organically on the Starlink fleet,” McCalip says. The network would rely on the kind of <a href="https://www.oreateai.com/blog/indepth-analysis-of-nvidia-tesla-gpu-architecture-design-principles/9099c63c7637949b2101e7c477158509" target="_blank">watt-efficient GPU architecture</a> used in Teslas for <a href="https://arxiv.org/html/2411.10291v1" rel="noopener noreferrer" target="_blank">self-driving</a>, he adds. “You start dropping those onto the backs of Starlinks. You can slowly grow this out, and this would be approximately the performance that you would get.”</p><p>Bottom line, with some solid but not necessarily heroic engineering, the cost of an orbital data center could be as low as three times that of the comparable terrestrial one. That differential, while still high, at least nudges the concept out of the instantly dismissible category. “I have my particular views, but I want the data to speak for itself,” McCalip says.</p><p>For this illustration, we picked a configuration with an aggregate 1 GW of capacity. The network would consist of some 4,300 satellites, each of which would be outfitted with a 1,024-square-meter solar array that generates 250 kilowatts. 
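As a rough sanity check, the quoted fleet numbers can be multiplied out in a few lines of Python; this sketch uses only figures stated in the article (4,300 satellites, 250 kW per solar array, $51.1 billion orbital vs. $16 billion terrestrial):

```python
# Rough sanity check on the orbital data-center figures quoted above.
n_satellites = 4_300        # size of the proposed satellite network
array_kw = 250.0            # per-satellite solar array output, kilowatts

total_gw = n_satellites * array_kw / 1e6
print(f"Fleet power: {total_gw:.3f} GW")       # 1.075 GW, consistent with ~1 GW aggregate

orbital_cost_b = 51.1       # McCalip's estimate, US $ billions, incl. 5 years of operation
terrestrial_cost_b = 16.0   # comparable terrestrial build over the same period
cost_multiple = orbital_cost_b / terrestrial_cost_b
print(f"Cost multiple: {cost_multiple:.1f}x")  # about three times the terrestrial cost
```

The per-satellite array size and count line up with the 1-GW aggregate target, and the cost ratio works out to roughly 3.2x.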
The data center on that satellite, powered by the array, might have at least 175 GPUs; McCalip notes that a popular GPU rack, <a href="https://www.nvidia.com/en-us/data-center/vera-rubin-nvl72/" rel="noopener noreferrer" target="_blank">Nvidia’s NVL72</a>, has 72 GPUs and requires 120 to 140 kW.</p><p>The total cost of the satellite network would be around US $51 billion, including launch and five years of operational expenses; a comparable terrestrial system would cost about $16 billion over the same period.</p><p>Stupid? Not stupid? You decide.</p>]]></description><pubDate>Thu, 26 Feb 2026 14:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/orbital-data-centers</guid><category>Ai-data-centers</category><category>Orbital-data-centers</category><category>Nvidia</category><category>Tesla</category><category>Google</category><dc:creator>Glenn Zorpette</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/diagram-of-a-spatial-server-farm-with-solar-panels-and-antennas-earth-indicated-in-orbit-view.png?id=64967042&amp;width=980"></media:content></item><item><title>Achieving Micron-Level Tolerances: CAD Optimization for Sub-10µm 3D Printing</title><link>https://content.knowledgehub.wiley.com/designing-for-precision-cad-tips-for-micro-scale-3d-printing/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/boston-micro-fabrication-logo-with-red-cubic-design-next-to-bold-bmf-text.png?id=64987960&width=980"/><br/><br/><p><span>Achieve successful micro-scale 3D prints by optimizing tolerances, wall thickness, support strategies, microfluidic channels, and material selection in your CAD models from the start.</span></p><p><strong><span>What Attendees will Learn</span></strong></p><ol><li><span>Tolerance-driven design -- How to define resolution and tolerance constraints that translate directly from CAD intent to sub-10µm printed geometry.</span></li><li><span>Geometry-aware fabrication -- Principles for engineering wall thickness, aspect ratios, and orientation to maintain structural fidelity at micron scale.</span></li><li><span>Support-free design strategies -- Leveraging self-supporting geometries and build orientation to preserve feature integrity without post-processing trade-offs.</span></li><li><span>Integrated material-process thinking -- Matching resin properties, shrinkage behavior, and export parameters to your application’s functional requirements.</span></li></ol><div><span><a href="https://content.knowledgehub.wiley.com/designing-for-precision-cad-tips-for-micro-scale-3d-printing/" target="_blank">Download this free whitepaper now!</a></span></div>]]></description><pubDate>Thu, 26 Feb 2026 11:00:02 +0000</pubDate><guid>https://content.knowledgehub.wiley.com/designing-for-precision-cad-tips-for-micro-scale-3d-printing/</guid><category>Typewhitepaper</category><category>3d-printing</category><category>Microfluidics</category><category>Fabrication</category><category>Type-whitepaper</category><dc:creator>Boston Micro Fabrication</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/64987960/origin.png"></media:content></item><item><title>How to Thrive as a Remote Worker</title><link>https://spectrum.ieee.org/remote-work</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-illustration-of-stylized-people-wearing-business-casual-clothing.webp?id=61876810&width=2000&height=1500&coordinates=0%2C0%2C0%2C0"/><br/><br/><p><em>This article is crossposted from </em>IEEE Spectrum<em>’s careers newsletter. <a href="https://engage.ieee.org/Career-Alert-Sign-Up.html" rel="noopener noreferrer" target="_blank">Sign up now</a> to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company <a href="https://www.parsity.io/" rel="noopener noreferrer" target="_blank">Parsity</a> and delivered to your inbox for free!</em></p><h2>Standing Out as a Remote Worker Takes a Different Strategy</h2><p>My first experience as a remote worker was a disaster.</p><p>Before I joined a San Francisco-based team with a lead developer in Connecticut, I had worked in person, five days a week. I thought success was simple: write good code, solve hard problems, deliver results. So I put my head down and worked harder than ever.</p><p>Twelve-hour days became normal as the boundary between work and personal life disappeared. My kitchen table became my office.</p><p>I rarely asked for help because I didn’t want to seem incompetent. I stayed quiet in team Slack channels because I wasn’t sure what to say.</p><p>Despite working some of the longest hours of my career, I made the slowest progress. I felt disconnected from the team. I had no idea if my work mattered or if anyone noticed what I was doing. I was burning out.</p><p>Eventually, I realized the real problem: I was invisible.</p><h3>The Office Advantage You Lose When Remote</h3><p>In an office, visibility happens naturally. Colleagues see you arrive early or stay late. They notice when you are stuck on a problem. They hear about your work in hallway conversations and over lunch. 
Physical presence creates recognition with almost no effort.</p><p>Remote work removes those signals. Your manager cannot see you at your desk. Your teammates don’t know you’ve hit a roadblock unless you say so. You can work long days and still appear less engaged than someone in the office.</p><p>That is the shift many people miss: Remote work requires execution plus deliberate communication.</p><h3>What Actually Works</h3><p>By my second remote role, I knew I had to change to protect my sanity and still succeed.</p><p>Here are five things I did that made a real difference.</p><p><strong>1. Over-communicating</strong></p><p>I began sharing updates in team channels regularly, not just when asked. “Working on the payment integration today; ready for review tomorrow.” “Hit a blocker with API rate limits; investigating options.” These took seconds but made my work visible and invited help sooner.</p><p><strong>2. Setting limits</strong></p><p>When your home is also your office, overwork becomes the default. I started ending most days at 5 p.m. and transitioning out of work mode with a walk or gym session. That ritual helped prevent burnout.</p><p><strong>3. Volunteering for presentations</strong></p><p>Presenting remotely felt less intimidating than standing in front of a room. I started volunteering for demos and lunch-and-learns. This increased my visibility beyond my immediate team and improved my communication skills.</p><p><strong>4. Promoting others publicly</strong></p><p>When someone helped me, I thanked them in a public channel. When a teammate shipped something impressive, I called it out. This builds goodwill and signals collaboration. In remote environments, gratitude is visible and memorable.</p><p><strong>5. Building relationships deliberately</strong></p><p>In an office, relationships form naturally. Remotely, you have to create those moments. I started an engineering book club that met every other week to discuss a technical book. 
It became a low-pressure way to connect with people across the organization.</p><h3>The Counterintuitive Reality</h3><p>With these habits, I got promoted faster in this remote job than I ever did in an office. I moved from senior engineer to engineering manager in under two years, while maintaining a better work-life balance.</p><p>Remote work offers flexibility and freedom, but it comes with a tax. You are easier to overlook and more likely to burn out unless you are intentional in your actions.</p><p>So, succeeding remotely takes deliberate effort in communication, relationships, and boundaries. If you do that well, remote work can unlock more opportunities than you might expect.</p><p>—Brian</p><h2><a href="https://spectrum.ieee.org/network-security-engineer-alan-dekok" target="_self">This Former Physicist Helps Keep the Internet Secure</a></h2><p>Despite its critical role in maintaining a secure network, authentication software often goes unnoticed by users. Alan DeKok now runs one of the most widely used remote authentication servers in the world—but he didn’t initially set out to work in cybersecurity. DeKok studied nuclear physics before starting the side project that eventually turned into a three-decade-long career. </p><p><a href="https://spectrum.ieee.org/network-security-engineer-alan-dekok" target="_blank">Read more here.</a></p><h2><a href="https://www.rationalfx.com/forex-brokers/tech-industry-layoffs/" rel="noopener noreferrer" target="_blank">More Than 30,000 Tech Employees Laid Off in 2026</a></h2><p>We’re just two months into 2026, and layoffs in the tech industry are already ramping up. According to data compiled by RationalFX, more than half of the 30,700 layoffs this year have come from Amazon, which announced that it would be cutting the roles of 16,000 employees in late January. Will the trend continue through 2026? 
</p><p><a href="https://www.rationalfx.com/forex-brokers/tech-industry-layoffs/" target="_blank">Read more here.</a></p><h2><a href="https://spectrum.ieee.org/ieee-online-mini-ai-mba" target="_self">IEEE Online Mini-MBA Aims to Fill Leadership Skills Gaps in AI</a></h2><p>Recent research suggests that a majority of organizations have a significant gap when it comes to AI skills among leadership. To help fill the gap, IEEE has partnered with the Rutgers Business School to offer an online “mini-MBA” program, combining business strategy and deep AI literacy. The program spans 12 weeks and 10 modules that teach students how to implement AI strategies in their own organizations. </p><p><a href="https://spectrum.ieee.org/ieee-online-mini-ai-mba" target="_blank">Read more here.</a></p>]]></description><pubDate>Wed, 25 Feb 2026 17:48:29 +0000</pubDate><guid>https://spectrum.ieee.org/remote-work</guid><category>Worklife-balance</category><category>Tech-career-development</category><category>Careers-newsletter</category><dc:creator>Brian Jenney</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-illustration-of-stylized-people-wearing-business-casual-clothing.webp?id=61876810&amp;width=980"></media:content></item><item><title>AI Is Acing Math Exams Faster Than Scientists Write Them</title><link>https://spectrum.ieee.org/ai-math-benchmarks</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/line-graph-demonstrates-how-google-deepminds-aletheia-ai-scores-at-least-5-percent-higher-on-ph-d-math-exercises-than-the-lat.jpg?id=65007034&width=2000&height=1500&coordinates=166%2C0%2C167%2C0"/><br/><br/><p><span>Mathematics is often regarded as the ideal domain for measuring AI progress effectively. Math’s step-by-step logic is easy to track, and its definitive, automatically verifiable answers remove any human or subjective factors. But AI systems are improving at such a pace that math </span><a href="https://spectrum.ieee.org/melanie-mitchell" target="_self">benchmarks are struggling to keep up</a><span>.</span></p><p>Way back in November 2024, nonprofit research organization Epoch AI quietly released <a href="https://doi.org/10.48550/arXiv.2411.04872" target="_blank">FrontierMath</a>. A standardized, rigorous benchmark, FrontierMath was designed to measure the mathematical reasoning capabilities of the latest AI tools.</p><p>“It’s a bunch of really hard math problems,” explains <a href="https://epoch.ai/team" target="_blank">Greg Burnham</a>, Epoch AI senior researcher. “Originally, it was 300 problems that we now call tiers 1–3, but having seen AI capabilities really speed up, there was a feeling that we had to run to stay ahead, so now there’s a special challenge set of extra carefully constructed problems that we call tier 4.”</p><p>To a rough approximation, tiers 1–4 go from advanced undergraduate through to early postdoc-level mathematics. When introduced, state-of-the-art AI models were unable to solve more than 2 percent of the problems FrontierMath contained. 
<a href="https://epoch.ai/frontiermath/tiers-1-4" target="_blank">Fast forward to today</a>: The best publicly available AI models, such as GPT-5.2 and Claude Opus 4.6, are solving over 40 percent of FrontierMath’s 300 tier 1–3 problems, and over 30 percent of the 50 tier 4 problems.</p><h2>AI takes on Ph.D.-level mathematics</h2><p>And this dizzying pace of advancement is showing no signs of abating. For example, just recently <a href="https://deepmind.google/blog/accelerating-mathematical-and-scientific-discovery-with-gemini-deep-think/" target="_blank">Google DeepMind announced</a> that Aletheia, an experimental AI system derived from Gemini Deep Think, <a href="https://doi.org/10.48550/arXiv.2601.23245" target="_blank">achieved publishable Ph.D.-level research results</a>. Though mathematically obscure—the work computed certain structure constants in arithmetic geometry called eigenweights—the result is significant in terms of AI development.</p><p>“They’re claiming it was essentially autonomous, meaning a human wasn’t guiding the work, and it’s publishable,” Burnham says. “It’s definitely at the lower end of the spectrum of work that would get a mathematician excited, but it’s new—it’s something we truly haven’t really seen before.”</p><p>To place this achievement in context, every FrontierMath problem has a known answer that a human has derived. Though a human could probably have achieved Aletheia’s result “if they sat down and steeled themselves for a week,” says Burnham, no human had ever done so.</p><p>Aletheia’s results and other recent achievements by AI mathematicians point to new, tougher benchmarks being needed to understand AI capabilities—and fast, because existing ones will soon become irrelevant. “There are easier math benchmarks that are already obsolete, several generations of them,” says Burnham. “FrontierMath will probably saturate [Ed. 
note: This means that state-of-the-art AI models score 100 percent] within the next two years—could be faster.”</p><h2>The First Proof challenge</h2><p>To begin to address this problem, on 6 February, a group of 11 highly distinguished mathematicians <a href="https://doi.org/10.48550/arXiv.2602.05192" rel="noopener noreferrer" target="_blank">proposed the First Proof challenge</a>, a set of 10 extremely difficult math questions that arose naturally in the authors’ research, whose proofs are roughly five pages or less and had not been shared with anyone. <a href="https://1stproof.org/" rel="noopener noreferrer" target="_blank">The First Proof challenge</a> was a preliminary effort to assess the capabilities of AI systems in solving research-level math questions on their own.</p><p>The challenge generated serious buzz in the math community, and professional and amateur mathematicians, as well as teams including OpenAI, stepped up to it. But by the time the authors <a href="https://codeberg.org/tgkolda/1stproof/src/branch/main/2026-02-batch/FirstProofSolutionsComments.pdf" rel="noopener noreferrer" target="_blank">posted the proofs</a> on 14 February, no one had submitted correct solutions to all 10 problems.</p><p>In fact, far from it. Using Gemini 3.0 Deep Think and ChatGPT 5.2 Pro, the authors themselves solved only two of the 10 problems. And most outside submissions fared little better, apart from those of OpenAI and a small Aletheia team at Google DeepMind. With “limited human supervision,” OpenAI’s most advanced internal AI system <a href="https://openai.com/index/first-proof-submissions/" rel="noopener noreferrer" target="_blank">solved five of the 10 problems</a>, with Aletheia achieving similar outcomes—results met with a spectrum of emotions by different members of the mathematics community, from awe to disappointment. 
The team behind First Proof plans an even tougher <a href="https://1stproof.org/" rel="noopener noreferrer" target="_blank">second round on 14 March</a>.</p><h2>A new frontier for AI</h2><p>“I think First Proof is terrific: It’s as close as you could realistically get to putting an AI system in the shoes of a mathematician,” says Burnham. Though he admires how First Proof tests AI’s mathematical utility for a wide range of mathematics and mathematicians, Epoch AI has its own new approach to testing—<a href="https://epoch.ai/frontiermath/open-problems" rel="noopener noreferrer" target="_blank">FrontierMath: Open Problems</a>. Uniquely, the pilot benchmark consists of 16 open problems (with more to follow) from research mathematics that professional mathematicians have tried and failed to solve. Since Open Problems’ <a href="https://epochai.substack.com/p/introducing-frontiermath-open-problems" rel="noopener noreferrer" target="_blank">release on 27 January</a>, none have been solved by an AI.</p><p>“With Open Problems, we’ve tried to make it more challenging,” says Burnham. “The baseline on its own would be publishable, at least in a specialty journal.” What’s more, each question is designed so that it can be automatically graded. “This is a bit counterintuitive,” Burnham adds. “No one knows the answers, but we have a computer program that will be able to judge whether the answer is right or not.”</p><p>Burnham sees First Proof and Open Problems as being complementary. “I would say understanding AI capabilities is a more-the-merrier situation,” he adds. “AI has gotten to the point where it’s—in some ways—better than most Ph.D. 
students, so we need to pose problems where the answer would be at least moderately interesting to some human mathematicians, not because AI was doing it but because it’s mathematics that human mathematicians care about.”</p>]]></description><pubDate>Wed, 25 Feb 2026 16:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/ai-math-benchmarks</guid><category>Ai-benchmarks</category><category>Mathematics</category><category>Large-language-models</category><category>Artificial-intelligence</category><dc:creator>Benjamin Skuse</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/line-graph-demonstrates-how-google-deepminds-aletheia-ai-scores-at-least-5-percent-higher-on-ph-d-math-exercises-than-the-lat.jpg?id=65007034&amp;width=980"></media:content></item><item><title>Jimi Hendrix Was a Systems Engineer</title><link>https://spectrum.ieee.org/jimi-hendrix-systems-engineer</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-electric-guitar-connected-to-a-number-of-effects-pedals-connect-to-an-amplifier-and-loudspeaker-stack.png?id=64983431&width=2000&height=1500&coordinates=0%2C99%2C0%2C100"/><br/><br/><p>3 February 1967 is a day that belongs in the annals of music history. <a href="https://www.jimihendrix.com/" rel="noopener noreferrer" target="_blank">It’s the day that Jimi Hendrix</a> entered London’s <a href="https://en.wikipedia.org/wiki/Olympic_Studios" rel="noopener noreferrer" target="_blank">Olympic Studios</a> to record a song using a new component. The song was “<a href="https://www.youtube.com/watch?v=WGoDaYjdfSg" rel="noopener noreferrer" target="_blank">Purple Haze</a>,” and the component was the Octavia guitar pedal, created for Hendrix by sound engineer <a href="https://www.roger-mayer.co.uk/" rel="noopener noreferrer" target="_blank">Roger Mayer</a>. The pedal was a key element of a complex chain of analog elements responsible for the final sound, including the acoustics of the studio room itself. When they sent the tapes for remastering in the United States, the sounds on them were so novel that they included an accompanying note explaining that the distortion at the end was not a malfunction but intentional. A few months later, Hendrix would deliver his <a href="https://www.youtube.com/watch?v=P-IRA5UrZ7g" rel="noopener noreferrer" target="_blank">legendary electric guitar </a>performance at the Monterey International Pop Festival.</p><p>“Purple Haze” firmly established that an electric guitar can be used not just as a stringed instrument with built-in pickups for convenient sound amplification, but also as a full-blown wave synthesizer whose output can be manipulated at will. 
Modern guitarists can reproduce Hendrix’s chain using separate plug-ins in digital audio workstation software, but the magic often disappears when everything is buffered and quantized. I wanted to find out if a more systematic approach could do a better job and provide insights into how Hendrix created his groundbreaking sound.</p><p>My fascination with Hendrix’s Olympic Studios’ performance arose because there is a “Hendrix was an alien” narrative surrounding his musical innovation—that his music appeared more or less out of nowhere. I wanted to replace that narrative with an engineering-driven account that’s inspectable and reproducible—plots, models, and a signal chain from the guitar through the pedals that you can probe stage by stage.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Four plots showing magnitudes plotted against time and frequency." class="rm-shortcode" data-rm-shortcode-id="b51dacaf694059d798f30940df9e55e1" data-rm-shortcode-name="rebelmouse-image" id="f2cc6" loading="lazy" src="https://spectrum.ieee.org/media-library/four-plots-showing-magnitudes-plotted-against-time-and-frequency.png?id=64983773&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Each effects pedal in Hendrix’s chain contributed to enhancing the electric guitar beyond its intrinsic limits. A selection of plots from the full-circuit analysis shows how the Fuzz Face turns a sinusoid signal from a string into an almost square wave; how the Octavia pedal inverts half the input waveform to double its frequency; how the wah-wah pedal acts as band-pass filter; and how the Uni-Vibe pedal introduces selective phase shifts to color the sound.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">James Provost/ Rohan S. 
Puranik</small></p><p>Although I work mostly in the digital domain as an edge-computing architect in my day job, I knew that analog circuit simulations would be the key to going deeper.</p><p>My first step was to look at the challenges Hendrix was trying to address. Before the 1930s, guitars were too quiet for large ensembles. Electromagnetic pickups—coils of wire wrapped around magnets that detect the vibrations of metal strings—fixed the loudness problem. But they left a new one: the <em>envelope</em>, which specifies how the amplitude of a note varies as it’s played on an instrument, starting with a rising initial <em>attack</em>, followed by a falling <em>decay</em>, and then any <em>sustain</em> of the note after that. Electric guitars attack hard, decay fast, and don’t sustain like bowed strings or organs. Early manufacturers tried to modify the electric guitar’s characteristics by using hollow bodies fitted with magnetic pickups, but the instrument still barked more than it sang.</p><p>Hendrix’s mission was to reshape both the electric guitar’s envelope and its tone until it could feel like a human voice. He tackled the guitar’s constraints by augmenting it. His solution was essentially a modular analog signal chain driven not by knobs but by hands, feet, gain staging, and physical movement in a feedback field.</p><p>Hendrix’s setups are well documented: Set lists, studio logs, and interviews with Mayer and Eddie Kramer, then the lead engineer at Olympic Studios, fill in the details. 
The signal chain for “Purple Haze” consisted of a set of pedals—a <a href="https://en.wikipedia.org/wiki/Fuzz_Face" target="_blank">Fuzz Face</a>, the Octavia, and a <a href="https://en.wikipedia.org/wiki/Wah-wah_pedal" target="_blank">wah-wah</a>—plus a <a href="https://mopop.emuseum.com/objects/100504/marshall-super-lead-amplifier-formerly-owned-by-jimi-hendrix" target="_blank">Marshall 100-watt amplifier</a> stack, with the guitar and room acoustics closing a feedback loop that Hendrix tuned with his own body. Later, Hendrix would also incorporate a <a href="https://en.wikipedia.org/wiki/Uni-Vibe" target="_blank">Uni-Vibe</a> pedal for many of his tracks. All the pedals were commercial models except for the <a href="https://www.roger-mayer.co.uk/octavia.htm" rel="noopener noreferrer" target="_blank">Octavia</a>, which Mayer built to produce a distorted signal an octave higher than its input.</p><p class="pull-quote"><span>Hendrix didn’t speak in decibels and ohm values, but he collaborated with engineers who did.</span></p><p>I obtained the schematics for each of these elements and their accepted parameter ranges, and converted them into <a href="https://en.wikipedia.org/wiki/Netlist" target="_blank">netlists</a> that <a href="https://ngspice.sourceforge.io/" target="_blank">ngspice</a> can process (ngspice is an open source implementation of the Spice circuit analyzer). The Fuzz Face pedal came in two variants, using <a href="https://spectrum.ieee.org/germanium-can-take-transistors-where-silicon-cant" target="_blank">germanium</a> or silicon transistors, so I created models for both. In my models, Hendrix’s guitar pickups had a resistance of 6 kiloohms and an inductance of 2.5 henrys with a realistic cable capacitance.</p><p>I chained the circuit simulations together using a script, and I produced data-plot and sample sound outputs with Python scripts. 
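The pickup model lends itself to a quick back-of-the-envelope check. The sketch below (my own, not part of the author's repository) estimates the resonant peak of the pickup-plus-cable circuit; the 6-kΩ resistance and 2.5-H inductance are from the text, while the 500-pF cable capacitance is an assumed value, since the article specifies only "a realistic cable capacitance":

```python
import math

# Pickup model values from the article; the cable capacitance is an assumption.
R = 6_000.0   # pickup DC resistance, ohms (from the article)
L = 2.5       # pickup inductance, henrys (from the article)
C = 500e-12   # assumed cable capacitance, farads

# The pickup inductance and cable capacitance form a resonant low-pass circuit;
# its resonant frequency sets the "presence" peak of the guitar's tone.
f_res = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(f"Resonant peak ≈ {f_res:,.0f} Hz")  # ~4.5 kHz with these values
```

With these values the resonance lands around 4.5 kHz, which is why cable length (capacitance) audibly changes an electric guitar's brightness.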
All of the ngspice files and other scripts are available in my GitHub repository at <a href="https://github.com/nahorov/Hendrix-Systems-Lab" target="_blank">github.com/nahorov/Hendrix-Systems-Lab</a>, with instructions on how to reproduce my simulations.</p><h2>What Does the Analysis of Hendrix’s Signal Chain Tell Us?</h2><p>Plotting the signal at different points in the chain with different parameters reveals how Hendrix configured and manipulated the nonlinear complexities of the system as a whole to reach his expressive goals.</p><p>A few highlights: First, the Fuzz Face is a two-transistor feedback amplifier that turns a gentle sinusoid signal into an almost binary “fuzzy” output. The interesting behavior emerges when the guitar’s volume is reduced. Because the pedal’s input impedance is very low (about 20 kΩ), the pickups interact directly with the pedal circuit. Reducing amplitude restores a sinusoidal shape—producing the famous “<a href="https://www.youtube.com/shorts/XJdw_KrUN2w" target="_blank">cleanup effect</a>” that was a hallmark of Hendrix’s sound, where the fuzz drops in and out as desired while he played.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A photograph of three young men beside a recording studio mixing desk." class="rm-shortcode" data-rm-shortcode-id="e4c563fb619e01f04942bbacee6a3b54" data-rm-shortcode-name="rebelmouse-image" id="35a49" loading="lazy" src="https://spectrum.ieee.org/media-library/a-photograph-of-three-young-men-beside-a-recording-studio-mixing-desk.png?id=64983993&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Engineer Eddie Kramer, Jimi Hendrix, and studio manager Jim Marron at the Electric Lady Studios in New York City.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Fred W. McDarrah/Getty Images</small></p><p>Second, the Octavia pedal used a rectifier, which normally converts alternating to direct current. 
Mayer realized that a rectifier effectively flips each trough of a waveform into a peak, doubling the number of peaks per second. The result is an apparent doubling of frequency—a bloom of second-harmonic content that the ear hears a bright octave above the fundamental.</p><p>Third, the wah-wah pedal is a band-pass filter: Frequency plots show the center frequency sweeping from roughly 300 hertz to 2 kilohertz. Hendrix used it to make the guitar “talk” with vowel sounds, most iconically on “<a href="https://www.youtube.com/watch?v=C0XwPUFfFwE" target="_blank">Voodoo Child</a> (Slight Return).”</p><p>Fourth, the Uni-Vibe cascades four phase-shift sections controlled by photoresistors. In circuit terms, it’s a low-frequency oscillator modulating a variable-phase network; in musical terms it’s motion and air.</p><p>Finally, the whole chain became a closed loop by driving the <a href="https://spectrum.ieee.org/the-cool-sound-of-tubes" target="_blank">Marshall amplifier near saturation</a>, which among other things extends the sustain. In a reflective room, the guitar strings couple acoustically to the speakers—move a few centimeters and you shift from one stable feedback mode to another. To an engineer, this is a gain-controlled acoustic feedback system. To Hendrix, it was part of the instrument. He learned to tune oscillation with distance and angle, shaping sirens, bombs, and harmonics by walking the edge of instability.</p><p>Hendrix didn’t speak in decibels and ohm values, but he collaborated with engineers who did—Mayer and Kramer—and iterated fast as a systems engineer. Reframing Hendrix as an engineer doesn’t diminish the art. 
It explains how one person, in under four years as a bandleader, could pull the electric guitar toward its full potential by systematically augmenting the instrument’s shortcomings for maximum expression.</p><p><em>This article appears in the March 2026 print issue as “<span>Jimi Hendrix, Systems Engineer</span>.”</em></p><p><em>A correction to this article was made on 27 Feb 2026 to correctly identify the men posing with Jimi Hendrix in the recording studio.</em></p>]]></description><pubDate>Wed, 25 Feb 2026 15:39:03 +0000</pubDate><guid>https://spectrum.ieee.org/jimi-hendrix-systems-engineer</guid><category>Music</category><category>Jimi-hendrix</category><category>Sound-engineering</category><category>Typedepartments</category><dc:creator>Rohan S. Puranik</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/an-electric-guitar-connected-to-a-number-of-effects-pedals-connect-to-an-amplifier-and-loudspeaker-stack.png?id=64983431&amp;width=980"></media:content></item><item><title>Andrew Ng: Unbiggen AI</title><link>https://spectrum.ieee.org/andrew-ng-data-centric-ai</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/andrew-ng-listens-during-the-power-of-data-sooner-than-you-think-global-technology-conference-in-brooklyn-new-york-on-wednes.jpg?id=29206806&width=2000&height=1500&coordinates=0%2C0%2C0%2C0"/><br/><br/><p><strong><a href="https://en.wikipedia.org/wiki/Andrew_Ng" rel="noopener noreferrer" target="_blank">Andrew Ng</a> has serious street cred</strong> in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at <a href="https://stanfordmlgroup.github.io/" rel="noopener noreferrer" target="_blank">Stanford University</a>, cofounded <a href="https://research.google/teams/brain/" rel="noopener noreferrer" target="_blank">Google Brain</a> in 2011, and then served for three years as chief scientist for <a href="https://ir.baidu.com/" rel="noopener noreferrer" target="_blank">Baidu</a>, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told <em>IEEE Spectrum</em> in an exclusive Q&A.</p><hr/><p>
	Ng’s current efforts are focused on his company 
	<a href="https://landing.ai/about/" rel="noopener noreferrer" target="_blank">Landing AI</a>, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the <a href="https://www.youtube.com/watch?v=06-AZXmwHjo" target="_blank">data-centric AI movement</a>, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.
</p><p>
	Andrew Ng on...
</p><ul>
<li><a href="#big">What’s next for really big models</a></li>
<li><a href="#career">The career advice he didn’t listen to</a></li>
<li><a href="#defining">Defining the data-centric AI movement</a></li>
<li><a href="#synthetic">Synthetic data</a></li>
<li><a href="#work">Why Landing AI asks its customers to do the work</a></li>
</ul><p>
<strong>The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an <a href="https://spectrum.ieee.org/deep-learning-computational-cost" target="_self">unsustainable trajectory</a>. Do you agree that it can’t go on that way?</strong>
</p><p>
<strong>Andrew Ng: </strong>This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.
</p><p>
<strong>When you say you want a foundation model for computer vision, what do you mean by that?</strong>
</p><p>
<strong>Ng:</strong> This is a term coined by <a href="https://cs.stanford.edu/~pliang/" rel="noopener noreferrer" target="_blank">Percy Liang</a> and <a href="https://crfm.stanford.edu/" rel="noopener noreferrer" target="_blank">some of my friends at Stanford</a> to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, <a href="https://spectrum.ieee.org/open-ais-powerful-text-generating-tool-is-ready-for-business" target="_self">GPT-3</a> is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.
</p><p>
<strong>What needs to happen for someone to build a foundation model for video?</strong>
</p><p>
<strong>Ng:</strong> I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.
</p><p>
	Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.
</p><p>
<a href="#top">Back to top</a>
</p><p>
<strong>It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.</strong>
</p><p>
<strong>Ng: </strong>Over a decade ago, when I proposed starting the <a href="https://research.google/teams/brain/" rel="noopener noreferrer" target="_blank">Google Brain</a> project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.
</p><p class="pull-quote">
	“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”<br/>
	—Andrew Ng, CEO & Founder, Landing AI
</p><p>
	I remember when my students and I published the first 
	<a href="https://nips.cc/" rel="noopener noreferrer" target="_blank">NeurIPS</a> workshop paper advocating using <a href="https://developer.nvidia.com/cuda-zone" rel="noopener noreferrer" target="_blank">CUDA</a>, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.
</p><p>
<strong>I expect they’re both convinced now.</strong>
</p><p>
<strong>Ng:</strong> I think so, yes.
</p><p>
	Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”
</p><p>
<a href="#top">Back to top</a>
</p><p>
<strong>How do you define data-centric AI, and why do you consider it a movement?</strong>
</p><p>
<strong>Ng:</strong> Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.
</p><p>
	When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and make them a systematic engineering discipline.
</p><p>
	The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a 
	<a href="https://neurips.cc/virtual/2021/workshop/21860" rel="noopener noreferrer" target="_blank">data-centric AI workshop at NeurIPS</a>, and I was really delighted at the number of authors and presenters that showed up.
</p><p>
<strong>You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?</strong>
</p><p>
<strong>Ng: </strong>You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.
</p><p>
<strong>When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?</strong>
</p><p>
<strong>Ng: </strong>Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of <a href="https://developers.arcgis.com/python/guide/how-retinanet-works/" rel="noopener noreferrer" target="_blank">RetinaNet</a>. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.
</p><p class="pull-quote">
	“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”<br/>
	—Andrew Ng
</p><p>
	For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
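A minimal sketch of what such a consistency-flagging tool might look like (the image IDs, labels, and annotator counts here are invented for illustration; this is not Landing AI's actual tooling): collect each image's annotations and surface the ones where annotators disagree.

```python
from collections import Counter

# Hypothetical annotations: image id -> labels assigned by different annotators.
annotations = {
    "img_001": ["scratch", "scratch", "scratch"],
    "img_002": ["scratch", "dent", "scratch"],
    "img_003": ["dent", "dent"],
    "img_004": ["scratch", "dent", "pit"],
}

def flag_inconsistent(annotations, agreement=1.0):
    """Return image ids whose annotator agreement falls below a threshold.

    `agreement` is the fraction of annotators who must pick the majority
    label for an image to count as consistently labeled.
    """
    flagged = []
    for image_id, labels in annotations.items():
        majority_count = Counter(labels).most_common(1)[0][1]
        if majority_count / len(labels) < agreement:
            flagged.append(image_id)
    return flagged

# Draw the labeler's attention only to the images worth relabeling.
print(flag_inconsistent(annotations))
```

Relabeling only the flagged subset is the targeted fix Ng describes, rather than collecting more data across the board.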
</p><p>
<strong>Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?</strong>
</p><p>
<strong>Ng:</strong> Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, <a href="https://www.cs.princeton.edu/~olgarus/" rel="noopener noreferrer" target="_blank">Olga Russakovsky</a> gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed <a href="https://neurips.cc/virtual/2021/invited-talk/22281" rel="noopener noreferrer" target="_blank">Mary Gray’s presentation,</a> which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like <a href="https://www.microsoft.com/en-us/research/project/datasheets-for-datasets/" rel="noopener noreferrer" target="_blank">Datasheets for Datasets</a> also seem like an important piece of the puzzle.
</p><p>
	One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.
</p><p>
<strong>When you talk about engineering the data, what do you mean exactly?</strong>
</p><p>
<strong>Ng: </strong>In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a <a href="https://jupyter.org/" rel="noopener noreferrer" target="_blank">Jupyter notebook</a> and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.
</p><p>
	For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
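That kind of error analysis can start as something very simple: compute accuracy per data slice and see which slice lags. A sketch with invented slice tags and results (not from any actual system):

```python
from collections import defaultdict

# Hypothetical evaluation records: (background-condition tag, prediction correct?).
results = [
    ("quiet", True), ("quiet", True), ("quiet", True), ("quiet", False),
    ("car_noise", False), ("car_noise", False), ("car_noise", True),
    ("music", True), ("music", True),
]

def accuracy_by_slice(results):
    """Accuracy per slice, to reveal where collecting more data would pay off."""
    totals = defaultdict(lambda: [0, 0])  # tag -> [correct, total]
    for tag, correct in results:
        totals[tag][0] += int(correct)
        totals[tag][1] += 1
    return {tag: c / n for tag, (c, n) in totals.items()}

acc = accuracy_by_slice(results)
worst = min(acc, key=acc.get)
print(acc, "-> collect more data for:", worst)
```

The slice with the lowest accuracy (here, the car-noise recordings) is where the data-collection budget goes.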
</p><p>
<a href="#top">Back to top</a>
</p><p>
<strong>What about using synthetic data, is that often a good solution?</strong>
</p><p>
<strong>Ng: </strong>I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, <a href="https://tensorlab.cms.caltech.edu/users/anima/" rel="noopener noreferrer" target="_blank">Anima Anandkumar</a> gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.
</p><p>
<strong>Do you mean that synthetic data would allow you to try the model on more data sets?</strong>
</p><p>
<strong>Ng: </strong>Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
</p><p class="pull-quote">
	“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”<br/>
	—Andrew Ng
</p><p>
	Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
</p><p>
<a href="#top">Back to top</a>
</p><p>
<strong>To make these issues more concrete, can you walk me through an example? When a company approaches <a href="https://landing.ai/" rel="noopener noreferrer" target="_blank">Landing AI</a> and says it has a problem with visual inspection, how do you onboard them and work toward deployment?</strong>
</p><p>
<strong>Ng: </strong>When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the <a href="https://landing.ai/platform/" rel="noopener noreferrer" target="_blank">LandingLens</a> platform. We often advise them on the methodology of data-centric AI and help them label the data.
</p><p>
	One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.
</p><p>
<strong>How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?</strong>
</p><p>
<strong>Ng:</strong> It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
</p><p>
	In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?
</p><p>
<strong>So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.</strong>
</p><p>
<strong>Ng: </strong>Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.
</p><p>
<strong>Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?</strong>
</p><p>
<strong>Ng: </strong>In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.
</p><p>
<a href="#top">Back to top</a>
</p><p><em>This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist</em><em>.”</em></p>]]></description><pubDate>Wed, 09 Feb 2022 15:31:12 +0000</pubDate><guid>https://spectrum.ieee.org/andrew-ng-data-centric-ai</guid><category>Deep-learning</category><category>Artificial-intelligence</category><category>Andrew-ng</category><category>Type-cover</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/andrew-ng-listens-during-the-power-of-data-sooner-than-you-think-global-technology-conference-in-brooklyn-new-york-on-wednes.jpg?id=29206806&amp;width=980"></media:content></item><item><title>How AI Will Change Chip Design</title><link>https://spectrum.ieee.org/ai-chip-design-matlab</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/layered-rendering-of-colorful-semiconductor-wafers-with-a-bright-white-light-sitting-on-one.jpg?id=29285079&width=2000&height=1500&coordinates=166%2C0%2C167%2C0"/><br/><br/><p>The end of <a href="https://spectrum.ieee.org/on-beyond-moores-law-4-new-laws-of-computing" target="_self">Moore’s Law</a> is looming. Engineers and designers can do only so much to <a href="https://spectrum.ieee.org/ibm-introduces-the-worlds-first-2nm-node-chip" target="_self">miniaturize transistors</a> and <a href="https://spectrum.ieee.org/cerebras-giant-ai-chip-now-has-a-trillions-more-transistors" target="_self">pack as many of them as possible into chips</a>. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.</p><p>Samsung, for instance, is <a href="https://spectrum.ieee.org/processing-in-dram-accelerates-ai" target="_self">adding AI to its memory chips</a> to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has <a href="https://spectrum.ieee.org/heres-how-googles-tpu-v4-ai-chip-stacked-up-in-training-tests" target="_self">doubled its processing power</a> compared with that of  its previous version.</p><p>But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with <a href="https://www.linkedin.com/in/heather-gorr-phd" rel="noopener noreferrer" target="_blank">Heather Gorr</a>, senior product manager for <a href="https://www.mathworks.com/" rel="noopener noreferrer" target="_blank">MathWorks</a>’ MATLAB platform.</p><p><strong>How is AI currently being used to design the next generation of chips?</strong></p><p><strong>Heather Gorr:</strong> AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. 
There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-resized-container rm-resized-container-25 rm-float-left" data-rm-resized-container="25%" style="float: left;">
<img alt="Portrait of a woman with blonde-red hair smiling at the camera" class="rm-shortcode rm-resized-image" data-rm-shortcode-id="1f18a02ccaf51f5c766af2ebc4af18e1" data-rm-shortcode-name="rebelmouse-image" id="2dc00" loading="lazy" src="https://spectrum.ieee.org/media-library/portrait-of-a-woman-with-blonde-red-hair-smiling-at-the-camera.jpg?id=29288554&width=980" style="max-width: 100%"/>
<small class="image-media media-caption" placeholder="Add Photo Caption..." style="max-width: 100%;">Heather Gorr</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..." style="max-width: 100%;">MathWorks</small></p><p>Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see  something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.</p><p><strong>What are the benefits of using AI for chip design?</strong></p><p><strong>Gorr:</strong> Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a <a href="https://en.wikipedia.org/wiki/Model_order_reduction" rel="noopener noreferrer" target="_blank">reduced order model</a>, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your <a href="https://www.ibm.com/cloud/learn/monte-carlo-simulation" rel="noopener noreferrer" target="_blank">Monte Carlo simulations</a> using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. 
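The reduced-order workflow Gorr describes can be illustrated with a toy example (the “physics” function, the polynomial degree, and the sample counts below are all invented stand-ins, not MathWorks tooling): run the expensive model at a few design points, fit a cheap polynomial surrogate to those runs, and do the Monte Carlo sweep on the surrogate.

```python
import math
import random

import numpy as np

def expensive_physics_model(x):
    """Stand-in for a slow physics-based solver (invented for illustration)."""
    return math.sin(2 * x) + 0.5 * x * x

# Run the "expensive" solver at only a handful of design points...
xs = [i / 10 for i in range(-20, 21)]
ys = [expensive_physics_model(x) for x in xs]

# ...and fit a cheap degree-6 polynomial surrogate to those runs.
surrogate = np.poly1d(np.polyfit(xs, ys, deg=6))

# The Monte Carlo parameter sweep then runs against the surrogate,
# which costs a polynomial evaluation instead of a solver call.
random.seed(0)
samples = [random.uniform(-2, 2) for _ in range(10_000)]
mc_mean = sum(float(surrogate(x)) for x in samples) / len(samples)
print(f"Monte Carlo mean of the response, via surrogate: {mc_mean:.3f}")
```

The surrogate is less accurate than the solver it replaces, which is exactly the trade-off Gorr notes next: you accept some modeling error in exchange for sweeps that are orders of magnitude cheaper.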
So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.</p><p><strong>So it’s like having a digital twin in a sense?</strong></p><p><strong>Gorr:</strong> Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.</p><p><strong>So, it’s going to be more efficient and, as you said, cheaper?</strong></p><p><strong>Gorr:</strong> Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.</p><p><strong>We’ve talked about the benefits. How about the drawbacks?</strong></p><p><strong>Gorr: </strong>The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.</p><p>Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.</p><p>One of the other things to think about too is that you need the data to build the models. 
You have to incorporate data from all sorts of different sensors and teams, and that heightens the challenge.</p><p><strong>How can engineers use AI to better prepare and extract insights from hardware or sensor data?</strong></p><p><strong>Gorr: </strong>We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.</p><p>One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on <a href="https://github.com/" rel="noopener noreferrer" target="_blank">GitHub</a> or <a href="https://www.mathworks.com/matlabcentral/" rel="noopener noreferrer" target="_blank">MATLAB Central</a>, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.</p><p><strong>What should engineers and designers consider when using AI for chip design?</strong></p><p><strong>Gorr:</strong> Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts.
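Gorr’s points above about synchronizing, resampling, and then exploring the frequency domain of high-frequency sensor data can be sketched as follows; the two synthetic sensor streams, their sample rates, and the shared 50 Hz component are assumptions for illustration, not real telemetry:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical sensor streams sampled at different rates, sharing a
# 50 Hz component buried in noise (synthetic data for illustration only).
t_fast = np.arange(0.0, 1.0, 1 / 1000)   # 1 kHz sensor
t_slow = np.arange(0.0, 1.0, 1 / 400)    # 400 Hz sensor
fast = np.sin(2 * np.pi * 50 * t_fast) + 0.3 * rng.standard_normal(t_fast.size)
slow = np.sin(2 * np.pi * 50 * t_slow) + 0.3 * rng.standard_normal(t_slow.size)

# Synchronize: resample both streams onto a common 200 Hz timebase
# by linear interpolation, so the samples line up in time.
t_common = np.arange(0.0, 1.0, 1 / 200)
fast_rs = np.interp(t_common, t_fast, fast)
slow_rs = np.interp(t_common, t_slow, slow)

# Explore the frequency domain: the dominant peak of the resampled
# stream should sit at the shared 50 Hz component.
freqs = np.fft.rfftfreq(t_common.size, d=1 / 200)
spectrum = np.abs(np.fft.rfft(fast_rs - fast_rs.mean()))
peak_hz = freqs[spectrum.argmax()]
print(f"dominant frequency: {peak_hz:.0f} Hz")
```

Linear interpolation onto a common timebase is the simplest alignment; a production pipeline would typically low-pass filter before downsampling to avoid aliasing.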
Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.</p><p><strong>How do you think AI will affect chip designers’ jobs?</strong></p><p><strong>Gorr:</strong> It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.</p><p><strong>How do you envision the future of AI and chip design?</strong></p><p><strong>Gorr:</strong> It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important.
We’re going to see fewer of those superprecise predictions and more transparency and information sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.</p>]]></description><pubDate>Tue, 08 Feb 2022 14:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/ai-chip-design-matlab</guid><category>Chip-fabrication</category><category>Matlab</category><category>Moores-law</category><category>Chip-design</category><category>Ai</category><category>Digital-twins</category><dc:creator>Rina Diane Caballar</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/layered-rendering-of-colorful-semiconductor-wafers-with-a-bright-white-light-sitting-on-one.jpg?id=29285079&amp;width=980"></media:content></item><item><title>Atomically Thin Materials Significantly Shrink Qubits</title><link>https://spectrum.ieee.org/2d-hbn-qubit</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-golden-square-package-holds-a-small-processor-sitting-on-top-is-a-metal-square-with-mit-etched-into-it.jpg?id=29281587&width=2000&height=1500&coordinates=166%2C0%2C167%2C0"/><br/><br/><p>Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality.</p><p>IBM has adopted the superconducting qubit road map of <a href="https://spectrum.ieee.org/ibms-envisons-the-road-to-quantum-computing-like-an-apollo-mission" target="_self">reaching a 1,121-qubit processor by 2023</a>, leading to the expectation that 1,000 qubits with today’s qubit form factor are feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.</p><p>Now researchers at <a href="https://www.nature.com/articles/s41563-021-01187-w" rel="noopener noreferrer" target="_blank">MIT have been able to both reduce the size of the qubits</a> and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.</p><p>“We are addressing both qubit miniaturization and quality,” said <a href="https://equs.mit.edu/william-d-oliver/" rel="noopener noreferrer" target="_blank">William Oliver</a>, the director for the <a href="https://cqe.mit.edu/" target="_blank">Center for Quantum Engineering</a> at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient; they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing.
They must go hand in hand.”</p><p>The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.</p><p>Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-resized-container rm-resized-container-25 rm-float-left" data-rm-resized-container="25%" style="float: left;">
<img alt="Golden dilution refrigerator hanging vertically" class="rm-shortcode rm-resized-image" data-rm-shortcode-id="694399af8a1c345e51a695ff73909eda" data-rm-shortcode-name="rebelmouse-image" id="6c615" loading="lazy" src="https://spectrum.ieee.org/media-library/golden-dilution-refrigerator-hanging-vertically.jpg?id=29281593&width=980" style="max-width: 100%"/>
<small class="image-media media-caption" placeholder="Add Photo Caption..." style="max-width: 100%;">Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..." style="max-width: 100%;">Nathan Fiske/MIT</small></p><p>In that environment, the insulating materials available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects, making them too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.</p><p>As a result, the intrinsic silicon substrate below the plates and, to a smaller degree, the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.</p><p>In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.</p><p>“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author <a href="https://equs.mit.edu/joel-wang/" rel="noopener noreferrer" target="_blank">Joel Wang</a>, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics. </p><p>On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide.
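To see why a stacked dielectric allows a much smaller footprint than the 100-by-100-micrometer coplanar plates, a back-of-the-envelope parallel-plate estimate is enough. The permittivity, thickness, and target capacitance below are assumed round numbers, not values from the MIT paper:

```python
# Rough parallel-plate estimate, C = eps0 * eps_r * A / d.
# Every number below is an illustrative assumption, not a value
# reported by the MIT team.
eps0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 3.0           # assumed out-of-plane relative permittivity of hBN
d = 5e-9              # assumed dielectric thickness (a few nm of hBN), m
C_target = 100e-15    # assumed qubit shunt capacitance, F (order of magnitude)

area = C_target * d / (eps0 * eps_r)   # required plate area, m^2
side_um = area**0.5 * 1e6              # side of a square plate, micrometers

coplanar_side_um = 100.0               # lateral plate size cited in the article
shrink = coplanar_side_um**2 / side_um**2
print(f"parallel-plate side ~{side_um:.1f} um, ~{shrink:.0f}x less plate area")
```

Under these assumed numbers, a square plate a few micrometers on a side suffices, consistent with the order-of-magnitude density gain the article describes.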
One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.</p><p>While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.</p><p>“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”</p><p>This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.</p><p>“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.</p><p>Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits.
He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.</p>]]></description><pubDate>Mon, 07 Feb 2022 16:12:05 +0000</pubDate><guid>https://spectrum.ieee.org/2d-hbn-qubit</guid><category>Quantum-computing</category><category>2d-materials</category><category>Ibm</category><category>Qubits</category><category>Hexagonal-boron-nitride</category><category>Superconducting-qubits</category><category>Mit</category><dc:creator>Dexter Johnson</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-golden-square-package-holds-a-small-processor-sitting-on-top-is-a-metal-square-with-mit-etched-into-it.jpg?id=29281587&amp;width=980"></media:content></item></channel></rss>