<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>IEEE Spectrum</title><link>https://spectrum.ieee.org/</link><description>IEEE Spectrum</description><atom:link href="https://spectrum.ieee.org/feeds/feed.rss" rel="self"></atom:link><language>en-us</language><lastBuildDate>Fri, 03 Apr 2026 18:50:35 -0000</lastBuildDate><image><url>https://spectrum.ieee.org/media-library/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNjg4NDUyMC9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTgyNjE0MzQzOX0.N7fHdky-KEYicEarB5Y-YGrry7baoW61oxUszI23GV4/image.png?width=210</url><link>https://spectrum.ieee.org/</link><title>IEEE Spectrum</title></image><item><title>Video Friday: Digit Learns to Dance—Virtually Overnight</title><link>https://spectrum.ieee.org/video-humanoid-dancing</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/bipedal-teal-robot-practices-side-to-side-dance-move-with-arm-movement.gif?id=65460048&width=1200&height=600&coordinates=0%2C79%2C0%2C80"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://2026.ieee-icra.org/">ICRA 2026</a>: 1–5 June 2026, VIENNA</h5><h5><a href="https://roboticsconference.org/">RSS 2026</a>: 13–17 July 2026, SYDNEY</h5><h5><a href="https://mrs.fel.cvut.cz/summer-school-2026/">Summer School on Multi-Robot Systems</a>: 29 July–4 August 2026, PRAGUE</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="pc-n6aciusu"><em>Getting Digit to dance takes more than putting on some fancy shoes—our AI Team can teach Digit new whole-body control capabilities overnight. 
Using raw motion data from mocap, animation, and teleop methods, Digit gets new skills through sim-to-real reinforcement training.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="4477bcbaf1f5072afe88c2c0015eebd1" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Pc-n6ACIuSU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.agilityrobotics.com/">Agility</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="sy2xyrmv44y"><em>We’ve created GEN-1, our latest milestone in scaling robot learning. We believe it to be the first general-purpose AI model that crosses a new performance threshold: mastery of simple physical tasks. It improves average success rates to 99% on tasks where previous models achieve 64%, completes tasks roughly 3x faster than state of the art, and requires only 1 hour of robot data for each of these results. 
GEN-1 unlocks commercial viability across a broad range of applications—and while it cannot solve all tasks today, it is a significant step towards our mission of creating generalist intelligence for the physical world.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bbbeecb0e15f3b78f50b3ebf230ecf33" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/SY2xyrmV44Y?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://generalistai.com/blog/apr-02-2026-GEN-1">Generalist</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="pn_bj5-qyw8"><em>Unitree open-sources UnifoLM-WBT-Dataset—high-quality real-world humanoid robot <a data-linked-post="2650273084" href="https://spectrum.ieee.org/mit-humanoid-robot-teleoperation-dynamic-tasks" target="_blank">whole-body teleoperation</a> (WBT) dataset for open environments. Publicly available since March 5, 2026, the dataset will continue to receive high-frequency rolling updates. 
It aims to establish the most comprehensive real-world humanoid robot dataset in terms of scenario coverage, task complexity, and manipulation diversity.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bd19da6e3dfeb2ede20007b534d1b9a6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/pN_bj5-QyW8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://huggingface.co/collections/unitreerobotics/unifolm-wbt-dataset">Hugging Face</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="79mr-_-a9js"><em>Autonomous mobile robots operating in human-shared indoor environments often require paths that reflect human spatial intentions, such as avoiding interference with pedestrian flow or maintaining comfortable clearance. This paper presents MRReP, a Mixed Reality-based interface that enables users to draw a Hand-drawn Reference Path (HRP) directly on the physical floor using hand gestures.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="783457e452248043a5ec6e2898ae5289" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/79mR-_-a9js?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://mertcookimg.github.io/mrrep/">MRReP</a> ]</p><p>Thanks, Masato!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="97qialc5hnm"><em>Eye contact, even momentarily between strangers, plays a pivotal role in fostering human connection, promoting happiness, and enhancing belonging. 
Through autonomous navigation and adaptive mirror control, Mirrorbot facilitates serendipitous, nonverbal interactions by dynamically transitioning reflections from self-focused to mutual recognition, sparking eye contact, shared awareness, and playful engagement.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="232f93e3a45a2e11d81366bb7ed95286" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/97qIaLC5hNM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://arl.human.cornell.edu/research-MirrorBot.html">ARL</a> ] via [ <a href="https://news.cornell.edu/stories/2026/04/mirrorbot-fostering-human-connection">Cornell University</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="jya06ffonyg"><em>Experience PAL Robotics’ new teleoperation system for TIAGo Pro, the AI-ready mobile manipulator designed for advanced research. This real-time VR teleoperation setup allows precise control of TIAGo Pro’s dual arms in Cartesian space, ideal for remote manipulation, AI data collection, and robot learning.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="86699af54f2bfd064590b0cd59aa3f8c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/jya06FFONyg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://pal-robotics.com/robot/tiago-pro/">PAL Robotics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="t52sq8gk5ks">Utter brilliance from Robust AI. 
No notes.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="71e7d47e220a5b61b914c1491f1df3dc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/T52SQ8Gk5Ks?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.robust.ai/">Robust AI</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="w8lqu8dkvp4"><em>Come along with our Senior Test Engineer, Nick L., as he takes us on a tour of the <a data-linked-post="2650277831" href="https://spectrum.ieee.org/qa-irobot-roomba-i7" target="_blank">Home Test Labs</a> inside the iRobot HQ.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="56a753f2b7e0640f199e35246a22843f" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/W8lQU8dKvP4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.irobot.com/en_US/our-story.html">iRobot</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="gjukjrwjpxg"><em>By automating the final “magic 5%” of production—the precise trimming of swim goggles’ silicone gaskets based on individual face scans—UR cobots allow THEMAGIC5 to deliver affordable, custom-fit goggles, enabling the company to scale from a Kickstarter sensation to selling over 400,000 goggles worldwide.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="76ebeda03bf930b9cd576a8e870f8dad" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" 
src="https://www.youtube.com/embed/GJukJRWjPxg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.universal-robots.com/case-stories/non-stop-robot-precision-for-7-years-cobots-deliver-the-last-magic-5-in-swim-goggle-production/">Universal Robots</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="x16ht1erjhk"><em>Sanctuary AI has once again demonstrated its industry-leading approach to training dexterous manipulation policies for its advanced hydraulic hands. In this video, their proprietary hydraulic hand autonomously manipulates a lettered cube, continuously reorienting it to match a specified goal (displayed in the bottom-left corner of the video).</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ad1d77f7ce4f331c7e74b0b779ff6cae" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/X16Ht1ERjHk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.sanctuary.ai/">Sanctuary AI</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="r3toz2pgppy"><em>China’s Yuxing 3-06 commercial experimental satellite, the first of its kind to be equipped with a flexible robotic arm, has recently completed an in-orbit refueling test and verification of key technologies. 
The test paves the way for Yuxing 3-06, dubbed a “space refueling station,” to refuel other satellites in orbit, manage space debris, and provide other in-orbit services.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="eaf9d2765bb1e0ebff60f038ccba42fd" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/R3TOZ2PgPPY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://mp.weixin.qq.com/s/1c-9aNwuXv_p-VhojMkwwA">Sanyuan Aerospace</a> ] via [ <a href="https://spacenews.com/chinese-startup-tests-flexible-robotic-arm-in-space-for-on-orbit-servicing/">Space News</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="z4poalprrhe"><em>This is a demonstration of natural walking, whole-body teleoperation, and motion tracking with our custom-built humanoid robot. The control policies are trained using large-scale parallel reinforcement learning (RL). 
By deploying robust policies learned in a physics simulator onto the real hardware, we achieve dynamic and stable whole-body motions.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="703bacdcb0167fb3aa9bfe36e1da07ac" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/z4POaLPRRhE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://robotics.tokyo/">Tokyo Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="5olcwku7l9u"><em>Faced with aging railway infrastructure, a shrinking workforce and rising construction costs, Japan Railway West asked construction innovator Serendix to replace an old wooden building at its Hatsushima railway station using its 3D printing technology. An ABB robot enabled the company to assemble the new building in a single night ready for the first train service the next day.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="031eec5b200f86cdad72129d9a002cfc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/5olcWkU7l9U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.abb.com/global/en/news/134689">ABB</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="1k1phiqcfty"><em>Humanoid, SAP, and Martur Fompak team up to test humanoid robots in automotive manufacturing logistics. 
This joint proof of concept explores how robots can streamline operations, improve efficiency, and shape the future of smart factories.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="cc54aa14687108db3bc231b8cc456fea" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/1K1phiQCftY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://thehumanoid.ai/">Humanoid</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="oqglmefwbt8">This MIT Robotics Seminar is from Dario Floreano at EPFL, on “Avian Inspired Drones.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="7013e7fe97df52eb328681b647c9fddc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/oqglMEFWBt8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://robotics.mit.edu/robotics-seminar/">MIT</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="etk5es0jvm4">This MIT Robotics Seminar is from Ken Goldberg at UC Berkeley: “Good Old-Fashioned Engineering Can Close the 100,000 Year ‘Data Gap’ in Robotics.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="710bc514cbab6092dc5f439cf03127c6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/EtK5es0jVM4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://robotics.mit.edu/robotics-seminar/">MIT</a> ]</p><div 
class="horizontal-rule"></div>]]></description><pubDate>Fri, 03 Apr 2026 16:30:01 +0000</pubDate><guid>https://spectrum.ieee.org/video-humanoid-dancing</guid><category>Humanoid-robots</category><category>Video-friday</category><category>Robot-ai</category><category>Human-robot-interaction</category><category>Teleoperation</category><category>Industrial-robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/gif" url="https://spectrum.ieee.org/media-library/bipedal-teal-robot-practices-side-to-side-dance-move-with-arm-movement.gif?id=65460048&amp;width=980"></media:content></item><item><title>ENIAC’s Architects Wove Stories Through Computing</title><link>https://spectrum.ieee.org/eniac-80th-anniversary-weaving</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/close-up-black-and-white-1940-s-image-of-a-woman-holding-a-metallic-brick-like-controller-with-large-knobs.jpg?id=65453792&width=1200&height=600&coordinates=0%2C250%2C0%2C250"/><br/><br/><p><em><em>This year marks the </em></em><a href="https://spectrum.ieee.org/eniac-80-ieee-milestone" target="_self"><em><em>80th anniversary of ENIAC</em></em></a><em><em>, the first general-purpose digital computer. The computer was built during World War II to speed up ballistics calculations, but its contributions to computing extend well beyond military applications. </em></em></p><p><em><em>Two of ENIAC’s key architects—John W. Mauchly, its co-inventor, and Kathleen “Kay” McNulty, one of the <a href="https://spectrum.ieee.org/eniac-woman-programmers" target="_blank">six original programmers</a>—married a few years after its completion and raised seven children together. Mauchly and McNulty’s grandchild Naomi Most </em></em><a href="https://youtu.be/XYEVmqGhVxo?si=fseDLKFz1W8meWR6&t=4515" rel="noopener noreferrer" target="_blank"><em><em>delivered a talk</em></em></a><em><em> as part of a celebration in honor of ENIAC’s anniversary on 15 February, which was held online and in-person at the American Helicopter Museum in West Chester, Pa. The following is adapted from that presentation.</em></em></p><p class="ieee-inbody-related">RELATED: <a href="https://spectrum.ieee.org/eniac-80-ieee-milestone" target="_blank">ENIAC, the First General-Purpose Digital Computer, Turns 80</a></p><p>There was a library at my grandparents’ farmhouse that felt like it went on forever. September light through the windows, beech leaves rustling outside on the stone porch, the sounds of cousins and aunts and uncles somewhere in the house. 
And in the corner of that library, an IBM personal computer.</p><p>When I spent summers there as a child, I didn’t yet know that the computer was closely tied to my family’s story.</p><p>My grandparents are known for their contributions to creating the Electronic Numerical Integrator and Computer, or ENIAC. But both were interested in more than just crunching numbers: My grandfather wanted to predict the weather. My grandmother wanted to be a good storyteller. </p><p>In Irish, the first language my grandmother Kathleen “Kay” McNulty ever spoke, a word existed to describe both of these impulses: <em><em>ríomh</em></em>.</p><p>I began to learn the Irish language myself five years ago, and I was struck by how certain words and phrases had multiple meanings. According to renowned Irish cultural historian Manchán Magan—from whom I took lessons—the word <em><em>ríomh</em></em> has at different times been used to mean to compute, but also <a href="https://www.making.ie/stories/irish-words-weaving" rel="noopener noreferrer" target="_blank">to weave, to narrate, or to compose a poem</a>. That one word can tell the story of ENIAC, a machine with wires woven like thread that was built to compute, make predictions, and search for a signal in the noise. </p><h2>John Mauchly’s Weather-Prediction Ambitions</h2><p>Before working on ENIAC, John Mauchly <a href="https://fi.edu/en/news/case-files-john-w-mauchly-and-j-presper-eckert" rel="noopener noreferrer" target="_blank">spent years collecting rainfall data</a> across the United States. His favorite pastime was meteorology, and he wanted to find patterns in storm systems to predict the weather.</p><p>The Army, however, funded ENIAC to make simpler predictions: calculating ballistic trajectory tables. Start there, co-inventors J. 
Presper Eckert and Mauchly realized, and perhaps the weather would soon be computable.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Black and white 1960s image of two white men in suits looking at a wall of computer controls." class="rm-shortcode" data-rm-shortcode-id="7872d50df109149c936e400909defc38" data-rm-shortcode-name="rebelmouse-image" id="75108" loading="lazy" src="https://spectrum.ieee.org/media-library/black-and-white-1960s-image-of-two-white-men-in-suits-looking-at-a-wall-of-computer-controls.jpg?id=65428294&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Co-inventors John Mauchly [left] and J. Presper Eckert look at a portion of ENIAC on 25 November 1966. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Hulton Archive/Getty Images</small></p><p>Weather is a system unfolding through time, and a model of a storm is a story about how that system might unfold. There’s an old Irish saying related to this idea: <a href="https://daltai.com/is-maith-an-scealai-an-aimsir/" target="_blank"><em><em>Is maith an scéalaí an aimsir</em></em></a><em><em>.</em></em> Literally, “weather is a good storyteller.” But <em><em>aimsir</em></em> also means time. So the usual translation of this phrase into English becomes “time will tell.”</p><p>Mauchly wanted to <em><em>ríomh an aimsire</em></em>—to weave the weather into pattern, to compute the storm, to narrate the chaos. He realized that complex systems don’t reveal their full purpose at conception. 
They reveal it through <em><em>aimsir</em></em>—through weather, through time, through use.</p><h2>ENIAC’s First Programmers Were Weavers</h2><p>Kathleen “Kay” McNulty was born on 12 February 1921, in Creeslough, Ireland, on the night <a href="https://en.wikipedia.org/wiki/James_McNulty_(Irish_activist)" target="_blank">her father</a>—an IRA training officer—was arrested and imprisoned in Derry Gaol.</p><p>Family oral history holds that her people were weavers. She spoke only Irish until her family reached Philadelphia when she was 4 years old, entering American school the following year knowing virtually no English. She graduated in 1942 from Chestnut Hill College with a mathematics degree, was recruited to compute artillery firing tables by hand for the U.S. Army, and was then selected—along with <a href="https://spectrum.ieee.org/the-women-behind-eniac" target="_blank">five other women</a>—to program ENIAC.</p><p>They had no manual. They had only blueprints.</p><p>McNulty and her colleagues learned ENIAC and its quirks the way you learn a loom: by touch, by memory, by routing threads of electricity into patterns. They developed embodied knowledge the designers could only approximate. They could narrow a malfunction to a specific failed vacuum tube before any technician could locate it.</p><p>McNulty and Mauchly are also credited with conceiving the subroutine, the sequence of instructions that can be repeatedly recalled to perform a task, now essential in any programming. The subroutine was not in ENIAC’s blueprints, nor in the funding proposal. The concept emerged as highly determined people extended their imagination into the machine’s affordances.</p><p>The engineers designed the loom. 
Weavers discovered its true capabilities.</p><p>In 1950, four years after ENIAC was switched on, Mauchly’s dream was realized when ENIAC was used in the <a href="https://www.guinnessworldrecords.com/world-records/775520-first-computer-assisted-weather-forecast" target="_blank">world’s first computer-assisted weather forecast</a>. That was made possible after Klara von Neumann and Nick Metropolis reassembled and upgraded the ENIAC with a small amount of digital program memory. The programmers who transformed the math into operational code for the ENIAC were Norma Gilbarg, Ellen-Kristine Eliassen, and Margaret Smagorinsky. Their names are not as well-known as they should be.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Black and white 1940s image of three women operating a differential analyser in a basement." class="rm-shortcode" data-rm-shortcode-id="298168a77d38fd343eeb7d4bbfc219a7" data-rm-shortcode-name="rebelmouse-image" id="aacec" loading="lazy" src="https://spectrum.ieee.org/media-library/black-and-white-1940s-image-of-three-women-operating-a-differential-analyser-in-a-basement.jpg?id=65453828&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Before programming ENIAC, Kay McNulty [left] was recruited by the U.S. Army to compute artillery firing tables. Here, she and two other women, Alyse Snyder [center] and Sis Stump, operate a mechanical analog computer designed to solve differential equations in the basement of the University of Pennsylvania’s Moore School of Electrical Engineering.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">University of Pennsylvania</small></p><h2>Kay McNulty, Family Storyteller</h2><p>Kay married John Mauchly in 1948, describing him as “the greatest delight of my life. He was so intelligent and had so many ideas.... 
He was not only lovable, he was loving.” She spent the rest of her life ensuring he, Eckert, and the ENIAC programmers would be recognized.</p><p>When she died in 2006, I came to her funeral in shock, not fully knowing what I’d lost. As she drifted away, it was said, she had been reciting her prayers in Irish. Word of this quickly made its way to Creeslough, in County Donegal, and awaited me when I visited to honor her memory with the <a href="https://www.youtube.com/watch?v=zbkk2RJMW9g" target="_blank">dedication of a plaque</a> right there in the center of town.</p><p>In <a href="https://mathshistory.st-andrews.ac.uk/Extras/Mauchly_Antonelli_story" target="_blank">her own memoir</a>, she wrote: “If I am remembered at all, I would like to be remembered as my family storyteller.”</p><p>In Irish, the word for computer is <em><em>ríomhaire</em></em>. One who ríomhs. One who weaves, computes, and tells. My grandfather wanted to tell the story of the weather through computing. My grandmother wanted to be remembered as a storyteller. The language of her childhood already had a word that contained both of those ambitions.</p><h2>Computers as Narrative Engines</h2><p>When it was built, ENIAC looked like the back room of a textile production house. Panels. Switchboards. A room full of wires. Thread.</p><p>Thread does not tell you what it will become. We tend to think of computing as calculation—discrete and deterministic. But a model is a structured story about how something behaves.</p><p>Weather models, ballistic tables, economic forecasts, neural networks: These are all narrative engines, systems that take raw inputs and produce accounts of how the world might unfold. In complex systems, when parts are woven together through use, new structures arise that no one specified in advance.</p><p>Like ENIAC, the machines we are building now—the large models, the autonomous systems—are not merely calculators. 
They are looms.</p><p>Their most important properties will not be specified in advance. They will emerge through use, through the people who learn how to weave with them.</p><p>Through imagination.</p><p>Through <em><em>aimsir</em></em>.</p>]]></description><pubDate>Fri, 03 Apr 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/eniac-80th-anniversary-weaving</guid><category>Eniac</category><category>Weather-prediction</category><category>Computer-history</category><category>Ireland</category><dc:creator>Naomi Most</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/close-up-black-and-white-1940-s-image-of-a-woman-holding-a-metallic-brick-like-controller-with-large-knobs.jpg?id=65453792&amp;width=980"></media:content></item><item><title>Young Professional’s AI Tool Spots Mental Health Conditions</title><link>https://spectrum.ieee.org/abhishek-appaji-ai-diagnostic-tool</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-adult-indian-man-using-a-machine-to-capture-images-of-an-adult-womans-retina.jpg?id=65452299&width=1200&height=600&coordinates=0%2C312%2C0%2C313"/><br/><br/><p><a href="https://www.abhishekappaji.com/" rel="noopener noreferrer" target="_blank">Abhishek Appaji</a> has committed his career to bringing lifesaving technology to underresourced communities. The IEEE senior member weaves together artificial intelligence, biomedical engineering, deep learning, and neuroscience to make doctors’ jobs easier and to improve patient outcomes.</p><p>“The intersection of these fields is where the most impactful breakthroughs in diagnostic precision occur,” says Appaji, an associate professor of medical electronics engineering at the <a href="https://www.bmsce.ac.in/" target="_blank">B.M.S. College of Engineering</a>, in Bengaluru, India.</p><h3>Abhishek Appaji</h3><br/><p><strong>Employer </strong></p><p><strong></strong>B.M.S. College of Engineering, in Bengaluru, India</p><p><strong>Job title</strong></p><p><strong></strong>Associate professor of medical electronics engineering</p><p><strong>Member grade </strong></p><p><strong></strong>IEEE senior member</p><p><strong>Alma maters </strong></p><p><strong></strong>B.M.S. 
College of Engineering; University Visvesvaraya College of Engineering, in Bengaluru; Maastricht University, in the Netherlands</p><p>Many of his inventions have been deployed in remote areas of India, providing physicians with quality diagnostic tools, including an AI-powered machine that can scan retinas to detect medical conditions and a smart bed that continuously monitors a patient’s vital signs.</p><p>An active volunteer with the <a href="https://yp.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Young Professionals</a> <a href="https://yp.ieeebangalore.org/" rel="noopener noreferrer" target="_blank">Bangalore Section</a>, he has launched professional networking events, technology workshops, a mentorship program, and other initiatives.</p><p>For his “contributions to accessible AI-driven health care solutions and leadership in empowering young professionals,” Appaji is the recipient of this year’s <a href="https://corporate-awards.ieee.org/award/ieee-theodore-w-hissey-outstanding-young-professional-award/" rel="noopener noreferrer" target="_blank">IEEE Theodore W. Hissey Outstanding Young Professional Award</a>. The honor is sponsored by the <a href="https://ieeephotonics.org/" rel="noopener noreferrer" target="_blank">IEEE Photonics</a> and <a href="https://ieee-pes.org/" rel="noopener noreferrer" target="_blank">Power & Energy</a> societies as well as IEEE Young Professionals. The award is scheduled to be presented this month during the <a href="https://corporate-awards.ieee.org/event/laureate-forum-honors-ceremony-gala/" rel="noopener noreferrer" target="_blank">IEEE Honors Ceremony</a> in New York City.</p><p>“This award represents a significant milestone in my career,” Appaji says. 
“It validates my core belief that our success as engineers is not solely measured by research outcomes or publications but by the tangible impact we have on lives through accessible technology and the quality of the next generation of leaders we empower.”</p><h2>Developing a blood glucose measurement device</h2><p>After earning a bachelor’s degree in engineering from B.M.S. in 2010, he joined the school as a lecturer in its medical electronics engineering department. At the same time, he pursued a master’s degree in bioinformatics at the <a href="https://uvce.ac.in/" rel="noopener noreferrer" target="_blank">University Visvesvaraya College of Engineering</a>, also in Bengaluru. He graduated in 2013 and continued to teach at B.M.S.</p><p>Four years later, Appaji signed up for the <a href="https://openlearning.mit.edu/courses-programs/mit-bootcamps" rel="noopener noreferrer" target="_blank">MIT Global Entrepreneurship Bootcamp</a>, a two-week intensive hybrid program that includes webinars, online courses, and a five-day stay at MIT. It’s designed to give teams of aspiring entrepreneurs, innovators, and early-stage founders the structured mindset, tools, and frameworks they need to succeed.</p><p>Appaji says he discovered the program while researching opportunities in innovation.</p><p>“I had the technical expertise, but I needed a structured framework to transition my research from the laboratory to the market,” he says.</p><p>During the MIT boot camp, he and a team of four other participants were tasked with tackling a complex health care challenge. They developed a noninvasive blood glucose measurement device to manage gestational diabetes—a condition that causes high blood sugar and insulin resistance during pregnancy. 
When the program ended, Appaji and two of his Australia-based teammates continued their collaboration by founding <a href="https://au.linkedin.com/company/glucotekinc" rel="noopener noreferrer" target="_blank">Glucotek</a> in Brisbane, Australia.</p><p>Inspired to continue his research in health care technology, Appaji pursued a doctorate in mental health and neurosciences at <a href="https://www.maastrichtuniversity.nl/" rel="noopener noreferrer" target="_blank">Maastricht University</a>, in the Netherlands.</p><p>His <a href="https://cris.maastrichtuniversity.nl/en/publications/retinal-vascular-features-as-a-biomarker-for-psychiatric-disorder/" rel="noopener noreferrer" target="_blank">thesis</a> focused on computational methods to identify retinal vascular patterns.</p><p class="pull-quote">“The patterns we analyze—including the curvature of the vessels, the angles at which they branch out, and their dimensions—reveal the health of the microvascular system,” he says. “With conditions like schizophrenia and bipolar disorder, microvascular changes mirror neurovascular changes in the brain.”</p><p><span>“My journey has shown me that IEEE is much more than a professional society; it is a global platform that allows me to collaborate with a diverse network of experts to solve local humanitarian challenges.”</span></p><p>Examining and measuring the retinal vascular system offers physicians a noninvasive way to examine neural changes, which can be biomarkers for psychiatric illnesses, he says.</p><p>To bring his idea to life, he collaborated with an ophthalmologist, a psychiatrist, and colleagues from his engineering school to develop a screening device. They also created and trained the AI models that analyze retinal images.</p><p>Ideas from his thesis led to the creation of the Smart Eye Kiosk, an AI-powered tool that scans the network of small veins that deliver blood to the inner retina. The tool monitors stress levels and mental health. 
It also screens for basic eye diseases such as diabetic retinopathy, the damage to retinal blood vessels caused by high blood sugar.</p><p>Retinal images can also reveal physiological changes in the brain associated with psychiatric disorders such as schizophrenia and bipolar disorder, Appaji says. The kiosk uses AI models to analyze measurements of the vasculature network, such as vessel thickness, which can be biomarkers for psychiatric conditions. Since mental illnesses can be linked to genetics, relatives of patients with schizophrenia and bipolar disorder were also invited to participate in a study funded by the <a href="https://dst.gov.in/cognitive-science-research-initiative-csri" target="_blank">Cognitive Science Research Initiative</a> of India’s Department of Science & Technology. The clinical data from this study can pave the way for earlier, more accurate diagnoses.</p><p>“The biological basis for this is fascinating,” Appaji says. “The retina is the only place in the human body where the central nervous system and the vascular system can be visualized directly and noninvasively. Anatomically, the retina is an extension of the posterior part of the brain. Therefore, physiological changes in the brain are often reflected in the eyes.”</p><p>The kiosk was developed in collaboration with <a href="https://www.ttsh.com.sg/" target="_blank">Tan Tock Seng Hospital</a> and <a href="https://www.ntu.edu.sg/" target="_blank">Nanyang Technological University</a>, with funding from the <a href="https://www.chi.sg/platformprogrammes/ourfundingprogrammes/ntfhip/" rel="noopener noreferrer" target="_blank">Ng Teng Fong Healthcare Innovation Program</a>.</p><p>He earned his Ph.D. from Maastricht in 2020 and received the Best Thesis Award from the university’s <a href="https://www.maastrichtuniversity.nl/research/mental-health-and-neuroscience-research-institute" rel="noopener noreferrer" target="_blank">Mental Health and Neuroscience Research Institute</a>. 
Appaji credits his time at the school for his multidisciplinary approach to developing medical devices.</p><p>“Having the perspectives of mentors from diverse fields was essential to help me move my research beyond theory into a data-driven diagnostic tool,” he says.</p><p>He was then named institutional coordinator of R&D at B.M.S. and was later promoted to be its head.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="An adult Indian man looking at a rectangular device in his hand, labeled “dozee”." class="rm-shortcode" data-rm-shortcode-id="bc22f80982f03961c7b5f5fd684014f2" data-rm-shortcode-name="rebelmouse-image" id="40db1" loading="lazy" src="https://spectrum.ieee.org/media-library/an-adult-indian-man-looking-at-a-rectangular-device-in-his-hand-labeled-u201cdozee-u201d.jpg?id=65452303&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Abhishek Appaji working on a smart bed sensor that continuously monitors a patient’s vital signs without the use of wires or wearable sensors.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Abhishek Appaji</small></p><h2>A wireless smart bed to monitor vital signs</h2><p>Appaji continues to develop technologies for patients who need them most. “I feel a deep need to bridge this gap and ensure innovations have a tangible impact on society,” he says. In addition to the Smart Eye Kiosk, he improved the performance of the sensors in smart beds that continuously monitor a patient’s vital signs without the use of wires or wearable sensors. The beds help hospital staff check on their patients in a noninvasive way.</p><p>The project was done in collaboration with health AI company <a href="https://www.dozeehealth.ai/" target="_blank">Dozee (Turtle Shell Technologies)</a> in Bengaluru. 
The system measures mechanical microvibrations produced by the body in response to the ejection of blood into the aorta, which occurs with each heartbeat. A thin, industrial-grade sensor sheet is placed underneath the mattress. Additional funding is being provided by India’s <a href="https://dst.gov.in" rel="noopener noreferrer" target="_blank">Department of Science and Technology</a>.</p><p>“These sensors are incredibly sensitive,” Appaji says. “They pick up minute mechanical tremors through the mattress material.”</p><p>The sensors detect the force of the patient’s heartbeat and the expansion and contraction of their chest during respiration. The vibrations are converted into electrical signals and analyzed using deep learning algorithms developed by Appaji and his team at the university in collaboration with Dozee.</p><p>The technology is used in more than 200 hospitals throughout India and in thousands of households, he says.</p><h2>Mentoring budding entrepreneurs</h2><p>Appaji is also executive director of the <a href="https://bigfoundation.org.in/" rel="noopener noreferrer" target="_blank">BMSreenivasiah Innovators Guild Foundation</a>, which is dedicated to nurturing entrepreneurial talent among students and faculty across the BMS Group of Institutions. A not-for-profit company promoted by the BMS Education Trust, the BIG Foundation provides a structured ecosystem for innovation, incubation, and startup growth.</p><p>There, Appaji mentors budding entrepreneurs, offering advice on business plans, product pitches, marketing strategies, and licensing. 
Participants are students and faculty members.</p><p>The foundation has incubated more than 10 ventures, according to Appaji.</p><p>“The majority are centered on health care applications,” he says, “and have successfully secured backing from investors and seed funds.”</p><h2>Taking IEEE’s mission to heart</h2><p>Appaji was introduced to IEEE as an undergraduate when one of his professors encouraged him to volunteer for a conference sponsored by the <a href="https://www.embs.org/" rel="noopener noreferrer" target="_blank">IEEE Engineering in Medicine and Biology Society</a>. He transcribed the seminars for session chairs, assisted with managing the talks, and helped answer attendees’ questions.</p><p>“That experience was transformative,” he recalls. “I was amazed to find myself in the same room with the speakers and scientists who had authored the very textbooks I was studying.</p><p>“It was then that I realized IEEE is far more than just technology and volunteering; it is a global platform for high-level networking with world-class scientists and technologists.”</p><p>Appaji has served in several IEEE leadership positions, including 2018–2019 chair of the Young Professionals Bangalore Section. 
He is now treasurer of the <a href="https://ieee-edusociety.org/home" rel="noopener noreferrer" target="_blank">IEEE Education Society</a>, chair of the <a href="https://ieeecsbangalore.org/" rel="noopener noreferrer" target="_blank">IEEE Computer Society Bangalore Chapter</a>, and a member of the steering committee of <a href="https://ieee-dataport.org/" rel="noopener noreferrer" target="_blank">IEEE DataPort</a>, and he serves on the IEEE <a href="https://www.ieee.org/communities/geographic-activities" rel="noopener noreferrer" target="_blank">Member and Geographic Activities</a> and <a href="https://ea.ieee.org/ea-programs" rel="noopener noreferrer" target="_blank">IEEE Educational Activities</a> boards.</p><p>“What motivates me to remain active within IEEE is the profound alignment between my personal goals and the organizational mission of advancing technology for the benefit of humanity,” he says. “My journey has shown me that IEEE is much more than a professional society; it is a global platform that allows me to collaborate with a diverse network of experts to solve local humanitarian challenges.”</p><p>The organization has helped fund some of Appaji’s lifesaving work. During the <a href="https://spectrum.ieee.org/tag/covid-19" target="_self">COVID-19 pandemic</a>, he received a grant from the <a href="https://ieeeht.org/" rel="noopener noreferrer" target="_blank">IEEE Humanitarian Technologies Board</a> and <a href="https://www.ieeer10.org/" rel="noopener noreferrer" target="_blank">Region 10</a> to develop <a href="https://spectrum.ieee.org/ieee-sections-receive-grants-for-their-innovative-ways-of-helping-to-fight-the-coronavirus" target="_self">3D-printed protective equipment</a> for people in Bengaluru’s underserved communities. The virus spread quickly in the high-density areas, where social distancing was nearly impossible. 
The kits, which included a door opener to avoid high-touch surfaces and an elbow-operated soap dispenser, were sent to nearly 500 households.</p><p>“This work remains one of my most meaningful contributions to humanitarian technology,” Appaji says, “demonstrating how engineering can be rapidly deployed to protect vulnerable populations during a global crisis.”</p><p>He advises younger IEEE members: “Say yes to taking on roles of responsibility. Don’t wait for a formal title to lead; instead, start by volunteering to do small, manageable tasks within your local chapter or section.”</p><p>“The networking opportunities and leadership skills you gain through these early responsibilities will shape your professional career far more than any textbook ever could.”</p>]]></description><pubDate>Thu, 02 Apr 2026 18:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/abhishek-appaji-ai-diagnostic-tool</guid><category>Ieee-member-news</category><category>Health-care</category><category>Biomedical</category><category>Ieee-young-professionals</category><category>Ieee-awards</category><category>Type-ti</category><dc:creator>Amanda Davis</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-adult-indian-man-using-a-machine-to-capture-images-of-an-adult-womans-retina.jpg?id=65452299&amp;width=980"></media:content></item><item><title>What Exoskeletons Learned From One Relentless User</title><link>https://spectrum.ieee.org/exoskeleton-user-experience</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-man-wearing-a-full-body-robotic-exoskeleton-standing-on-a-city-sidewalk.png?id=65426945&width=1200&height=600&coordinates=0%2C833%2C0%2C834"/><br/><br/><p><strong>It’s easy to assume</strong> that Robert Woo was defined by the accident that took away his ability to walk.</p><p>Certainly, the day of his accident—14 December 2007—was a turning point. Woo, an architect working on the new Goldman Sachs headquarters in New York City, hadn’t attended his company’s holiday party the night before, and that morning he was the only one in the trailer that served as the construction-site office. He was bent over his laptop when, 30 floors above, a <a href="https://www.nydailynews.com/2007/12/14/west-side-crane-accident-injures-1-at-goldman-sachs-site/" target="_blank">crane’s nylon sling gave way</a>, sending about 6 tonnes of steel plummeting toward the trailer. The roof collapsed, folding Woo in half and smashing his face into his laptop, which was driven through his desk.</p><p>“I was conscious throughout the whole ordeal,” Woo remembers. “It was an out-of-body experience. I could hear myself screaming in pain. I could hear the voices of the rescue workers. I heard one firefighter say, ‘Don’t worry, we’re getting to you.’” The rescue workers hauled him out of the rubble and got him to the emergency room in 18 minutes flat; with one lung crushed and the other punctured, he wouldn’t have lasted much longer. In those frantic early moments, a doctor told him that he might be paralyzed from the neck down for the rest of his life. He remembers asking the doctors to let him die.</p><p>Woo simply couldn’t imagine how a paralyzed version of himself could continue living his life. Then 39 years old, he worked long hours and jetted around the world to supervise the construction of skyscrapers. More important, he had two young boys, ages 6 months and 2 years. 
“I couldn’t see having a life while being paralyzed from the neck down, not being able to teach my boys how to play ball,” he recalls. “What kind of life would that be?”</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="2986541a87f62bd11465a0fd835782ed" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/UNddtkBGuAs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo walks inside the Wandercraft facility in New York City using the company’s latest self-balancing exoskeleton. </small> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Nicole Millman </small> </p><p><span>But in a Manhattan showroom last May, Woo showed that he’s not defined by that accident, which left him paralyzed from the chest down but with the use of his arms. Instead, he has defined himself by how he has responded to his injury, and the new life he built after it.</span></p><p>In the showroom, Woo transferred himself from his wheelchair to an 80-kilogram (176-pound) exoskeleton suit. After strapping himself in, he manipulated a joystick in his left hand to rise from a chair and then proceeded to walk across the room on robotic legs. Woo’s steps were short but smooth, and he clanked as he walked.</p><p>This exoskeleton, from the French company <a href="https://en.wandercraft.eu/" target="_blank">Wandercraft</a>, is one of the first to let the user walk without arm braces or crutches, which most other models require to stabilize the user’s upper body. The battery-powered exoskeleton took care of both propulsion and balance; Woo just had to steer. 
The bulky apparatus had a backplate that extended above Woo’s head, a large padded collar, armrests, motorized legs, and footplates. Walking across the room, he appeared to be half man, half machine. On the other side of the showroom’s plate-glass window, on Park Avenue, a kid walking by with his family came to a dead halt on the sidewalk, staring with awe at the cyborg inside.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Person seated wearing a full lower-body robotic exoskeleton for mobility assistance" class="rm-shortcode" data-rm-shortcode-id="eeace6a9e987149ce383ccec6937a1b8" data-rm-shortcode-name="rebelmouse-image" id="73d05" loading="lazy" src="https://spectrum.ieee.org/media-library/person-seated-wearing-a-full-lower-body-robotic-exoskeleton-for-mobility-assistance.jpg?id=65427288&width=980"/></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Close-up of a hand operating the joystick and controls on a powered wheelchair armrest" class="rm-shortcode" data-rm-shortcode-id="c5dd0b296623bb32a2eb37c88ac0b5f0" data-rm-shortcode-name="rebelmouse-image" id="2d73d" loading="lazy" src="https://spectrum.ieee.org/media-library/close-up-of-a-hand-operating-the-joystick-and-controls-on-a-powered-wheelchair-armrest.jpg?id=65427286&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo prepares to walk in a Wandercraft exoskeleton; the device’s controller enables him to stand up, initiate walk mode, and choose a direction. 
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Bryan Anselm/Redux </small></p><p>The amazement on the boy’s face was reminiscent of Woo’s young sons’ reaction when they saw a photo of Woo trying out an early exoskeleton, back in 2011. “Their first comment was, ‘Oh, Daddy’s in an Iron Man suit,’” he remembers. Then they asked, “When are you going to start flying?” To which Woo replied, “Well, I’ve got to learn how to walk first.”</p><p>The title of exoskeleton superhero suits Woo. He’s as soft-spoken and mild-mannered as Clark Kent, with a smile that lights up his face. Yet the strength underneath is undeniable; he has built a new life out of sheer determination. </p><p>For 15 years, he’s been a test pilot, early adopter, and clinical-study subject for the most prominent exoskeletons under development around the world. He placed the first order for an exoskeleton that was approved for home use, and he learned what it was like to be Iron Man around the house. Throughout it all, he has given the companies detailed feedback drawn from both his architectural design skills and his user experience. He has shaped the technology from inside of it.</p><p><a href="https://people.njit.edu/profile/pal" target="_blank">Saikat Pal</a>, a researcher at the New Jersey Institute of Technology, in Newark, met Woo during clinical trials for Wandercraft’s first model. Like so many others in the field, Pal quickly recognized that Woo brought a lot to the table. “He’s a super-mega user of exoskeletons: very enthusiastic, very athletic,” Pal says. “He’s the perfect subject.”</p><p>By pushing the technology forward, Woo has paved the way for thousands of people with spinal cord injuries as well as other forms of paralysis, who are now benefiting from exoskeletons in rehab clinics and in their homes. 
“Our bionics program at Mount Sinai started with Robert Woo,” says <a href="https://profiles.mountsinai.org/angela-riccobonno" target="_blank">Angela Riccobono</a>, the director of rehabilitation neuropsychology at <a href="https://www.mountsinai.org/" target="_blank">Mount Sinai Hospital</a>, in New York City, where Woo became an outpatient after his accident. “We have a plaque that dedicates our bionics program to him.”</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="3b08ced9c1ebb53070cf467341ccabd1" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/6kIvBtYeYUs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span><small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo walks down a sidewalk in New York City in 2015 using a ReWalk exoskeleton, one of the first exoskeletons designed for use outside the rehab clinic. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Eliza Strickland</small></p><p>It’s a fitting tribute. Woo’s post-accident life has been marked by victories, frustrations, deep love, and one devastating loss, and yet he has continued to devote himself to bionics. And while his vision for exoskeletons hasn’t changed, experience has reshaped what he expects from them in his lifetime.</p><h2>Rebuilding a Life After his Spinal Cord Injury </h2><p>Long before Woo ever stood up in a robotic suit, he had developed the habits of mind that would later make him an unusually perceptive test pilot.</p><p>Woo has always been a builder, a tinkerer, a fixer. Growing up in the suburbs of Toronto, he put together model kits of battleships and airplanes without looking at the instructions. “I just put things together the way I thought it would work out,” he says. 
He trained as an architect and in 2000 joined the Toronto-based firm <a href="https://www.adamson-associates.com/" target="_blank">Adamson Associates Architects</a>, a job that soon had him traveling to Europe and Asia to work on corporate high-rises.</p><p>Adamson specializes in taking the stunning designs of visionary architects and turning them into practical buildings with elevators and bathrooms. “Most of the design architects don’t really have a clue about how to build buildings,” Woo says. He liked solving those problems; he liked reconciling beautiful designs with the stubborn reality of construction. That talent for understanding a structure from the inside and spotting the flaws would prove essential later.</p><p>After his accident, Woo had two major surgeries to stabilize his crushed spine, which required surgeons to cut through muscles and nerves that connected to his arms. For two months, he couldn’t feel or move his arms; there was a chance he never would again. Only when sensation began creeping back into his fingertips did he allow himself to imagine a different future. If he wasn’t paralyzed from the neck down, he thought, maybe more of his body could be brought back online. “My focus was to walk again,” he says.</p><p>Woo was discharged in March 2008 and went back to his New York City apartment. He was still bedridden and required around-the-clock care. He doesn’t much like to talk about this next part: By May, his then-wife had moved back to Canada and filed for divorce, asking for full custody of their two children. Woo remembers her saying, “I can’t look after three babies, and one of them for life.”</p><p>It was a dark time. Riccobono of Mount Sinai, who met Woo shortly after he became an outpatient there in 2008, recalls the despondent look on his face the first time they talked. “I wasn’t sure that he wasn’t going to take his life, to be honest,” she says. 
“He felt like he had nothing to live for.”</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="One photo shows a smiling man in an exoskeleton with his arm around a smiling woman. The other photo shows a metal plaque saying that the Rehabilitation Bionics Program was made possible by the advocacy and dedication of Robert Woo." class="rm-shortcode" data-rm-shortcode-id="24060627efe3d5ed4b5585e963e6cd34" data-rm-shortcode-name="rebelmouse-image" id="7a1d5" loading="lazy" src="https://spectrum.ieee.org/media-library/one-photo-shows-a-smiling-man-in-an-exoskeleton-with-his-arm-around-a-smiling-woman-the-other-photo-shows-a-metal-plaque-saying.jpg?id=65427290&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Angela Riccobono of Mount Sinai Hospital (left) credits Woo with jump-starting the hospital’s bionics program; a plaque in the department of rehabilitation medicine recognizes his role. </small></p><p>Yet Woo harbors no animosity toward his ex-wife. “If we hadn’t separated and gone through the custody hearing, I don’t think I would have gotten this far,” he says. To win partial custody of his children, Woo had to become independent. He had to get off narcotic pain medications, regain strength, and learn how to navigate life in a wheelchair. He had to show that he no longer needed constant nursing, and that he could take care of both himself and his boys.</p><p>There were milestones: learning how to get back into his wheelchair after a fall, learning to drive a car with hand controls, learning to manage his body as it was, not as it had been. The biggest change came when he reconnected with his high school sweetheart, a vivacious woman named Vivian Springer. She was then dividing her time between Toronto and New York City, and she had a son who was almost the same age as Woo’s two boys. 
Springer had worked in a nursing home and knew how to change the sheets without getting him out of bed; she was by then working in human resources and knew how to deal with insurance companies. “You wouldn’t believe how much stress it lifted off of me,” Woo says. Over time, they became a family.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Man using a robotic exoskeleton with support, shopping and standing with children." class="rm-shortcode" data-rm-shortcode-id="893ec3e7bbaf953f0fa9b20e639dd9a4" data-rm-shortcode-name="rebelmouse-image" id="54575" loading="lazy" src="https://spectrum.ieee.org/media-library/man-using-a-robotic-exoskeleton-with-support-shopping-and-standing-with-children.jpg?id=65427555&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo’s wife, Vivian, was trained in how to operate the device he used at home. His sons, Tristan (left) and Adrien, grew up watching their dad test exoskeletons. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Left: Lifeward; Right: Robert Woo </small></p><p>Once Woo had that foundation in place, Riccobono witnessed a profound change. “He went from focusing on ‘what I can’t do anymore’ to ‘What’s still possible? What can I do with what I have?’” At Mount Sinai, Woo remembers asking his doctor <a href="https://profiles.mountsinai.org/kristjan-t-ragnarsson" target="_blank">Kristjan Ragnarsson</a>, who was then chairman of the department of rehabilitation medicine, if he would ever walk again. “His response was, ‘Yes, you can walk again,’” Woo remembers, “‘but not the way you used to walk.’”</p><h2>First Steps in an Exoskeleton</h2><p>As soon as he had regained use of his hands, Woo started googling, looking for anything that could get him back on his feet. 
He tried rehab equipment like the <a href="https://www.sralab.org/services/lokomat" target="_blank">Lokomat</a>, which used a harness suspended above a treadmill to enable users to walk. But at the time, it required three physical therapists: one to move each leg and one to control the machine. It was a far cry from the independent strides he dreamed of.</p><p>Several years in, he learned about two companies that had built something radically different: exoskeleton suits for people with spinal cord injuries. These prototypes had motors at the knees and the hips to move the legs, with the user stabilizing their upper body with arm braces. Woo desperately wanted to try one, although the technology was still experimental and far from regulatory approval. So he took the idea to Ragnarsson, asking if Mount Sinai could bring an exoskeleton into its rehab clinic for a test drive. Ragnarsson, who’s now retired, remembers the request well. “He certainly gave us the kick in the behind to get going with the technology,” he says.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Man in robotic exoskeleton walks with canes during rehab demo as clinicians observe" class="rm-shortcode" data-rm-shortcode-id="08a494fb70ca5c5d7c0e5a3bb263b28c" data-rm-shortcode-name="rebelmouse-image" id="16b99" loading="lazy" src="https://spectrum.ieee.org/media-library/man-in-robotic-exoskeleton-walks-with-canes-during-rehab-demo-as-clinicians-observe.jpg?id=65427556&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo tries out an early exoskeleton from Ekso Bionics at Mount Sinai Hospital, where he first began testing the technology. 
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Mario Tama/Getty Images</small></p><p>Ragnarsson had seen decades of failed attempts to get paraplegics upright, including “inflatable garments made of the same material the astronauts used when they went to the moon,” he says. All those devices had proved too tiring for the user; in contrast, the battery-powered exoskeletons promised to do most of the work. And he knew the CEO of <a href="https://eksobionics.com/" target="_blank">Ekso Bionics</a>, a Berkeley, Calif.–based company that had built exoskeletons for the military. In 2011, Ekso <a href="https://spectrum.ieee.org/goodbye-wheelchair-hello-exoskeleton" target="_blank">brought its new clinical prototype to Mount Sinai</a>.</p><p>The day came for Woo’s first walk. “I was excited, and I was also scared, because I hadn’t stood up for almost five years,” he remembers. “Standing up for the first time was like floating, because I couldn’t feel my feet.” In that first Ekso model, Woo didn’t control when he stepped forward; instead, he shifted his weight in preparation, and then a physical therapist used a remote control to trigger the step. Woo walked slowly across the room, using a walker to stabilize his upper body, his steps a symphony of clunks and creaks and whirs. He found it mentally and physically exhausting, but the effort felt like progress.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="996f7d01a8c62b70fe92b38fa003fe59" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/l-QJx8QWCyc?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span><small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo stands using an exoskeleton and embraces his wife, Vivian. 
Woo says that exoskeleton use has both physical and psychological benefits. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Mt. Sinai</small></p><p>Riccobono was there for those first steps, with tears running down her face. “I remembered how he looked the day I first met him, so defeated,” she says. “To see him rise from the chair, to see him rise to a standing position, to see how tall he was, to see him take those first steps—it was beautiful.” Ragnarsson saw clear benefits to the technology. “Any type of walking is good physiologically,” he says. “And it’s a tremendous boost psychologically to stand up and look someone in the eye.” Woo remembers hugging his partner, Springer, and for the first time not worrying about running over her toes with his wheelchair. I first met Woo a few days later, during his third session with the Ekso at Mount Sinai.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Two people stand outside; one uses blue exoskeleton crutches for mobility." class="rm-shortcode" data-rm-shortcode-id="69a52fa10854ff73f463efd70c6fbaac" data-rm-shortcode-name="rebelmouse-image" id="b81ad" loading="lazy" src="https://spectrum.ieee.org/media-library/two-people-stand-outside-one-uses-blue-exoskeleton-crutches-for-mobility.jpg?id=65427570&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Ann Spungen (left), a researcher at a Veterans Affairs hospital, led early clinical trials of exoskeletons. Her research focused on the medical benefits of exoskeleton use. 
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Woo </small></p><p>Later that same year, at a Department of Veterans Affairs (VA) hospital in the Bronx, Woo got to try a prototype of the world’s other leading exoskeleton: the <a href="https://golifeward.com/products/rewalkpersonal-exoskeleton/" target="_blank">ReWalk</a>, from the Israeli company of the same name (since renamed <a href="https://golifeward.com/" target="_blank">Lifeward</a>). VA researchers, led by <a href="https://www.linkedin.com/in/ann-spungen-3971b246/" target="_blank">Ann Spungen</a>, were keen to determine if exoskeleton use had real medical value for veterans with spinal cord injuries. Woo was part of <a href="https://clinicaltrials.gov/study/NCT01454570?lat=40.8673611&lng=-73.9065313&locStr=James%20J.%20Peters%20Department%20of%20Veterans%20Affairs%20Medical%20Center,%20West%20Kingsbridge%20Road,%20The%20Bronx,%20NY&distance=50&term=ReWalk&viewType=Card&rank=1" target="_blank">that clinical trial</a>, for which he had more than 70 walking sessions, and he’s since been in many others. But he remembers the first VA trial with the most gratitude. “Dr. Spungen’s first exoskeleton clinical trial really turned things around for me,” he says.</p><p>Over the course of the trial’s nine intense months, Woo says he saw noticeable improvements to many facets of his health. “By the end of the trial, I eliminated about three-quarters of my medication intake,” he says, including narcotic pain pills and medication for muscle spasms. He grew fitter, with <a href="https://www.sciencedirect.com/science/article/abs/pii/S1094695018300970" target="_blank">less body fat</a>, more muscle mass, and lower cholesterol. His circulation improved, he says, causing scrapes and cuts to heal more quickly, and his <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC7957745/" target="_blank">digestion improved too</a>. 
The results Woo experienced have generally been borne out in research studies at the VA and elsewhere—exoskeletons aren’t just good for the mind, they’re good for the body.</p><h2>Improving Exoskeletons From the Inside </h2><p>During the VA trial, Woo began to think of exoskeletons not as miraculous machines, but as works in progress.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Man wearing robotic exoskeleton and using crutches on a city sidewalk" class="rm-shortcode" data-rm-shortcode-id="c6e269240874c399dd042e63b52fc7f6" data-rm-shortcode-name="rebelmouse-image" id="8c60a" loading="lazy" src="https://spectrum.ieee.org/media-library/man-wearing-robotic-exoskeleton-and-using-crutches-on-a-city-sidewalk.jpg?id=65427579&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Pierre Asselin (right), a biomedical engineer, worked with Robert Woo during clinical trials of exoskeletons. He says Woo was always pushing the limits of the technology. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Woo </small></p><p><a href="https://www.linkedin.com/in/pierre-asselin-195a0b4/" target="_blank">Pierre Asselin</a>, the biomedical engineer coordinating the VA’s study, watched participants respond very differently to the equipment. “These devices are not the equivalent of walking—you’re tired after walking a mile,” he says. He notes that later models of both the Ekso and ReWalk enabled users to initiate each step through software that recognized when they shifted their weight. 
Asselin adds that the cognitive load is “like learning to drive a manual transmission car, where at first you’re really struggling to coordinate the clutch and the brake.” Woo picked it up immediately, he remembers.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Man in a leg exoskeleton reaches into a kitchen cabinet while another observes." class="rm-shortcode" data-rm-shortcode-id="c537ce4f78539951c11063a9cb902729" data-rm-shortcode-name="rebelmouse-image" id="236cd" loading="lazy" src="https://spectrum.ieee.org/media-library/man-in-a-leg-exoskeleton-reaches-into-a-kitchen-cabinet-while-another-observes.jpg?id=65427582&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo uses an exoskeleton to reach items in a kitchen cabinet during a test of the device’s utility for everyday tasks.  </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Eliza Strickland </small></p>Woo became an invaluable partner, Asselin says. “When we first started with the devices, there was no training manual. We developed all of that through collaboration with Robert and other participants.” Woo pushed the limits of the technology, Asselin says, whether it was seeing how many steps he could take on one battery charge or simulating a failure mode. “He’d say, ‘What happens if I was to fall? What would be the approach to getting up?’”<p><span>Woo approached the ReWalk the way he had approached buildings in his previous life: He looked inside the structure and found the weak points. An early model left some users with leg abrasions where the straps rubbed—a small injury for most people, but a serious risk for someone who can’t feel a wound forming. Woo suggested better padding and stronger abdominal supports to redistribute the load. 
He also hated the heavy backpack that carried the battery and computer, so one afternoon he grabbed an old pack, cut off the straps, and rebuilt it into a compact hip-mounted pouch. Then he snapped photos and sent them to the company. The next model arrived with a fanny pack.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Hand-drawn concept sketch of a modular device labeled “ReWack 6.0” with notes and arrows" class="rm-shortcode" data-rm-shortcode-id="d0e09446b489c6a5f720b68d263450a3" data-rm-shortcode-name="rebelmouse-image" id="76e48" loading="lazy" src="https://spectrum.ieee.org/media-library/hand-drawn-concept-sketch-of-a-modular-device-labeled-u201crewack-6-0-u201d-with-notes-and-arrows.jpg?id=65427594&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo sent detailed design sketches as part of his feedback to exoskeleton engineers.  </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Woo </small></p><p>Sometimes his fixes were more ambitious. One Ekso unit that he used at Mount Sinai kept shutting down after 30 minutes. Woo felt the hip motors and found them hot to the touch. “I said, ‘Can I remove these? I’m going to make a really quick fix, okay? Give me a drill and I’ll put a couple of holes in it,’” he recalls telling the therapists, proposing to create a DIY heat sink. He wasn’t allowed to modify the prototype, but a year later the company introduced improved cooling around the hip motors. “There is a Robert Woo design on this device,” one therapist told him.</p><p><a href="https://www.linkedin.com/in/eythorbender/" target="_blank">Eythor Bender</a>, who was then the CEO of Ekso, called Woo to thank him for his feedback and invite him to spend a week at Ekso’s headquarters. 
“There was no lack of engineering power in that building,” says Bender. “But sometimes when you work with engineers, they overlook important things.” Bender says Woo brought both design skills and lived experience to his weeklong residency. “He told the engineers, ‘Guys, this has to be something that people actually like to wear.’”</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Patient in exoskeleton uses walker, flanked by doctor in lab coat and man in suit" class="rm-shortcode" data-rm-shortcode-id="0769a2526e44a9360c2a966a9839c4ee" data-rm-shortcode-name="rebelmouse-image" id="2e1fa" loading="lazy" src="https://spectrum.ieee.org/media-library/patient-in-exoskeleton-uses-walker-flanked-by-doctor-in-lab-coat-and-man-in-suit.jpg?id=65427643&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Ekso Bionics CEO Eythor Bender and Mount Sinai physician Kristjan Ragnarsson were both on hand for Woo’s early trials of the Ekso device. Ragnarsson says he saw physical and psychological benefits of exoskeleton use. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Woo </small></p><p>The longer Woo tested, the further ahead he started thinking. With motors only at the hips and knees, every exoskeleton still required crutches. Add powered ankles, he told the Ekso and ReWalk teams, and the suits could balance themselves, freeing the user’s hands. But Woo was ahead of his time. “They said they weren’t going to do that. They weren’t going to change their whole platform,” he remembers. Years later, though, hands-free exoskeletons like those from Wandercraft would emerge built around exactly that principle.</p><h2>When the Exoskeleton Came Home </h2><p>By the mid-2010s, Woo had pushed the technology as far as he could in clinics. 
What he wanted now was to use an exoskeleton at home.</p><p>That milestone came after <a href="https://spectrum.ieee.org/rewalk-robotics-new-exoskeleton-lets-paraplegic-stroll-the-streets-of-nyc" target="_blank">ReWalk’s exoskeleton</a> became the first to win <a href="https://ir.rewalk.com/news-releases/news-release-details/rewalktm-personal-exoskeleton-system-cleared-fda-home-use" target="_blank">FDA approval for home use</a> in 2014. ReWalk engineers still remember Woo’s help on the final tests for that personal-use model. It was the end of May in 2015, recalls <a href="https://www.linkedin.com/in/david-hexner-8699413/" target="_blank">David Hexner</a>, the company’s vice president of research and development. “He said, ‘Guys, this is great. I’m going to buy it.’”</p><p>Woo was the first customer to buy an exoskeleton to bring home, paying US $80,000 out of pocket. His insurance wouldn’t cover the cost, but he was able to make the purchase in part because of a legal settlement after his accident. The home-use model came with a requirement that the user have at least one companion who was fully trained in operating the device. In Woo’s case, that meant that Springer learned to suit him up, realign his balance, and help him if he fell.</p><p>On delivery day, two SUVs drove up to a hotel down the street from Woo’s condo in the Toronto area. The technicians hauled two huge boxes into a hotel room and assembled his personal exoskeleton. They took Woo’s measurements, made adjustments, checked the software. This latest version could be controlled by either weight shifting or tapping commands on a smartwatch, and Woo had the app ready. He tested out everything in the hotel room, signed off, and then the technicians drove his robot legs to his home.</p><p>That was the start of his golden period with the ReWalk—similar to the excitement many people experience with a new piece of exercise equipment. 
“I used it every day for a few hours, and then I started logging how many steps I’d done,” Woo says. “My last count was probably just slightly over a million steps,” he adds, with half of those steps taken in his home unit and half in training programs and clinical trials.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Person using a ReWalk exoskeleton with crutches beside stacked ReWalk shipping boxes" class="rm-shortcode" data-rm-shortcode-id="3341315ea904071979a50c6d8ab999dd" data-rm-shortcode-name="rebelmouse-image" id="ddd70" loading="lazy" src="https://spectrum.ieee.org/media-library/person-using-a-rewalk-exoskeleton-with-crutches-beside-stacked-rewalk-shipping-boxes.jpg?id=65434618&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The ReWalk was the first exoskeleton available for use outside the clinic. Robert Woo’s ReWalk arrived in two large boxes. ReWalk engineers assembled it in a hotel room, and Woo tried it out in the hallway before taking it home.  </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Woo</small></p><p>Tristan, Woo’s eldest son, remembers doing laps with his dad in the condo’s underground parking garage while Woo was training for a 5-kilometer race in New York City. Tristan admits that he had previously been embarrassed about his dad, but training for the race shifted something for him. “I was so used to not wanting to tell people that my dad was in a wheelchair, but then I shared his passion for the training,” he says. “When people would come up to us, I’d tell them about it.”</p><p>The ReWalk could turn ordinary moments into small engineering projects. On weekends, Woo would take his boys to the golf course behind their condo and bring a baseball. He had rigged two holsters to the sides of the suit so he could stash a crutch and stand on three points (two legs and one arm) while he pitched or caught. 
Throw, switch crutches, catch. On the day of his accident, he never thought such a scene would be possible. But with the exoskeleton, it became just another design problem to solve. “It’s a little more work. It’s not perfect,” he says. “But in the end, you still get to do what you want to do—which is play ball with your sons.”</p><p>Tristan, now a college student, says he didn’t realize at the time how hard his dad worked to make those mundane activities possible. “Reflecting on it now,” he says, “he has shaped almost every element of my life, and he definitely is my hero.”</p><p>But even during that golden stretch, the ReWalk had a way of asserting its limits. Every so often it would freeze mid-stride and require a reboot—a small technical hiccup in theory, but a serious problem when there’s a person strapped inside. Once, when he was walking on his own in the parking garage (without his mandated companion), the suit glitched and went into “graceful collapse” mode, lowering him to a seated position on the ground. Woo had to ask security to bring his wheelchair and a dolly.</p><p>He had imagined the exoskeleton would be most useful in the kitchen. Woo loves to cook, and he had pictured himself standing at the stove, looking down into pots, and moving easily between counter and sink. The reality, he found out, was more complicated. “It’s actually very time-consuming and troublesome” to cook in an exoskeleton, he says.</p><p>Preparing a meal meant first rolling through the kitchen in his wheelchair to gather every ingredient and utensil, then transferring himself into the ReWalk and moving himself into position at the counter, stopping at just the right moment. “That’s when I fell once,” Woo says. “I collided with the counter and then lost my balance and fell backward.” If all went well, he’d lean either on one crutch or the counter to keep his balance while he worked. 
But if he’d forgotten to grab the vinegar from the cabinet, he’d have to go into walk mode, crutch over to it, and figure out how to carry the bottle back to his workstation.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Powered exoskeleton suit and crutches positioned in a modern clinical room" class="rm-shortcode" data-rm-shortcode-id="a984e71926de8dd39f35b478e1bbe279" data-rm-shortcode-name="rebelmouse-image" id="6a40f" loading="lazy" src="https://spectrum.ieee.org/media-library/powered-exoskeleton-suit-and-crutches-positioned-in-a-modern-clinical-room.jpg?id=65434518&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Sitting unused in Robert Woo’s home, his ReWalk exoskeleton reflects both the promise and the limits of early devices.  </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Woo</small></p><p>Gradually, he stopped trying. The suit, which he’d once worn every day, spent more time sitting idle in the hallway; like so many abandoned treadmills and stationary bikes, it gathered dust. Part of the reason was the exoskeleton’s practical limitations, but part of it was a shocking development: In 2024, Vivian was diagnosed with an aggressive form of breast cancer. She died in November of that year, at the age of 54.</p><p>Woo was scheduled to begin a new round of clinical trials for the Wandercraft home-use exoskeleton that month. In the aftermath of Vivian’s death, he postponed his sessions and questioned whether he would ever go back. “At the time, I thought, ‘What’s the point?’” he remembers.</p><p>He did go back, though. “He just rolled up, right into my office,” says Mount Sinai’s Riccobono. “He still had Vivian’s box of ashes on his lap. 
That’s how fresh it was.” Woo brought the box into a meeting of spinal cord injury patients and shared the story of losing the love of his life. And he told them that he heard his wife’s voice in his head every day, telling him to get back to work. Once again, he was figuring out how to move forward with what he had.</p><h2>How Close Are We to Everyday Exoskeletons? </h2><p>In the Wandercraft showroom last May, Woo steered toward the door to the street, technicians flanking him like spotters. The slope down to the sidewalk was barely an inch high, but everyone tensed. He shifted his weight and took a step forward. The suit halted automatically. He tried again—step, stop; step, stop—as the suit kept detecting the slight decline and a safety feature kicked in. The Wandercraft isn’t yet rated for slopes of more than 2 percent, and even the gentle pitch of Park Avenue was enough to trigger its safeguards. When he finally reached the sidewalk, Woo broke into a grin. A man in the back seat of a stopped Uber leaned out his window, filming.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Knee brace with straps and a leg showing a fresh, red incision scar." class="rm-shortcode" data-rm-shortcode-id="c7d7199f6643de021a7f81d6c256876e" data-rm-shortcode-name="rebelmouse-image" id="2235b" loading="lazy" src="https://spectrum.ieee.org/media-library/knee-brace-with-straps-and-a-leg-showing-a-fresh-red-incision-scar.jpg?id=65427649&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">During testing of the Wandercraft exoskeleton, straps caused an abrasion on Robert Woo’s leg, which he documented as part of his feedback to the company.   
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Woo </small></p><p>Woo had recently completed seven sessions with the Wandercraft at the VA hospital and had been impressed overall. But at the showroom, he rolled up his pants leg to reveal an abrasion on his shin, the result of a strap that had worn away a patch of skin during a long walking session. He would later send Wandercraft a nine-page assessment with photos and a technology wish list, asking the company to work on things like padding, variable walking speeds, and deeper squats.</p><p>Wandercraft’s engineers relish that kind of user feedback, says CEO <a href="https://www.linkedin.com/in/matthieu-masselin-64585537/" target="_blank">Matthieu Masselin</a>. Exoskeletons are a far more difficult engineering problem than humanoid robots, he explains. “You basically have two systems of equal importance. You know about the robot—it’s fully quantified and measured. But you don’t know what the person is doing, and how the person is moving within the device.”</p><p>Since Woo began testing exoskeletons 15 years ago, both the technology and the market have made strides. ReWalk and Ekso won FDA clearance for clinical use in the 2010s, and both now sell home-use versions. The companies have sold thousands of exoskeletons to rehab clinics and personal users, and they see room for growth; in the United States alone, about <a href="https://msktc.org/sites/default/files/Facts-and-Figures-2025-Eng-508.pdf" target="_blank">300,000 people live with spinal cord injuries</a>, and millions more have mobility impairments from stroke, multiple sclerosis, or other conditions. The VA began supplying devices to eligible veterans in 2015, and Medicare recently <a href="https://golifeward.com/blog/medicare-reimbursement-established-for-medically-eligible-beneficiaries/" target="_blank">established a system for reimbursement</a>, a move that private insurers are beginning to follow. 
What was once experimental is slowly becoming established.</p><p>Researchers who test the devices say the technology still has significant limits. Pal, of the New Jersey Institute of Technology, mentions battery life, dexterity, and reliability as ongoing challenges. But, he says with a laugh, “Our bodies have evolved over many millions of years—these machines will need a bit more time.” Pal hopes the companies will keep pushing the technological frontier. “My lifetime goal is to see the day when someone like Robert Woo can wake up in the morning, put this device on, and then live an ordinary life.”</p><p>For Woo, the real question about the self-balancing Wandercraft was: Could he cook with it? In the VA hospital’s home mockup, he tried it out in the kitchen, stepping sideways to retrieve items from cabinets and squatting to grab something from the fridge’s lower shelf. For the first time in years, he could work at a counter without leaning on crutches. “The self-standing exoskeleton changes everything,” he says. He imagines a user placing a Thanksgiving turkey on a tray attached to the suit and walking it into the dining room.</p><p>Back in the showroom, Woo finishes the demo and brings the suit to a seated position before transferring back to his wheelchair. After so many years of testing prototypes, he’s now realistic about the technology’s timeline. A truly all-day exoskeleton—the kind you live in, the kind that replaces a wheelchair—may be a decade or more away. “It may not be for me,” he says. But that’s no longer the point. He’s thinking about young people who are newly injured, who are lying in hospital beds and trying to imagine how their lives can continue. 
“This will give them hope.” <span class="ieee-end-mark"></span></p>]]></description><pubDate>Wed, 01 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/exoskeleton-user-experience</guid><category>Bionics</category><category>Paralysis</category><category>Exoskeleton</category><category>Spinal-cord-injury</category><category>Assistive-technology</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/a-man-wearing-a-full-body-robotic-exoskeleton-standing-on-a-city-sidewalk.png?id=65426945&amp;width=980"></media:content></item><item><title>The ’80s Submersible That Transformed Underwater Exploration</title><link>https://spectrum.ieee.org/deep-sea-submersible</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/spherical-deep-sea-submersible-with-robotic-arms-exploring-underwater.jpg?id=65416328&width=1200&height=600&coordinates=0%2C125%2C0%2C126"/><br/><br/><p>As a kid, I loved the 1980s aquatic adventure show <a href="https://www.imdb.com/title/tt0086692/" rel="noopener noreferrer" target="_blank"><em><em>Danger Bay</em></em></a>. True to the TV show’s name, danger was always lurking at the Vancouver Aquarium, where the show was set. In one memorable episode, young Jonah and a friend get trapped in a sabotaged mini-submarine, and Jonah’s dad, a marine-mammal veterinarian, comes to the rescue in a bubble-shaped underwater vehicle. Good stuff! Only recently—as in when I started working on this column—did I learn that the rescue vehicle was not a stage prop but rather a real-world research submersible named <em><em>Deep Rover</em></em>.</p><h2>What Was <em><em>Deep Rover</em></em> and What Did It Do?</h2><p> Built in 1984 and launched the following year, <a href="https://ingenium.ca/publications/en/2025/09/deep-dive-with-deep-rover-a-canadian-made-acrylic-submersible/" rel="noopener noreferrer" target="_blank"><em><em>Deep Rover</em></em></a> was a departure from standard underwater vehicles, which typically required divers to lie in a prone position and look through tiny portholes while tethered to a support ship.</p><p><em><em>Deep Rover </em></em>was designed to satisfy human curiosity about the underwater world. As the rover moved freely through the water down to depths of 1,000 meters, the operator sat up in relative comfort in the cab, inside a clear 13-centimeter-thick acrylic bubble with panoramic views—an inverted fishbowl, with the human immersed in breathable air while the sea creatures looked in. 
Used for scientific research and deepwater exploration, it set a number of dive records along the way.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Photo of a man and a woman in a wood-paneled room with a scale model of an underwater vehicle in front of them." class="rm-shortcode" data-rm-shortcode-id="d011f033c593fe40f9630c519be31ea2" data-rm-shortcode-name="rebelmouse-image" id="573d9" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-a-man-and-a-woman-in-a-wood-paneled-room-with-a-scale-model-of-an-underwater-vehicle-in-front-of-them.jpg?id=65416404&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Submarine designer Graham Hawkes [left] and marine biologist Sylvia Earle [right] came up with the idea for <i>Deep Rover</i>.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Alain Le Garsmeur/Alamy </small></p><p> The team behind <em><em>Deep Rover</em></em> included U.S. marine biologist <a href="https://www.britannica.com/biography/Sylvia-Earle" target="_blank">Sylvia Earle</a> and British marine engineer and submarine designer <a href="https://www.linkedin.com/in/graham-hawkes-8bb75558" target="_blank">Graham Hawkes</a>. Earle and Hawkes’s collaboration had begun in May 1980, when Earle complained to Hawkes about the “stupid” arms on <a href="https://www.divingheritage.com/jim.htm" target="_blank">Jim, an atmospheric diving suit</a>; she didn’t realize she was complaining to one of Jim’s designers. Hawkes explained the difficulty of designing flexible joints that could withstand dueling pressures of 101 kilopascals on the inside—that is, the normal atmospheric pressure at sea level—and up to about 4,100 kPa on the outside. But he listened carefully to Earle’s wish list for a useful manipulator. 
Several months later, he came back with a design for a superbly dexterous arm that could hold a pencil and write normal-size letters.</p><p>Earle and Hawkes next turned to designing a one-person bubble sub, which they considered so practical that it would be an easy sell. But after failing to attract funding, they decided to build it themselves. In the summer of 1981, they pooled their resources and cofounded Deep Ocean Technology, setting up shop in Earle’s garage in Oakland, Calif.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" rel="float: left;" style="float: left;"> <img alt="Photo of a man sitting in an underwater vehicle with the words \u201cNewtsub DeepWorker 2000\u201d across the front and the logos of NASA and the National Geographic Society." class="rm-shortcode" data-rm-shortcode-id="e350d34e1c8a7e8e47bf32da01655b60" data-rm-shortcode-name="rebelmouse-image" id="3e612" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-a-man-sitting-in-an-underwater-vehicle-with-the-words-u201cnewtsub-deepworker-2000-u201d-across-the-front-and-the-logo.jpg?id=65416416&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Phil Nuytten, a Canadian designer of submersibles and dive systems, engineered <i>Deep Rover</i>.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Stuart Westmorland/RGB Ventures/Alamy</small></p><p>They still found that customers weren’t interested in their crewed submersible, though, so they turned to unmanned systems. Their first contract was for a remotely operated vehicle (ROV) for use in oil-rig inspection, maintenance, and repair. Other customers followed, and they ended up building 10 of these ROVs. 
In 1983, they returned to their original idea and contracted with the Canadian inventor and entrepreneur <a href="https://nuytco.com/history/phil-nuytten/" target="_blank">Phil Nuytten</a> to engineer <em><em>Deep Rover</em></em>.</p><p>Nuytten didn’t have to be convinced of the value of the submersible. He had grown up on the water and shared their dream. As a teenager, he opened Vancouver’s first dive shop. He then worked as a commercial diver. He founded the ocean- and research-tech companies Can-Dive Services (in 1965) and Nuytco Research (in 1982), and he developed advanced submersibles as well as diving systems. These included the <a href="https://nuytco.com/products/newtsuit/" target="_blank">Newtsuit</a>, an aluminum atmospheric diving suit for use on drilling rigs and salvage operations.</p><p class="ieee-inbody-related">RELATED: <a href="https://spectrum.ieee.org/virgin-oceanics-voyage-to-the-bottom-of-the-sea" target="_self">Virgin Oceanic’s Voyage to the Bottom of the Sea</a></p><p><em><em>Deep Rover</em></em>’s first assignment was to boost offshore oil exploration and drilling in eastern Canada. Funding came from the provincial government of Newfoundland and Labrador and the oil companies Petro-Canada and Husky Oil. But the collapse of oil prices in the mid-1980s made it uneconomical to operate the submersible. So the rover’s mission broadened to scientific research.</p><h2><em><em>Deep Rover</em></em>’s Technical Specs</h2><p>The pilot could operate <em><em>Deep Rover</em></em> safely for 4 to 6 hours at a depth of 1,000 meters and speeds of up to 1.5 knots (46 meters per minute). The submersible could be tethered to a support ship or move freely on its own. Two deep-cycle, lead-acid battery pods weighing about 170 kilograms apiece provided power. 
It had a VHF radio and two frequencies of through-water communications, plus tracking beacons.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Park ranger operates aircraft cockpit controls surrounded by cameras and instruments" class="rm-shortcode" data-rm-shortcode-id="a16dd08e15950afd6153e7a309cedfb0" data-rm-shortcode-name="rebelmouse-image" id="df596" loading="lazy" src="https://spectrum.ieee.org/media-library/park-ranger-operates-aircraft-cockpit-controls-surrounded-by-cameras-and-instruments.jpg?id=65416434&width=980"/> </p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Two photos, one showing a smiling man in the cab of a heavily instrumented vehicle, the other showing the underwater view out the front of the vehicle. " class="rm-shortcode" data-rm-shortcode-id="bb0cdb46d97cd3b5b1b634058d02b321" data-rm-shortcode-name="rebelmouse-image" id="2555e" loading="lazy" src="https://spectrum.ieee.org/media-library/two-photos-one-showing-a-smiling-man-in-the-cab-of-a-heavily-instrumented-vehicle-the-other-showing-the-underwater-view-out-th.jpg?id=65416428&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">From 1987 to 1989, Deep Rover did a series of dives in Oregon’s Crater Lake, the deepest lake in the United States. During one dive, National Park Service biologist Mark Buktenica [top] collected rock samples.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">NPS</small></p><p>The rover’s four thrusters—two horizontal fixed aft thrusters and two rotating wing thrusters—could be activated in any combination through microswitches built into the armrest. 
The pilot navigated using a gyro compass, sonar, and depth gauges (both digital and analog).</p><p>Much to Earle’s delight, <em><em>Deep Rover</em></em> had two excellent manipulators, each with four degrees of freedom, thus solving the problem that had started her down this path of invention. The pilot controlled the manipulators with a joystick at the end of each armrest. Sensory feedback systems helped the pilot “feel” force, motion, and touch. The two arms had wraparound jaws and could lift about 90 kg.</p><p>If something went wrong, <em><em>Deep Rover</em></em> carried five days’ worth of life support stores and had a variety of redundant safety features: oxygen and carbon dioxide monitoring equipment; a halon (breathable) fire extinguisher; a full-face BIBS (built-in breathing system) that tapped into the starboard air bank; and a ground fault-detection system.</p><p>If needed, the rover could surface quickly by jettisoning equipment, including the battery pods and a 90-kg drop weight in the forward bay. In dire circumstances, the pressure hull (the acrylic bubble, that is) could separate from the frame, taking with it only its oxygen tanks, strobe, through-water communications, and wing thrusters.</p><h2><em><em>Deep Rover</em></em>’s Achievements</h2><p>From 1984 to 1992, <em><em>Deep Rover</em></em> conducted about 280 dives. It inspected two of the tunnels near Niagara Falls that divert water to the Sir Adam Beck II hydroelectric plant. In California’s Monterey Bay, the rover let researchers film previously unknown deep-sea marine life, which helped establish the Monterey Bay Aquarium Research Institute. 
At Crater Lake National Park, in Oregon, <em><em>Deep Rover</em></em> proved the existence of geothermal vents and bacteria mats, leading to the protection of the site from extractive drilling.</p><p><em><em>Deep Rover</em></em> was featured in a <a href="https://www.barbeefilm.com/discovery-ii.html" rel="noopener noreferrer" target="_blank">short film</a> shown at Vancouver’s Expo ’86, the first of several TV and movie appearances. It appeared in the TV series <em><em>Danger Bay</em></em>. Director James Cameron used an early prototype of the submersible in his 1989 film <a href="https://www.imdb.com/title/tt0096754/" rel="noopener noreferrer" target="_blank"><em><em>The Abyss</em></em></a>. <em><em>Deep Rover </em></em>also made an appearance in Cameron’s 2005 documentary <a href="https://www.imdb.com/title/tt0417415/" rel="noopener noreferrer" target="_blank"><em><em>Aliens of the Deep</em></em></a>.</p>
And yet, humans still long to have the personal experience of exploring the depths of the oceans.</p><p><em><em>Part of a </em></em><a href="https://spectrum.ieee.org/collections/past-forward/" target="_self"><em><em>continuing series</em></em></a><em> </em><em><em>looking at historical artifacts that embrace the boundless potential of technology.</em></em></p><p><em><em>An abridged version of this article appears in the April 2026 print issue as “</em></em><em><em>All Alone in the Abyss</em></em><em><em>.”</em></em></p><h3>References</h3><br/><p>My friends at <a href="https://ingenium.ca/en/" target="_blank">Ingenium</a>, Canada’s Museums of Science and Innovation, helpfully provided me with background material on why they decided to acquire <em>Deep Rover</em>. They also published a great <a href="https://ingenium.ca/publications/en/2025/09/deep-dive-with-deep-rover-a-canadian-made-acrylic-submersible/" target="_blank">blog post</a> about the rover.</p><p><a href="https://www.linkedin.com/in/dirk-rosen-b551204/" target="_blank">Dirk Rosen</a>, executive vice president of engineering at DEEP, published specifications for <em>Deep Rover</em> in his 1986 IEEE paper “<a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1160330" rel="noopener noreferrer" target="_blank">Design and Application of the Deep Rover Submersible</a>.”</p><p>Sylvia Earle, known affectionately as “Her Deepness,” has written extensively about the ocean depths. I found her book<em> Sea Change: A Message of the Oceans</em> (G.P. 
Putnam’s Sons, 1995) to be especially enjoyable.</p>]]></description><pubDate>Tue, 31 Mar 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/deep-sea-submersible</guid><category>Ocean-engineering</category><category>Submersibles</category><category>Underwater-vehicles</category><category>Canada</category><category>Past-forward</category><dc:creator>Allison Marsh</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/spherical-deep-sea-submersible-with-robotic-arms-exploring-underwater.jpg?id=65416328&amp;width=980"></media:content></item><item><title>Invences Empowers Small Businesses With Smart Telecom Networks</title><link>https://spectrum.ieee.org/invences-startup-telecom-networks</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/three-men-seated-on-stage-underneath-a-large-presentation-screen-one-of-the-men-is-holding-a-microphone-while-speaking-to-the-a.jpg?id=65416492&width=1200&height=600&coordinates=0%2C250%2C0%2C250"/><br/><br/><p>To stay competitive, many small businesses need advanced wireless communication networks, not only to communicate but also to leverage technologies such as artificial intelligence, the Internet of Things, and robotics. Often, however, the businesses lack the technical expertise needed to install, configure, and maintain the systems.</p><p><a href="https://www.linkedin.com/in/bhaskara-rallabandi-40b20b36/" rel="noopener noreferrer" target="_blank">Bhaskara Rallabandi</a>, who spent more than two decades working for major telecom companies, decided to use his expertise to help small businesses. Rallabandi, an IEEE senior member, is an expert certified by the <a href="https://www.incose.org/" rel="noopener noreferrer" target="_blank">International Council on Systems Engineering</a>.</p><h3>Invences</h3><br/><p><strong>Cofounder</strong></p><p>Bhaskara Rallabandi</p>
<p><strong>Founded</strong></p><p>2023</p><p><strong>Headquarters</strong></p><p>Frisco, Texas</p><p><strong>Employees</strong></p><p>100</p><p>In 2023 he helped found <a href="https://invences.com/" rel="noopener noreferrer" target="_blank">Invences</a>, a telecommunications automation company headquartered in Frisco, Texas.</p><p>Invences’s services include designing, building, and installing <a href="https://spectrum.ieee.org/ai-data-centers-hts-superconductors" target="_self">data centers</a>, as well as cost-effective and secure wireless, private, <a href="https://spectrum.ieee.org/internet-of-things-5g-mit" target="_self">IoT</a>, and virtual communications networks.</p><p>The company has set up systems for farms, factories, and universities in rural and urban areas including <a href="https://spectrum.ieee.org/broadband-internet-in-nigeria" target="_self">underserved communities</a>. Its mission, Rallabandi says, is to “build autonomous, ethical, and sustainable networks that connect communities intelligently.”</p><p>For his work, he received the <a href="https://ieeeusa.org/2025-ieee-usa-awards-honor-engineering-leaders/" rel="noopener noreferrer" target="_blank">IEEE-USA Entrepreneur Achievement Award for Leadership in Entrepreneurial Spirit</a> last year for “entrepreneurial leadership in founding and scaling a U.S.-based technology company, advancing innovation in 5G/6G and Open RAN [radio access network], shaping global standards, and inspiring future leaders through mentorship and community impact.”</p><h2>Building a telecommunications career</h2><p>He began his telecommunications career in 2009 as a manager and principal network engineer at <a href="https://www.verizon.com/" rel="noopener noreferrer" target="_blank">Verizon</a>’s <a href="https://www.verizon.com/about/our-company/innovation-labs" rel="noopener noreferrer" target="_blank">Innovation Labs</a> in Waltham, Mass. 
He and his team ran some of the earliest long-term evolution and evolved packet core performance trials. (LTE is the 4G wireless broadband standard for mobile devices. EPC is the IP-based, high-performance core network architecture for 4G LTE networks.)</p><p>That work at Innovation Labs, he says, was key to the development of the first 4G systems. It set the stage for scalable, interoperable broadband architectures that underpin today’s 5G and 6G designs.</p><p>“We built the first bridge between legacy and cloud-native networks,” he says.</p><p>He left in 2011 to join <a href="https://about.att.com/sites/labs" rel="noopener noreferrer" target="_blank">AT&T Labs</a> in Redmond, Wash. As senior manager and principal solutions architect, he oversaw the design, integration, and testing of the company’s next-generation wireless systems. He also led projects that redefined automation of networks and set up cloud computing systems including <a href="https://www.firstnet.com/" rel="noopener noreferrer" target="_blank">FirstNet</a>, the nationwide broadband network for first responders, and <a href="https://www.rcrwireless.com/20151123/carriers/att-volte-video-calling-rcs-messaging-launched-with-limited-support-tag2" rel="noopener noreferrer" target="_blank">VoLTE</a>, or voice over LTE, which carries voice and video calls over the LTE data network.</p><p>In 2018 Rallabandi was hired as a principal and a senior manager of engineering at <a href="https://www.samsung.com/us/business//networking/" target="_blank">Samsung Networks Division’s Technology Solutions Division</a> in Plano, Texas.<span> He led the development of 5G virtualization and Open RAN initiatives, which enable more flexible, scalable, and efficient large network deployments and interoperability among vendors.</span></p><h2>Designing networks for small businesses</h2><p>Feeling that he wasn’t reaching his full potential in the corporate world, and wanting to help small businesses, he opted to start his own venture in 2023 with his wife, 
<a href="https://www.linkedin.com/in/lakshmi-rallabandi-04a17977/" target="_blank">Lakshmi Rallabandi</a>, a computer science engineer. She is Invences’s CEO, and he is its founding principal and chief technology advisor.</p><p>Invences, which is self-funded and employs about 100 people, has more than 50 customers from around the world.</p><p>“I wanted to do something more interesting where I could use the knowledge I gained working for these big companies to fill the gaps they overlooked in terms of automation” for small businesses, he says. “I have a team of people who, combined, have 200 years of technology experience.”</p><p>The startup builds networks that simplify its clients’ operations and reduce their costs, he says.</p><p>Instead of duplicating how major telecom carriers build networks for dense urban areas, he says, his designs reimagine the network architecture to lower its complexity, costs, and operational overhead.</p><p class="pull-quote">“Connectivity should not be a luxury. Rural communities deserve an infrastructure that fits their needs.”</p><p>The systems integrate new technologies such as Open RAN, virtualized RAN, digital twins, telemetry, and advanced analytics. Some networks also incorporate agentic AI, an autonomous system that runs independently of humans and uses AI agents that plan and act across the network. Digital twins evaluate the agent’s decisions before releasing them.</p><p>“Autonomy is not about removing humans from the loop,” Rallabandi says. 
“It is about giving systems the ability to manage complexity so humans can focus on intent and outcomes.”</p><p>Rallabandi also has worked on AI-driven telecom observability technologies designed to allow networks to detect anomalies and optimize performance automatically.</p><p>He has developed a virtual O-RAN innovation lab, where clients can test the interoperability of their 5G systems, try out their enhancements, run trials of future functions, and experiment with updates.</p><p>Invences partnered with <a href="https://trilogynet.com/" target="_blank">Trilogy Networks</a> to build the <a href="https://trilogynet.com/farmgrid" rel="noopener noreferrer" target="_blank">FarmGrid platform</a> for farms in Fargo, N.D., and Yuma, Ariz. FarmGrid used private 5G networks, edge-computing AI, and digital twins to make the operations more efficient.</p><p>“The project connects farms with sensors, analytics platforms, and autonomous equipment to enable precision agriculture, water optimization, and real-time decision-making,” Rallabandi says.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="0cfc80cc609775b5ff498c9749ec208b" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/TrNkW-Gnw9Y?rel=0&start=47" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span><small class="image-media media-caption" placeholder="Add Photo Caption...">IEEE Senior Member Bhaskara Rallabandi talks about partnering with Trilogy Networks to build the FarmGrid platform for farms in Fargo, N.D., and Yuma, Ariz.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">TECKNEXUS</small></p><h2>Paying it forward through IEEE programs</h2><p>Rallabandi says he believes staying involved with IEEE is important to his career development and a way to give back to the 
profession. He is a frequent invited <a href="https://events.vtools.ieee.org/m/495517" rel="noopener noreferrer" target="_blank">speaker</a> at IEEE conferences.</p><p>He is active with <a href="https://futurenetworks.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Future Networks</a> and its <a href="https://ctu.ieee.org/" rel="noopener noreferrer" target="_blank">Connecting the Unconnected</a> (CTU) initiative. Members of the Future Networks technical community work to develop, standardize, and deploy 5G and 6G networks as well as successive generations.</p><p>CTU aims to bridge the digital divide by bringing Internet service to underserved communities. During its <a href="https://ctu.ieee.org/challenge/2025-ctu-challenge/" rel="noopener noreferrer" target="_blank">annual challenge</a>, Rallabandi works with the winning students, researchers, and innovators to help them turn their concepts into practical, affordable solutions.</p><p>“CTU represents the best of IEEE,” he says. “It is about taking innovation out of conferences and into communities that need it the most.</p><p>“Connectivity should not be a luxury. Rural communities deserve an infrastructure that fits their needs.”</p><p>He participates in the recently launched <a href="https://fnem.futurenetworks.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Future Networks Empowerment Through Mentorship initiative</a>, which helps innovators, entrepreneurs, and startups expand their companies by educating them about finance, marketing, and related concepts.</p><p>“IEEE gives me both a voice and a responsibility,” Rallabandi says. 
“We’re not just developing technology; we are shaping how humanity connects.”</p>]]></description><pubDate>Mon, 30 Mar 2026 18:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/invences-startup-telecom-networks</guid><category>Ieee-member-news</category><category>Startups</category><category>Invences</category><category>Telecommunications</category><category>Ieee-future-network</category><category>Careers</category><category>Type-ti</category><dc:creator>Kathy Pretz</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/three-men-seated-on-stage-underneath-a-large-presentation-screen-one-of-the-men-is-holding-a-microphone-while-speaking-to-the-a.jpg?id=65416492&amp;width=980"></media:content></item><item><title>Facial Recognition Is Spreading Everywhere</title><link>https://spectrum.ieee.org/facial-recognition-gone-wrong</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/illustration-34-orange-women-icons-1-blue-man-icon-labels-for-skin-tone-and-gender-comparisons.jpg?id=65407585&width=1200&height=600&coordinates=0%2C179%2C0%2C179"/><br/><br/><p>Facial recognition technology (FRT) dates back 60 years. Just over a decade ago, deep-learning methods tipped the technology into more useful—<a href="https://spectrum.ieee.org/china-facial-recognition" target="_blank">and menacing</a>—territory. Now, retailers, your neighbors, and law enforcement are all storing your face and building up a fragmentary photo album of your life.</p><p>Yet the story those photos can tell inevitably has errors. FRT makers, like those of any diagnostic technology, must balance two types of errors: false positives and false negatives. There are three possible outcomes.</p><div class="ieee-sidebar-medium"><h3>Three Possible Outcomes</h3><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="White figures and an orange hooded figure, focusing on the hooded figure in a split design." class="rm-shortcode" data-rm-shortcode-id="8a762ebf2761a791f12500ed10596cc3" data-rm-shortcode-name="rebelmouse-image" id="f4d64" loading="lazy" src="https://spectrum.ieee.org/media-library/white-figures-and-an-orange-hooded-figure-focusing-on-the-hooded-figure-in-a-split-design.png?id=65407894&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">a) identifies the suspect, since the two images are of the same person, according to the software. Success!</small></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Abstract figures: orange hoodie enlarged, white, yellow, and orange on left, black background." 
class="rm-shortcode" data-rm-shortcode-id="3d130b8e4c73ee49898645524cecd1f6" data-rm-shortcode-name="rebelmouse-image" id="30881" loading="lazy" src="https://spectrum.ieee.org/media-library/abstract-figures-orange-hoodie-enlarged-white-yellow-and-orange-on-left-black-background.png?id=65407867&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">b) matches another person in the footage with the suspect’s probe image. A false positive, coupled with sloppy verification, could put the wrong person behind bars and let the real criminal escape justice.</small></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Three white icons and one orange hoodie icon on left, large orange hoodie icon on right." class="rm-shortcode" data-rm-shortcode-id="4cdaa23680c5144a5c284fcd8cb6f3df" data-rm-shortcode-name="rebelmouse-image" id="fbc8f" loading="lazy" src="https://spectrum.ieee.org/media-library/three-white-icons-and-one-orange-hoodie-icon-on-left-large-orange-hoodie-icon-on-right.png?id=65407858&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">c) fails to find a match at all. The suspect may be evading cameras, but if the cameras have only low-light or bad-angle images, this creates a false negative. This type of error might let a suspect off and raise the cost of the manhunt.</small></p></div><p>In best-case scenarios—such as comparing someone’s passport photo to a photo taken by a border agent—false-negative rates are <a href="https://face.nist.gov/frte/reportcards/11/clearviewai_003.html" target="_blank">around two in 1,000 and false positives are less than one in 1 million</a>.</p><p>In the rare event you’re one of those false negatives, a border agent might ask you to show your passport and take a second look at your face. But as people ask more of the technology, more ambitious applications could lead to more catastrophic errors. 
Let’s say that police are searching for a suspect, and they’re comparing an image taken with a security camera with a previous “mug shot” of the suspect.</p><p>Training-data composition, differences in how sensors detect faces, and intrinsic differences between groups, such as age, all affect an algorithm’s performance. The <a href="https://assets.publishing.service.gov.uk/media/693002a4cdec734f4dff4149/1a_Cognitec_NPL_Equitability_Report_October_25.pdf" target="_blank">United Kingdom estimated</a> that its FRT exposed some groups, such as women and darker-skinned people, to risks of misidentification as high as two orders of magnitude greater than it did to others.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Five faces arranged left to right, from easy to hard to recognize." class="rm-shortcode" data-rm-shortcode-id="ce19d3eb3745de15489274ebe5083f06" data-rm-shortcode-name="rebelmouse-image" id="3ab1e" loading="lazy" src="https://spectrum.ieee.org/media-library/five-faces-arranged-left-to-right-from-easy-to-hard-to-recognize.png?id=65407777&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Less clear photographs are harder for FRT to process.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">iStock</small></p><p>What happens with photos of people who aren’t cooperating, or vendors that train algorithms on biased datasets, or field agents who demand a swift match from a huge dataset? Here, things get murky.</p><div class="ieee-sidebar-medium"><h3>Facial Recognition Gone Wrong</h3><p><strong>THE NEGATIVES OF FALSE POSITIVES</strong></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Detroit Police SUV with American flag decal on side under bright sunlight." 
class="rm-shortcode" data-rm-shortcode-id="1a424f342f44dff48e8b6b05c79f5032" data-rm-shortcode-name="rebelmouse-image" id="c102c" loading="lazy" src="https://spectrum.ieee.org/media-library/detroit-police-suv-with-american-flag-decal-on-side-under-bright-sunlight.png?id=65407650&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">2020: <a href="https://quadrangle.michigan.law.umich.edu/issues/winter-2024-2025/flawed-facial-recognition-technology-leads-wrongful-arrest-and-historic" target="_blank">Robert Williams was wrongfully arrested</a> and detained. The ensuing settlement requires Detroit police to enact policies that recognize FRT’s limits. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">iStock</small></p><p><strong>ALGORITHMIC BIAS</strong></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt='Red sign reads "Security cameras in use" with camera graphic.' class="rm-shortcode" data-rm-shortcode-id="014ac05f2fe587ca01643c64c750e331" data-rm-shortcode-name="rebelmouse-image" id="f4f1f" loading="lazy" src="https://spectrum.ieee.org/media-library/red-sign-reads-security-cameras-in-use-with-camera-graphic.png?id=65407620&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">2023: <a href="https://incidentdatabase.ai/cite/619/" target="_blank">Court bans Rite Aid from using facial recognition for five years</a> over its use of a racially biased algorithm. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">iStock</small></p><p><strong>TOO FAST, TOO FURIOUS?</strong></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Back of ICE officer in tactical gear facing a house." 
class="rm-shortcode" data-rm-shortcode-id="0004b023a075c21698cdf88cfd0b4106" data-rm-shortcode-name="rebelmouse-image" id="889f9" loading="lazy" src="https://spectrum.ieee.org/media-library/back-of-ice-officer-in-tactical-gear-facing-a-house.png?id=65407619&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">2026: U.S. immigration agents <a href="https://www.404media.co/ices-facial-recognition-app-misidentified-a-woman-twice/" target="_blank">misidentify a woman they’d detained as two different women</a>. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">VICTOR J. BLUE/BLOOMBERG/GETTY IMAGES </small></p></div><p><span>Consider a busy trade fair using FRT to check attendees against a database, or gallery, of images of the 10,000 registrants, for example. Even at 99.9 percent accuracy you’ll get about a dozen false positives or negatives, which may be worth the trade-off to the fair organizers. But if police start using something like that across a city of 1 million people, the number of potential victims of mistaken identity rises, as do the stakes.</span></p><p><span>What if we ask FRT to tell us if the government has ever recorded and stored an image of a given person? That’s what U.S. Immigration and Customs Enforcement <a href="https://illinoisattorneygeneral.gov/News-Room/Current-News/001%20-%20Complaint%201.12.26.pdf?language_id=1" target="_blank">agents have done since June 2025</a>, using the Mobile Fortify app. The agency conducted more than 100,000 FRT searches in the first six months. 
The size of the potential gallery is at least <a href="https://sam.gov/opp/b016354c5bd045fa92e4886878747dc8/view" target="_blank">1.2 billion images</a>.</span></p><p><span>At that size, assuming even best-case images, the system is likely to return around 1 million false matches, but at a rate at least 10 times as high for darker-skinned people, depending on the subgroup.</span></p><p>Responsible use of this powerful technology would involve independent identity checks, multiple sources of data, and a clear understanding of the error thresholds, says computer scientist <a href="https://www.cics.umass.edu/about/directory/erik-learned-miller" target="_blank">Erik Learned-Miller</a> of the University of Massachusetts Amherst: “<a href="https://spectrum.ieee.org/joy-buolamwini" target="_blank">The care we take</a> in deploying such systems should be proportional to the stakes.”</p>]]></description><pubDate>Mon, 30 Mar 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/facial-recognition-gone-wrong</guid><category>Facial-recognition</category><category>Privacy</category><category>Surveillance</category><category>Machine-vision</category><category>Computer-vision</category><dc:creator>Lucas Laursen</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/illustration-34-orange-women-icons-1-blue-man-icon-labels-for-skin-tone-and-gender-comparisons.jpg?id=65407585&amp;width=980"></media:content></item><item><title>How 5G Non-Terrestrial Networks Enable Ubiquitous Global Connectivity</title><link>https://content.knowledgehub.wiley.com/5g-ntn-takes-flight-technical-overview-of-5g-non-terrestrial-networks/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/rohde-schwarz-logo.png?id=26851523&width=980"/><br/><br/><p><span>5G covers less than 40% of the world’s landmass. This whitepaper details how 3GPP Release 17 addresses six satellite challenges: delay, Doppler, path loss, polarization, spectrum, and architecture.</span></p><p><strong><span>What Readers Will Learn</span></strong></p><ol><li><span>Why non-terrestrial networks are now integral to the 5G roadmap — Understand how the Third Generation Partnership Project (3GPP) Release 17 incorporates satellite-based connectivity into the 5G system, targeting ubiquitous coverage across maritime, remote, and polar regions where terrestrial networks reach less than 40% of the world’s landmass. Learn the distinction between New Radio non-terrestrial networks for mobile broadband and Internet of Things non-terrestrial networks for low-power machine-type communications.</span></li><li>How satellite constellation design shapes coverage, capacity, and latency — Examine how orbit altitude (low earth orbit, medium earth orbit, geostationary earth orbit), beam footprint geometry, elevation angle, and inclination determine coverage area, round-trip time, and differential delay across user equipment within a single beam. 
Explore the trade-offs between transparent bent-pipe and regenerative onboard-processing payload architectures.</li><li>What radio frequency challenges distinguish satellite links from terrestrial propagation — Explore the major radio frequency challenges: high free-space path loss, time-variant Doppler, differential delay across large beam footprints, Faraday rotation of polarization through the ionosphere, and spectrum coexistence between terrestrial and non-terrestrial bands in the S-band and L-band.</li><li>How 5G protocols must adapt to support non-terrestrial connectivity — Learn the specific amendments to hybrid automatic repeat request operation, timing advance control (split into common and user-equipment-specific components), random access procedure timing extensions, discontinuous reception power saving adaptations, earth-fixed tracking area management, conditional handover mechanisms, and feeder link switching for service continuity in a unique propagation environment.</li></ol><p><a href="https://content.knowledgehub.wiley.com/5g-ntn-takes-flight-technical-overview-of-5g-non-terrestrial-networks/" target="_blank">Download this free whitepaper now!</a></p>
<img src="https://spectrum.ieee.org/media-library/a-woman-in-a-pink-jacket-with-a-large-button-of-a-teenage-girl-affixed-to-it-stands-in-front-of-a-large-banner-with-the-names-an.jpg?id=65404697&width=1200&height=600&coordinates=0%2C294%2C0%2C295"/><br/><br/><p>In a landmark case, a jury found this week that Meta and YouTube negligently designed their platforms and harmed the plaintiff, a 20-year-old woman referred to as Kaley G.M. The jury agreed with the plaintiff that <a href="https://spectrum.ieee.org/medical-experts-say-addiction-to-technology-is-a-growing-concern" target="_blank">social media is addictive</a> and harmful and was deliberately designed to be that way. This finding aligns with my view as a clinical psychologist: that social media addiction is not a failure of users, but a feature of the platforms themselves. I believe that accountability must extend beyond individuals to the systems and incentives that shape their behavior.</p><p>In my clinical practice, I regularly see patients struggling with compulsive social media use. Many describe a pattern of “doomscrolling,” often using social media to numb themselves after a long day. Afterwards, they feel guilty and stressed about the time lost, yet have had limited success changing this pattern on their own.</p><p><span>It’s easy to understand why scrolling can be so addictive. Social media interfaces are built around a powerful behavioral mechanism known as intermittent reinforcement, the strongest and most effective type of reinforcement learning, says </span><a href="https://vivo.brown.edu/display/jbrewer2" target="_blank">Judson Brewer</a><span>, an addiction researcher at Brown University. This is the same mechanism that slot machines rely on: Users never know when the next reward—a shower of quarters, or a slew of likes and comments—will appear. Not all the videos in our feeds captivate us, but if we scroll long enough, we are bound to arrive at one that does. 
The ongoing search for rewards ensnares us and reinforces itself.</span></p><h2>Why Social Media Feels Addictive </h2><p>Individuals typically struggle on their own to address compulsive social media use. This should be no surprise, as habits are not typically broken through sheer discipline but rather by altering the reinforcement loops that sustain them. Brewer argues that “there’s actually no neuroscientific evidence for the presence of willpower.” Placing the burden to self-regulate solely on users misses the deeper issue: These platforms are engineered to override individual control.</p><p><a href="https://www.hhs.gov/surgeongeneral/reports-and-publications/youth-mental-health/social-media/index.html" target="_blank">A growing body of research</a> identifies social media use and constant digital connectivity as important influences on the growing incidence of adolescent mental health problems. Brewer notes that adolescents are particularly vulnerable, as they are in a “developmental phase” in which reinforcement learning processes are especially strong. This vulnerability can be exploited by the design features of large social media platforms.</p><h2>How Platforms Are Designed to Maximize Engagement </h2><p><a href="https://www.npr.org/2024/10/11/nx-s1-5150088/the-biggest-findings-from-uncensored-tiktok-lawsuit-documents" rel="noopener noreferrer" target="_blank">NPR uncovered records</a> from a recent lawsuit filed by Kentucky’s attorney general against TikTok. According to these documents, TikTok implemented interface mechanisms such as autoplay, infinite scrolling, and a highly personalized recommendation algorithm that were systematically optimized to maximize user engagement. </p><p>TikTok’s algorithmically tailored “For You” content continuously tracks user behaviors, such as how long a video is watched and whether it is replayed or quickly skipped. 
The feed then curates short videos for the user based on past scrolling behavior and what is most likely to hold attention.</p><p>These documents show one example of a tech company knowingly designing products to maximize attention. I believe social media companies also have the capacity to reduce addictiveness through intentional design choices.</p><h2>How Governments Are Regulating Social Media</h2><p>The good news is we are not helpless. There are multiple levers for change: how we collectively talk about social media, how our governments regulate its design and access, and how we hold companies accountable for practices that shape user behavior.</p><p>Some countries are moving quickly to set policy around social media use. Australia has imposed a minimum age of 16 for social media accounts, with similar bans <a href="https://techcrunch.com/2026/03/06/social-media-ban-children-countries-list" rel="noopener noreferrer" target="_blank">pending</a> in Denmark, France, and Malaysia.</p><p>These bans typically rely on age verification. Users without verified accounts can still passively watch videos on platforms like YouTube, but this approach removes many of the most addictive features, including infinite scroll, personalized feeds, notifications, and systems for followers and likes. At the same time, <a href="https://spectrum.ieee.org/age-verification" target="_self">age verification may cause different problems</a> in the online ecosystem.</p><p>Other countries are targeting social media use in specific contexts. South Korea, for example, <a href="https://www.bbc.com/news/articles/c776ye6lrvzo" rel="noopener noreferrer" target="_blank">banned smartphone use in classrooms</a>. 
And the United Kingdom is taking a different approach; its <a href="https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/childrens-information/childrens-code-guidance-and-resources/age-appropriate-design-a-code-of-practice-for-online-services/" rel="noopener noreferrer" target="_blank">Age Appropriate Design Code</a> instructs platforms to prioritize children’s safety while designing products. The code includes strong privacy defaults, limits on data collection, and constraints on features that nudge users toward greater engagement.</p><h2>How Social Media Platforms Could Be Redesigned</h2><p>A <a href="https://mhanational.org/wp-content/uploads/2025/03/Breaking-the-Algorithm-report.pdf" rel="noopener noreferrer" target="_blank">report</a> called <em>Breaking the Algorithm</em>, from Mental Health America, argues that social media platforms should shift from maximizing engagement to supporting well-being. It calls for revamping recommendation systems to spot patterns of unhealthy use and adjusting feeds accordingly—for example, by limiting extreme or distressing content. </p><p>The report also argues that users should not have to intentionally opt out of harmful design features. Instead, the safest settings should be the default. The report supports regulatory measures aimed at limiting features such as autoplay and infinite scroll while enforcing privacy and safety settings. </p><p>Platforms could also give users more control by adding natural speed bumps, such as stopping points or break reminders during scrolling. <a href="https://dl.acm.org/doi/fullHtml/10.1145/3334480.3382810" rel="noopener noreferrer" target="_blank">Research</a> shows that interrupting infinite scroll with prompts such as “Do you want to keep going?” substantially reduces mindless scrolling and improves memory of content.</p><p>Some social media platforms are already experimenting with more ethical engagement. 
<a href="https://mastodon.social/explore" rel="noopener noreferrer" target="_blank">Mastodon</a>, an open-source, decentralized platform, displays posts chronologically rather than ranking them for engagement, and does not offer algorithmically generated feeds like “For You.” <a href="https://bsky.app/" rel="noopener noreferrer" target="_blank">Bluesky</a> gives users control by letting them customize their own algorithms and toggle between different feed types, such as chronological or topic-based filters.</p><p>In light of the recent verdict, it is time for a national conversation about accountability for social media companies. Individual responsibility will always be important, but so are the mechanisms employed by big tech to shape user behavior. If social media platforms are currently designed to capture attention, they can also be designed to give some of it back. </p>]]></description><pubDate>Fri, 27 Mar 2026 19:05:59 +0000</pubDate><guid>https://spectrum.ieee.org/social-media-trial</guid><category>Addiction</category><category>Screen-addiction</category><category>Internet-addiction</category><category>Facebook</category><category>Google</category><category>Social-media</category><dc:creator>Daniel Katz</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-woman-in-a-pink-jacket-with-a-large-button-of-a-teenage-girl-affixed-to-it-stands-in-front-of-a-large-banner-with-the-names-an.jpg?id=65404697&amp;width=980"></media:content></item><item><title>IEEE Professional Development Suite Teaches In-Demand Skills</title><link>https://spectrum.ieee.org/ieee-professional-development-suite</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-woman-in-a-cleansuit-carefully-inspecting-a-semiconductor-wafer-in-a-lab.jpg?id=65416186&width=1200&height=600&coordinates=0%2C625%2C0%2C625"/><br/><br/><p>In today’s technological landscape, the only constant is the rate of obsolescence. As engineers move deeper into the eras of 6G, ubiquitous artificial intelligence, and hyper-miniaturized electronics, a traditional degree is only a starting point.</p><p>To remain competitive in today’s job market, technical specialists must evolve into future-ready professionals by cultivating more than just niche expertise. Success now demands a high degree of adaptive intelligence and strategic communication, allowing specialists to translate complex data into actionable business decisions as industry shifts accelerate.</p><p>To bridge the gap between technical proficiency and organizational leadership, the <a href="https://innovationatwork.ieee.org/professional-development/" rel="noopener noreferrer" target="_blank">IEEE Professional Development Suite</a> offers training programs designed to build the strategic competencies required to navigate today’s complex landscape. The suite provides deep technical dives into domains such as telecommunications connectivity and microelectronics reliability. Organizations can stay ahead of the curve through informed decision-making and a future-ready workforce.</p><h2>Mastery of electrostatic discharge and 5G networks</h2><p>Within the semiconductor sector, which is <a href="https://www.mckinsey.com/industries/semiconductors/our-insights/semiconductors-have-a-big-opportunity-but-barriers-to-scale-remain" rel="noopener noreferrer" target="_blank">projected to become a US $1 trillion industry by 2030</a>, electrostatic discharge (ESD) is a major reliability challenge. 
Because even a microscopic, unnoticed discharge can compromise a semiconductor, ESD issues account for <a href="https://www.escatec.com/blog/esd-electronics-manufacturing" rel="noopener noreferrer" target="_blank">up to one-third of all field failures</a>, according to the <a href="https://www.esda.org/about-us/" rel="noopener noreferrer" target="_blank">EOS/ESD Association</a>.</p><p>IEEE’s targeted training—the online <a href="https://innovationatwork.ieee.org/professional-development/ieee-practical-esd-protection-design/" rel="noopener noreferrer" target="_blank">Practical ESD Protection Design certificate program</a>—equips teams with technical protocols to mitigate the risks and ensure long-term hardware reliability. Specialized ESD <a href="https://spectrum.ieee.org/electrostatic-discharge" target="_self">training</a> has become essential for chip designers and manufacturing professionals seeking to improve discharge control.</p><p>The interactive modules cover theory, real-world case studies, and practical mitigation techniques. The standards-based instruction is aligned with <a href="https://blog.ansi.org/ansi/ansi-esd-s20-20-2021-protection-electronic-parts/" rel="noopener noreferrer" target="_blank">ANSI/ESD S20.20-2021: Protection of Electrical and Electronic Parts</a> and other industry guidelines.</p><p>As 5G network capabilities expand globally, so does the demand for engineers who can master the protocols and procedures required to manage complex telecommunications systems. 
The IEEE <a href="https://innovationatwork.ieee.org/professional-development/5g-6g-essential-protocols-and-procedures-training-and-innovation-testbed/" rel="noopener noreferrer" target="_blank">5G/6G Essential Protocols and Procedures Training and Innovation Testbed</a>, in partnership with <a href="https://wraycastle.com/" rel="noopener noreferrer" target="_blank">Wray Castle</a>, takes a deep dive into the 5G network function framework, registration processes, and packet data unit session establishment. The <a href="https://spectrum.ieee.org/ieee-5g-and-6g-training" target="_self">program</a> is designed for system engineers, integrators, and technical professionals responsible for 5G signaling. Stakeholders such as network operators, equipment vendors, regulators, and handset manufacturers may also find the program beneficial.</p><p class="pull-quote"><span>“The IEEE Professional Development Suite ensures that learners are not just keeping pace with change but helping to drive it.”</span></p><p>To bridge the gap between theory and practice, the course includes three months of free access to the <a href="https://spectrum.ieee.org/ieee-5g-and-6g-training" target="_self">IEEE 5G/6G Innovation Testbed</a>. The secure, cloud-based platform offers a private, end-to-end 5G network environment where individuals and teams can gain hands-on experience with critical system signaling and troubleshooting.</p><h2>Leadership training programs</h2><p>Technical knowledge alone is not enough to climb the corporate ladder. To thrive today, engineering leaders must have a strategic vision and people-centric leadership skills.</p><p>The <a href="https://innovationatwork.ieee.org/professional-development/leading-technical-teams/" target="_blank">IEEE Leading Technical Teams</a> training program focuses on the challenges of managing engineers in R&D environments and fostering creative problem-solving through an immersive learning experience. 
It’s designed for professionals who have been in a leadership position for at least six months.</p><p>Participants can gain self-awareness through the program’s 360-degree assessment, which gathers feedback from peers and direct reports to build a personalized development plan. The goal is to help technical professionals transition from high-performing individual contributors into leaders who drive innovation by inspiring their teams rather than just managing tasks.</p><p>Organizations can enroll groups of 10 or more to learn as a cohort—which can ensure that everyone stays on the same page while setting a training schedule that fits the team’s deadlines.</p><p>In collaboration with the <a href="https://www.business.rutgers.edu/" target="_blank">Rutgers Business School</a>, IEEE offers two mini MBA programs to bridge the gap between technical expertise and executive leadership. The programs offer flexibility to fit the demanding schedules of senior professionals. The online format lets participants engage with content as their time permits, while live virtual office hours with faculty provide opportunities for real-time interaction.</p><p>Over the 12-week curriculum of the <a href="https://innovationatwork.ieee.org/professional-development/rutgers-online-mini-mba-for-engineers/" rel="noopener noreferrer" target="_blank">mini MBA for engineers</a>, technical professionals master core competencies such as financial analysis, business strategy, and negotiation to effectively transition into management roles.</p><p>The <a href="https://innovationatwork.ieee.org/professional-development/rutgers-online-mini-mba-artificial-intelligence/" rel="noopener noreferrer" target="_blank">mini MBA in artificial intelligence</a> embeds AI literacy directly into business strategy rather than treating the technology as a standalone subject. 
Participants learn to evaluate AI through financial modeling and governance frameworks, gaining a practical foundation to lead initiatives that incorporate the technology.</p><p>The programs are offered to individuals as well as to organizations interested in training groups of 10 employees or more.</p><h2>Earning credits that count</h2><p>All the programs within the IEEE Professional Development Suite offer continuing education units and professional development hours.</p><p>Earning globally recognized credits provides a professional advantage, signaling a commitment to growth that often serves as a prerequisite for advancing into senior, lead, or principal roles. Additionally, the credits satisfy annual professional engineering license renewal requirements, ensuring practitioners remain compliant while expanding their capabilities.</p><h2>Why curated content matters</h2><p>Developed by <a href="https://ea.ieee.org" rel="noopener noreferrer" target="_blank">IEEE Educational Activities</a>, the training programs are peer-reviewed and built to align with industry needs. 
By focusing on upskilling (improving current skills) and reskilling (learning new ones), the IEEE Professional Development Suite ensures that learners are not just keeping pace with change but helping to drive it.</p>]]></description><pubDate>Fri, 27 Mar 2026 18:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/ieee-professional-development-suite</guid><category>Ieee-products-and-services</category><category>Education</category><category>Training</category><category>Ieee-educational-activities</category><category>Careers</category><category>Ieee-professional-development-suite</category><category>Type-ti</category><dc:creator>Angelique Parashis</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-woman-in-a-cleansuit-carefully-inspecting-a-semiconductor-wafer-in-a-lab.jpg?id=65416186&amp;width=980"></media:content></item><item><title>Video Friday: Beep! Beep! Roadrunner Bipedal Bot Breaks the Mold</title><link>https://spectrum.ieee.org/roadrunner-bipedal-robot</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/two-wheeled-balancing-robot-leans-while-rolling-in-an-indoor-testing-lab.png?id=65415603&width=1200&height=600&coordinates=0%2C60%2C0%2C61"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://2026.ieee-icra.org/">ICRA 2026</a>: 1–5 June 2026, VIENNA</h5><h5><a href="https://roboticsconference.org/">RSS 2026</a>: 13–17 July 2026, SYDNEY</h5><h5><a href="https://mrs.fel.cvut.cz/summer-school-2026/">Summer School on Multi-Robot Systems</a>: 29 July–4 August 2026, PRAGUE</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="9kae-uame1u"><em>“Roadrunner” is a new bipedal wheeled robot prototype designed for multimodal locomotion. It weighs around 15 kg (33 lb) and can seamlessly switch between its side-by-side and in-line wheel modes and stepping configurations depending on what is required for navigating its environment. The robot’s legs are entirely symmetric, allowing it to point its knees forward or backward, which can be used to avoid obstacles or manage specific movements. A single control policy was trained to handle both side-by-side and in-line driving. 
Several behaviors, including standing up from various ground configurations and balancing on one wheel, were successfully deployed zero-shot on the hardware.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="76bd6c7edd7ff24700dad004edd086aa" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/9kae-UAME1U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://rai-inst.com/">Robotics and AI Institute</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="tyasuwrkv4e">Incredibly (INCREDIBLY!) <a data-linked-post="2657767692" href="https://spectrum.ieee.org/nasa-mars-sample-return" target="_blank">NASA</a> says that this is actually happening.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bc72d2ac20028faf8c32287c722f0ce9" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/TYasUWRkv4E?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><blockquote><em>NASA’s SkyFall mission will build on the success of the Ingenuity Mars helicopter, which achieved the first powered, controlled flight on another planet. 
Using a daring midair deployment, SkyFall will deliver a team of next-gen Mars helicopters to scout human landing sites and map subsurface water ice.</em></blockquote><p>[ <a href="https://www.nasa.gov/news-release/nasa-unveils-initiatives-to-achieve-americas-national-space-policy/">NASA</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="jsk-ff2mycg"><em>NASA’s MoonFall mission will blaze a path for future <a data-linked-post="2662067231" href="https://spectrum.ieee.org/video-friday-training-artemis" target="_blank">Artemis</a> missions by sending four highly mobile drones to survey the lunar surface around the Moon’s South Pole ahead of astronauts’ arrival there. MoonFall is built on the legacy of NASA’s Ingenuity Mars Helicopter. The drones will be launched together and released during descent to the surface. They will land and operate independently over the course of a lunar day (14 Earth days) and will be able to explore hard-to-reach areas, including permanently shadowed regions (PSRs), surveying terrain with high-definition optical cameras and other potential instruments.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="24cd6ef18a5608c71e3afdc55a0d2507" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/JsK-ff2Mycg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>For what it’s worth, <a data-linked-post="2671177906" href="https://spectrum.ieee.org/moon-landing-2025" target="_blank">Moon landings</a> have a success rate well under 50%. 
So let’s send some robots there to land over and over!</p><p>[ <a href="https://www.nasa.gov/news-release/nasa-unveils-initiatives-to-achieve-americas-national-space-policy/">NASA</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="hdjiukrfvca"><em>In Science Robotics, researchers from the Tangible Media group led by Professor Hiroshi Ishii, together with colleagues from Politecnico di Bari, present Electrofluidic Fiber Muscles: a new class of artificial muscle fibers for robots and wearables. Unlike the rigid servo motors used in most robots, these fiber-shaped muscles are soft and flexible. They combine electrohydrodynamic (EHD) fiber pumps—slender tubes that move liquid using electric fields to generate pressure silently, with no moving parts—with fluid-filled fiber actuators. These artificial muscles could enable more agile untethered robots, as well as wearable assistive systems with compact actuation integrated directly into textiles.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="401e33c5be7f9feea5a4219dd786d2ab" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/HdjIukrfvcA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.media.mit.edu/projects/electrofluidicmuscle/overview/">MIT Media Lab</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="xzfzkmq2rrq"><em>In this study, we developed MEVIUS2, an open-source quadruped robot. It is comparable in size to the Boston Dynamics Spot, equipped with two lidars and a C1 camera, and can freely climb stairs and steep slopes! 
All hardware, software, and learning environments are released as open source.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ef4dd2071d09d4ac4c97d9e6993be2ea" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/xzfZkmQ2rrQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://github.com/haraduka/mevius2">MEVIUS2</a> ]</p><p>Thanks, Kento!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="zj07hhjnrto"><em>What goes into preparing for a live performance? Arun highlights the reliability testing that goes into trying a new behavior for Spot.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="075596c69914e064444994a7d74fe2dc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/zj07hHJnrto?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://bostondynamics.com/">Boston Dynamics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="41kpw6jwxty"><em>In this work, a multirobot planning and control framework is presented and demonstrated with a team of 40 indoor robots, including both ground and aerial robots.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="e8811d7981e9be82f23859aafea31249" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/41kPW6JwXtY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" 
width="100%"></iframe></span></p><p>That soundtrack, though.</p><p>[ <a href="https://proroklab.github.io/agile-mapf/">GitHub</a> ]</p><p>Thanks, Keisuke!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="img5a_ykjms"><em>Quadrupedal robots can navigate cluttered environments like their animal counterparts, but their floating-base configuration makes them vulnerable to real-world uncertainties. Controllers that rely only on proprioception (body sensing) must physically collide with obstacles to detect them. Those that add exteroception (vision) need precisely modeled terrain maps that are hard to maintain in the wild. DreamWaQ++ bridges this gap by fusing both modalities through a resilient multimodal reinforcement learning framework. The result: a single controller that handles rough terrains, steep slopes, and high-rise stairs—while gracefully recovering from sensor failures and situations it has never seen before.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="61dd08501e1c8f10d63a43acb5bb2911" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Img5a_yKjMs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>That cliff behavior is slightly uncanny.</p><p>[ <a href="https://dreamwaqpp.github.io/">DreamWaQ++</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="toh8pd4o34u">I take issue with this from iRobot:</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="1d86fae43d52011c45db0102b9fdc86b" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/tOH8pD4O34U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" 
width="100%"></iframe></span></p><p>While the <a data-linked-post="2650276443" href="https://spectrum.ieee.org/robotic-blimp-could-explore-hidden-chambers-of-great-pyramid" target="_blank">pyramid exploration</a> that iRobot did was very cool, they did it with a custom-made robot designed for a very specific environment. Cleaning your floors is way, way harder. Here’s a bit more detail on the pyramids thing:</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="1b4538cb0137311b0b433425e56096f0" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Pts3w2Pw8F4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.youtube.com/watch?v=Pts3w2Pw8F4">iRobot</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="t1vub0knci4">More robots in the circus, please!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="89aa286bf5c7d16563d9223df6cc3d2b" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/T1VUb0kncI4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://danielsimu.com/acrobot/">Daniel Simu</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="f2hasoladgm"><em>MIT engineers have designed a wristband that lets wearers control a robotic hand with their own movements. 
By moving their hands and fingers, users can direct a robot to perform specific tasks, or they can manipulate objects in a virtual environment with high-dexterity control.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="88281d6e7db31cc58ef4b327756809b2" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/F2HaSoladgM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://news.mit.edu/2026/wristband-enables-wearers-control-robotic-hand-with-own-movements-0325">MIT</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="0ozaw6rryie"><em>At <a data-linked-post="2676218078" href="https://spectrum.ieee.org/nvidia-groq-3" target="_blank">Nvidia GTC 2026</a>, we showcased how AI is moving into the physical world. Visitors interacted with robots using voice commands, watching them interpret intent and act in real time—powered by our KinetIQ AI brain.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="95460eeec4fec87fd729fe5aa4314531" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/0oZAw6rryIE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://thehumanoid.ai/">Humanoid</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="7sl93jl8_o8">Props to Sony for its continued support and updates for <a data-linked-post="2670284977" href="https://spectrum.ieee.org/aibo" target="_blank">Aibo</a>!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="f05e5074c48cd251f832782efa434226" 
style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/7sL93Jl8_O8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://us.aibo.com/myaibo/">Aibo</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="yd7enmgniei">This robot looks like it could be a little curvier than normal?</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="3be1fe9e24c6ee745f0f1fa7a2a1b201" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Yd7eNmGNIeI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.limxdynamics.com/en">LimX Dynamics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="dncww0qmkce"><em>Developed by Zhejiang Humanoid Robot Innovation Center Co., Ltd., the Naviai Robot is an intelligent cooking device. It can autonomously process ingredients, perform cooking tasks with high accuracy, adjust smart kitchen equipment in real time, and complete postcooking cleaning. 
Equipped with multimodal perception technology, it adapts to daily kitchen environments and ensures safe and stable operation.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="f58863823d5082a3e5e104c47b9e68f6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/dNcWW0qMkcE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>That 7x is doing some heavy lifting.</p><p>[ <a href="https://en.zhejianglab.com/institutescenters/researchunits/interdisciplinaryresearchcenters/researchcenterforintelligentrobot/">Zhejiang Lab</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="gthxsfhdt8q">This CMU RI Seminar is by Hadas Kress-Gazit from Cornell, on “Formal Methods for Robotics in the Age of Big Data.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="a0150919b813daa034367d7a41c9d68e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/gthXSFhDt8Q?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><blockquote><em>Formal methods—mathematical techniques for describing systems, capturing requirements, and providing guarantees—have been used to synthesize robot control from high-level specification, and to verify robot behavior. Given the recent advances in robot learning and data-driven models, what role can, and should, formal methods play in advancing robotics? 
In this talk I will give a few examples for what we can do with formal methods, discuss their promise and challenges, and describe the synergies I see with data-driven approaches.</em></blockquote><p>[ <a href="https://www.ri.cmu.edu/event/formal-methods-for-robotics-in-the-age-of-big-data/">Carnegie Mellon University Robotics Institute</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 27 Mar 2026 16:30:03 +0000</pubDate><guid>https://spectrum.ieee.org/roadrunner-bipedal-robot</guid><category>Video-friday</category><category>Nasa</category><category>Bipedal-robots</category><category>Quadruped-robots</category><category>Artificial-muscles</category><category>Humanoid-robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/two-wheeled-balancing-robot-leans-while-rolling-in-an-indoor-testing-lab.png?id=65415603&amp;width=980"></media:content></item><item><title>A New Way to Spray Paint Color</title><link>https://spectrum.ieee.org/spray-paint-color-creator</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-portable-device-with-four-spray-paint-canisters-at-the-bottom-and-tubing-and-electronics-mounted-on-a-frame-above-them.png?id=65404484&width=1200&height=600&coordinates=0%2C481%2C0%2C481"/><br/><br/><p>We’re all familiar with mixing red, yellow, and blue paint in various ratios to instantly make all kinds of colors. This works great for oils or watercolors, but fails when it comes to cans of spray paint. The paint droplets can’t be blended once they are aerosolized. Consequently, although spray cans are great for applying even coats of paint to large areas very quickly, spray-paint artists need a separate can for every color they want to use—until now.</p><p>Back in 2018, when I first saw professional spray artists lugging dozens to hundreds of cans to their work sites, I was inspired to start noodling on a solution. I’ve worked at Google X, Alphabet’s “<a href="https://spectrum.ieee.org/astro-teller-captain-of-moonshots-at-x" target="_blank">moonshot factory</a>,” as a hardware engineer, and I’m now building a startup in mechanical-design software. I’m no painter, but I know my way around mechatronics.</p><p>I wanted my solution to be inexpensive and simple enough to build as a DIY project and functional enough for an artist to use, without breaking their flow. So I began prototyping a system that combines base colors while they are still in pressurized form from off-the-shelf cans.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="An illustration of how a spring-loaded arm driven by a stepper motor with a roller bearing at one end opens and closes a tube by pressing down on it. 
" class="rm-shortcode" data-rm-shortcode-id="0b3ebec76b6afd77a69158d3844d4e11" data-rm-shortcode-name="rebelmouse-image" id="ac492" loading="lazy" src="https://spectrum.ieee.org/media-library/an-illustration-of-how-a-spring-loaded-arm-driven-by-a-stepper-motor-with-a-roller-bearing-at-one-end-opens-and-closes-a-tube-by.png?id=65404489&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">This new rotary pinch valve can be opened and closed in tens of milliseconds and prevents backpressure from clogging lines.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">James Provost</small></p><p><span>I tried a few approaches where pres-surized paint from the base-color cansfed through tubes into a mixing channel, before emerging from a spray head. To control the ratios, I decided to borrow a trick that would be familiar to anyone who’s ever had to control the bright-ness of an LED using a microcontroller: <a href="https://en.wikipedia.org/wiki/Pulse-width_modulation" target="_blank">pulse-width modulation</a>. Initially, I used electronically controlled solenoid valves to release the paint from the cans. The paint would flow into a mixing channel for a relative duration that corresponded to the ratio of the base colors required to make a given hue. However, this failed because different cans never have the same internal pressure. Whenever two valves were open at the same time, the pressure difference would make paint flow backward into the lower-pressure can. </span></p><p><span></span><span>As an alternative, I removed the mixing channel and tried making the paint pulses from each can sequentially converge into a tube so that no more than one valve would ever be open at a time. Surprisingly, this worked perfectly. The backflow was eliminated, and it turned out that the natural turbulence of the flow was sufficient to mix the paints. Let’s say you want to produce a clementine orange color. 
This requires yellow and red paint in a ratio of 1:2, so the yellow valve opens for a period of time, and then the red valve opens for twice as long. The system then keeps repeating this cycle of pulses at a rapid pace to instantly create the spray-paint color you want.</span></p><p><span>The theory is straightforward, but making this work in practice took quite a bit of experimentation. First, I had to determine the actual durations of pulses that would produce evenly mixed colors, not just their ratios. I also needed to work out the size of the tubing (too narrow and you’d get low spray force; too wide and you’d have paint accumulating in the tubes). Eventually I settled on a maximum pulse duration of 250 milliseconds and a tube diameter of 1 millimeter.</span></p><h2>Inventing a New Valve</h2><p>Even though the system worked, the solenoid valves I used constantly clogged up. Designed for water purifiers, the valves didn’t prevent paint from entering the mechanism, where the paint would harden. Moreover, when the valves were turned off, they could stop backflow only if the inlet remained pressurized. So disconnecting a paint can from the system would cause instant leaking. Other off-the-shelf valves I tried couldn’t cycle fast enough and were too expensive.</p><p class="pull-quote"><span>I had some spectacular failures along the way of the sort that only pressurized paint can provide.</span></p><p>So I created my own mechanism: a high-speed, electronically controlled, rotary pinch valve. It has a stepper motor that rotates a lever with a rolling bearing to constrict fluid flow inside a flexible tube. This concept isn’t new—there’s something like it in every peristaltic pump. But I added a spring to firmly hold the lever in the closed position against any back pressure when the motor isn’t powered, making it a normally closed valve that isolates the attached can.
Additionally, the valve is fast enough to be open for as little as 30 milliseconds.</p><p>I went through four major prototypes of the system before reaching a working version, and I had some spectacular failures along the way of the sort that only pressurized paint can provide. The final version uses four base colors—red, yellow, blue, and white—with the color mix controlled by four knobs attached to an <a href="https://spectrum.ieee.org/the-making-of-arduino" target="_blank">Arduino</a> Nano and a small display. The flow of paint is triggered by a push button placed above the spray head, similar to a spray can’s nozzle.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A diagram showing the arrangement of valves and control wires, along with a timing diagram of valves opening and closing, showing the red paint open for twice as long as the yellow paint in a continuous cycle." class="rm-shortcode" data-rm-shortcode-id="73eb055784062c7a49358216d6805cd4" data-rm-shortcode-name="rebelmouse-image" id="cd414" loading="lazy" src="https://spectrum.ieee.org/media-library/a-diagram-showing-the-arrangement-of-valves-and-control-wires-along-with-a-timing-diagram-of-valves-opening-and-closing-showin.png?id=65404474&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Cans holding base colors (A) are attached to valves (B). An Arduino-based control panel (C) opens and closes valves to mix paint before it is aerosolized (E). By quickly opening and closing valves with varying durations in sequence (D), you can mix paint in specific ratios to create desired colors.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">James Provost</small></p><p>The length of time a base color’s paint valve can be open is one of eight values between 30 and 250 ms. This means that the entire system—which I coincidentally dubbed Spectrum—can create hundreds of distinct spray-paint colors instantly. 
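The pulse-sequencing scheme can be sketched in a few lines of code. This is only an illustration, not the author’s firmware: the names are hypothetical, and it assumes the eight pulse levels are evenly spaced between 30 and 250 ms and that the largest ratio in a mix is pinned to the maximum pulse, neither of which the article specifies.

```python
# Illustrative sketch of the sequential pulse scheme (hypothetical names;
# evenly spaced pulse levels are an assumption -- the article gives only
# the 30 ms and 250 ms endpoints).

MIN_MS, MAX_MS, N_LEVELS = 30, 250, 8
LEVELS_MS = [round(MIN_MS + i * (MAX_MS - MIN_MS) / (N_LEVELS - 1))
             for i in range(N_LEVELS)]  # 30, 61, 93, 124, 156, 187, 219, 250

def pulse_schedule(ratios):
    """Map base-color mix ratios to one pulse per valve, opened one at a time."""
    longest = max(ratios.values())
    schedule = []
    for color, r in ratios.items():
        if r == 0:
            continue  # this valve stays closed for the whole cycle
        target = MAX_MS * r / longest                        # largest ratio gets the max pulse
        nearest = min(LEVELS_MS, key=lambda v: abs(v - target))  # snap to a level
        schedule.append((color, nearest))
    return schedule  # the controller repeats this cycle rapidly

# Clementine orange: yellow and red in a 1:2 ratio.
print(pulse_schedule({"yellow": 1, "red": 2}))
```

Because the ratios are normalized before quantizing, equivalent mixes such as 2:3 and 4:6 collapse to the same schedule, which is consistent with the article’s point that the system yields fewer distinct colors than the raw count of duration combinations.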
It produces less than 8<span><sup>4</sup></span> (or 4,096) colors because duration combinations whose ratios are equivalent will produce the same color—for example, 2:3 and 4:6. I added a force sensor to the push button, which allows for a gradient: Two color mixes can be dialed in, and as I increase my thumb’s pressure on the button, the paint mix shifts from one color to the other.</p><p>Spectrum’s various fixtures are 3D-printed, and project files and videos are available through my website at <a href="https://www.sandeshmanik.com/projects/spectrum" target="_blank">https://www.sandeshmanik.com/projects/spectrum</a>. Preprints of technical descriptions of the <a href="https://www.techrxiv.org/doi/full/10.36227/techrxiv.176366531.13333886" target="_blank">rotary pinch valve</a> and <a href="https://www.techrxiv.org/doi/full/10.36227/techrxiv.176462677.71158790" target="_blank">mixing methodology</a> are available on TechRxiv. The total cost for the bill of materials is less than US $150.</p><p>Working on and off on the side for about seven years, I finally finished developing my system and writing the documentation in late 2025. After I posted a video to social media, I was heartened by the immediate positive response from spray-paint artists around the world. I’m now creating step-by-step instructions so that nontechnical people can build their own Spectrum paint sprayer.
I look forward to seeing what creations artists out in the wild make!</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="407b6954fd324d9042a7d7a7de438050" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/IicmGjPu4J0?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> </p> ]]></description><pubDate>Fri, 27 Mar 2026 14:41:42 +0000</pubDate><guid>https://spectrum.ieee.org/spray-paint-color-creator</guid><category>Spray-paint</category><category>Arduino</category><category>Mechatronics</category><dc:creator>Sandesh Manik</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/a-portable-device-with-four-spray-paint-canisters-at-the-bottom-and-tubing-and-electronics-mounted-on-a-frame-above-them.png?id=65404484&amp;width=980"></media:content></item><item><title>How NYU’s Quantum Institute Bridges Science and Application</title><link>https://spectrum.ieee.org/nyu-quantum-institute</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/person-in-white-suit-working-with-semiconductor-equipment-in-a-lab.jpg?id=65322091&width=1200&height=600&coordinates=0%2C140%2C0%2C140"/><br/><br/><p><em>This sponsored article is brought to you by <a href="https://engineering.nyu.edu/" rel="noopener noreferrer" target="_blank">NYU Tandon School of Engineering</a>.</em></p><p>Within a 6 mile radius of New York University’s (NYU) campus, there are more than 500 tech industry giants, banks, and hospitals. This isn’t just a fact about real estate, it’s the foundation for advancing quantum discovery and application.</p><p>While the world races to harness quantum technology, NYU is betting that the ultimate advantage lies not solely in a lab, but in the dense, demanding, and hyper-connected urban ecosystem that surrounds it. With the launch of its <a href="https://www.nyu.edu/about/news-publications/news/2025/october/nyu-launches-quantum-institute-.html" rel="noopener noreferrer" target="_blank"><span>NYU Quantum Institute</span></a> (NYUQI), NYU is positioning itself as <a href="https://www.nyu.edu/about/news-publications/news/2025/october/top-quantum-scientists-convene-at-nyu.html" target="_blank">the central node</a> in this network; a “full stack” powerhouse built on the conviction that it has found the right place, and the right time, to turn quantum science into tangible reality.</p><p>Proximity advantage is essential because quantum science demands it. Globally, the quest for practical quantum solutions — whether for computing, sensing, or secure communications — has been stalled, in part, by fragmentation. 
Physicists and chemical engineers invent new materials, computer scientists develop new algorithms, and electrical engineers build new devices, but all three often work in isolated academic silos.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Three men pose at the 4th Annual NYC Quantum Summit 2025; attendees converse in the background." class="rm-shortcode" data-rm-shortcode-id="1dd6dfe45b73630bb9040545fcdfae7d" data-rm-shortcode-name="rebelmouse-image" id="33e2d" loading="lazy" src="https://spectrum.ieee.org/media-library/three-men-pose-at-the-4th-annual-nyc-quantum-summit-2025-attendees-converse-in-the-background.jpg?id=65322345&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Gregory Gabadadze, NYU’s dean for science, NYU physicist and Quantum Institute Director Javad Shabani, and Juan de Pablo, Anne and Joel Ehrenkranz Executive Vice President for Global Science and Technology and executive dean of the Tandon School of Engineering.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Veselin Cuparić/NYU</small></p><p><span>NYUQI’s premise is that breakthroughs happen “at the interfaces between different domains,” according to </span><a href="https://engineering.nyu.edu/faculty/juan-de-pablo" target="_blank"><span>Juan de Pablo</span></a><span>, Executive Vice President for Global Science and Technology at NYU and Executive Dean of the NYU Tandon School of Engineering. The Institute is built to actively force those necessary collisions — to integrate the physicists, engineers, materials scientists, computer scientists, biologists, and chemists vital to quantum research into one holistic operation. 
This institutional design ensures that the hardware built by one team can be immediately tested by software developed by another, accelerating progress in a way that isolated departments never could.</span></p><p class="pull-quote"><span>NYUQI’s premise is that breakthroughs happen at the interfaces between different domains. <strong>—Juan de Pablo, NYU Tandon School of Engineering</strong></span></p><p>NYUQI’s integrated vision is backed by a massive physical commitment to the city. The NYUQI is not just a theoretical concept; its collaborators will be housed in a renovated, <a href="https://www.nyu.edu/about/news-publications/news/2025/may/nyu-entering-long-term-lease-at-770-broadway.html" target="_blank"><span>million-square-foot facility</span></a> in the heart of Manhattan’s West Village, backed by a state-of-the-art <a href="https://engineering.nyu.edu/research/nanofab" target="_blank">Nanofabrication Cleanroom</a> in Brooklyn serving as a high-tech foundry. This is where the theoretical meets physical devices, allowing the Institute to test and refine the process from materials science to deployment.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt='NYU building exterior with "Science + Tech" signage, flags, and a passing yellow taxi.' 
class="rm-shortcode" data-rm-shortcode-id="605cc71d844927d3fb0a05fb086fedcf" data-rm-shortcode-name="rebelmouse-image" id="bceaa" loading="lazy" src="https://spectrum.ieee.org/media-library/nyu-building-exterior-with-science-tech-signage-flags-and-a-passing-yellow-taxi.jpg?id=65322352&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">NYUQI will be housed in a renovated, million-square-foot facility in the heart of Manhattan’s West Village.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Tracey Friedman/NYU</small></p><p><span>Leading this effort is NYUQI Director </span><a href="https://as.nyu.edu/faculty/javad-shabani.html" target="_blank"><span>Javad Shabani</span></a><span>, who, along with the other members, is turning the Institute into a hub for collaboration with private and public sector partners with quantum challenges that need solving. As de Pablo explains, “Anybody who wants to work on quantum with NYU, you come in through that door, and we’ll send you to the right place.” For New York’s vast ecosystem of tech giants and financial institutions, the NYUQI offers a resource they can’t build on their own: a cohesive team of experts in quantum phenomena, quantum information theory, communication, computing, materials, and optics, and a structured path to applying theoretical discoveries to advanced quantum technologies.</span></p><h2>Solving the Challenge of Quantum Research</h2><p><span>The NYUQI’s integrated structure is less about organizational management, and more about scientific requirement. </span><span>The challenge of quantum is that the hardware, the software, and the programming are inherently interconnected — each must be designed to work with the other. 
To solve this, the Institute focuses on three applications of quantum science: Quantum Computing, Quantum Sensing, and Quantum Communications.</span></p><p>For Shabani, this means creating an integrated environment that bridges discovery with experimentation, starting with the physical components all the way to quantum algorithm centers. That will include a fabrication facility in the new building in Manhattan, as well as the <a href="https://engineering.nyu.edu/news/chips-and-science-act-spurs-nanofab-cleanroom-ribbon-cutting-nyu-tandon-school-engineering" target="_blank"><span>NYU Nanofab</span></a> in Brooklyn directed by Davood Shahjerdi. New York Senators Charles Schumer and Kirsten Gillibrand recently secured <a href="https://www.nyu.edu/about/news-publications/news/2026/february/nyu-receives--1-million-in-funding-from-senators-schumer-and-gil.html" target="_blank">$1 million in congressionally-directed spending</a> to bring Thermal Laser Epitaxy (TLE) technology — which allows for atomic-level purity, minimal defects, and streamlined application of a diverse range of quantum materials — to NYU, marking the first time the equipment will be used in the U.S.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Two people hold semiconductor wafers during a presentation with audience taking photos." class="rm-shortcode" data-rm-shortcode-id="1a0dbca6c6bb8fb7dbf4d399689b2922" data-rm-shortcode-name="rebelmouse-image" id="d434c" loading="lazy" src="https://spectrum.ieee.org/media-library/two-people-hold-semiconductor-wafers-during-a-presentation-with-audience-taking-photos.jpg?id=65322354&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">NYU Nanofab manager Smiti Bhattacharya and Nanofab Director Davood Shahjerdi at the nanofab ribbon-cutting in 2023. 
The nanofab is the first academic cleanroom in Brooklyn, and serves as a prototyping facility for the NORDTECH Microelectronics Commons consortium.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">NYU WIRELESS</small></p><p>Tight control over fabrication allows researchers to pivot quickly, so that a breakthrough in one area — say, finding a cheaper, more reliable material like silicon carbide — can be explored for use across all three applications. It also gives academics and the private sector alike unique access to sophisticated pieces of specialty equipment whose maintenance knowledge and costs make them all but impossible to operate without the right staffing and environment.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="3D model of a laboratory layout, highlighting the Yellow Room in bright yellow." class="rm-shortcode" data-rm-shortcode-id="e7c1128703d96de919ed2ce440a97416" data-rm-shortcode-name="rebelmouse-image" id="62d58" loading="lazy" src="https://spectrum.ieee.org/media-library/3d-model-of-a-laboratory-layout-highlighting-the-yellow-room-in-bright-yellow.png?id=65322596&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The NYU Nanofab is Brooklyn’s first academic cleanroom, with a strategic focus on superconducting quantum technologies, advanced semiconductor electronics, and devices built from quantum heterostructures and other next-generation materials.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">NYU Nanofab</small></p><p><span>That speed and adaptability is the NYUQI’s competitive edge.
It turns fragmented challenges into holistic solutions, positioning the Institute to solve real-world problems for its New York neighbors—from highly secure data transmission to next-generation drug discovery.</span></p><h2>Testing Quantum Communication in NYC</h2><p>The integrated approach also makes the NYUQI a testbed for the most critical near-term applications. Take Quantum Communications, which is essential for creating an “unhackable” quantum internet. In an industry first, NYU worked with the quantum start-up Qunnect to <a href="https://www.nyu.edu/about/news-publications/news/2023/september/nyu-takes-quantum-step-in-establishing-cutting-edge-tech-hub-in-.html" target="_blank"><span>send quantum information through standard telecom fiber</span></a> in New York City between Manhattan and Brooklyn through a 10-mile quantum networking link. Instead of simulating communication challenges in a lab, the NYUQI team is already leveraging NYU’s city-wide campus by utilizing existing infrastructure to test secure quantum transmission between Manhattan and Brooklyn. </p><p class="pull-quote">The NYUQI team is already leveraging NYU’s city-wide campus by utilizing existing infrastructure to test secure quantum transmission between Manhattan and Brooklyn.</p><p>This isn’t just theory; the team is building a functioning prototype in the most demanding, dense urban environment in the world. Real-time, real-world deployment is a critical component missing at more isolated institutions. When the NYUQI achieves results, the technology will be that much more readily available to the massive financial, tech, and communications organizations operating right outside its door.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Scientist in protective gear working in a laboratory with samples."
class="rm-shortcode" data-rm-shortcode-id="d644b791788af64769a853d0516834e6" data-rm-shortcode-name="rebelmouse-image" id="dc2fb" loading="lazy" src="https://spectrum.ieee.org/media-library/scientist-in-protective-gear-working-in-a-laboratory-with-samples.jpg?id=65322378&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">NYUQI includes a state-of-the-art Nanofabrication Cleanroom in Brooklyn serving as a high-tech foundry.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">NYU Tandon</small></p><p><span>While the Institute has built the physical infrastructure and designed the necessary scientific architecture, its enduring contribution will be the specialized workforce it creates for the new quantum economy. This addresses the market’s greatest deficit: a lack of individuals trained not just in physics, but in the integrated, full-stack approach that quantum demands.</span></p><p>By creating a pipeline of 100 to 200 graduate and doctoral students who are encouraged to collaborate across Computing, Sensing, and Communications, the NYUQI is narrowing the skills gap. These will be future leaders who can speak the language of the physicist, the materials scientist, and the engineer simultaneously. This commitment to interdisciplinary talent is also fueled by the launch of the new Master of Science in Quantum Science & Technology program at NYU Tandon, positioning the university among a select group worldwide offering such a specialized degree.</p><p>Interdisciplinary education creates the shared language and understanding poised to make graduates coming from collaborations in the NYUQI extremely valuable in the current landscape. Quantum challenges are not just technical; they are managerial and philosophical as well. 
An engineer working with the NYUQI will understand the requirements of the nanofabrication cleanroom and the foundations of superconducting qubits for quantum computing, just as a physicist will understand the application needs of an industry partner like a large financial institution. In a field where the entire team must be able to communicate seamlessly, these are professionals truly equipped to rapidly translate discovery into deployable technology. Creating a talent pipeline at scale will provide a missing link that converts New York’s vast commercial energy into genuine quantum advantage.</p><h2>NYUQI: Building Talent, Technology, and Structure</h2><p><span>The vision for the NYUQI </span><span>is an act of strategic geography that plays directly into the sheer volume of opportunity and demand right outside its new facility. </span><span>By building the talent, the technology, and the structure necessary to capitalize on this dense environment, NYU is not just participating in the quantum race; it is actively steering it.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Conference room with attendees seated at round tables, facing a presenter on stage." class="rm-shortcode" data-rm-shortcode-id="f5e2ae16e0c5ebc4f0828d52ed639115" data-rm-shortcode-name="rebelmouse-image" id="02b7e" loading="lazy" src="https://spectrum.ieee.org/media-library/conference-room-with-attendees-seated-at-round-tables-facing-a-presenter-on-stage.jpg?id=65322370&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Attendees of NYU’s 2025 Quantum Summit.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Tracey Friedman/NYU</small></p><p>The initial hypothesis for the NYUQI was simple: the ultimate advantage lies in pursuing the science in the right place at the right time.
Now, the institute will ensure that the next wave of scientific discovery, capable of solving previously intractable problems in finance, medicine, and security, will be conceived, built, and tested in the heart of New York City.</p>]]></description><pubDate>Fri, 27 Mar 2026 10:02:05 +0000</pubDate><guid>https://spectrum.ieee.org/nyu-quantum-institute</guid><category>Nyu-tandon</category><category>Quantum-computing</category><category>Quantum-internet</category><category>Semiconductors</category><category>Quantum-communications</category><dc:creator>Wiley</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/person-in-white-suit-working-with-semiconductor-equipment-in-a-lab.jpg?id=65322091&amp;width=980"></media:content></item><item><title>Improve Engineering Communication by Translating Technical Detail</title><link>https://spectrum.ieee.org/engineering-communication</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-illustration-of-stylized-people-wearing-business-casual-clothing.webp?id=65257424&width=1200&height=600&coordinates=0%2C150%2C0%2C150"/><br/><br/><p><em>This article is crossposted from </em>IEEE Spectrum<em>’s careers newsletter. <a href="https://engage.ieee.org/Career-Alert-Sign-Up.html" rel="noopener noreferrer" target="_blank"><em>Sign up now</em></a><em> to get insider tips, expert advice, and practical strategies, <em><em>written i<em>n partnership with tech career development company <a href="https://www.parsity.io/" rel="noopener noreferrer" target="_blank">Parsity</a> and </em></em></em>delivered to your inbox for free!</em></em></p><h2>Engineers Aren’t Bad at Communication. They’re Just Speaking to the Wrong Audience.</h2><p>There’s a persistent myth that engineers are bad communicators. In my experience, that’s not true.</p><p>Engineers are often excellent communicators—inside their domain. We’re precise. We’re logical. We structure arguments clearly. We define terms. We reason from constraints.</p><p>The breakdown happens when the audience changes.</p><p>We’re used to speaking in highly technical language, surrounded by people who share our vocabulary. In that environment, shorthand and jargon are efficient. But outside that bubble, when talking to executives, product managers, marketing teams, or customers, that same precision can be confusing.</p><p>The problem isn’t that we can’t communicate. It’s that we forget to translate.</p><p>If you’ve ever explained a critical issue or error to a non-technical stakeholder, you’ve probably experienced this: You give a technically accurate explanation. They leave either more confused than before, or more alarmed than necessary.</p><p>Suddenly you’re spending more time clarifying your explanation than fixing the issue.</p><p>Under pressure, we default to what we know best—technical detail. But detail without context creates cognitive overload. 
The listener can’t tell what matters, what’s normal, and what’s dangerous.</p><p>That’s when the “engineers can’t communicate” narrative shows up.</p><p>In reality, we just skipped the translation step.</p><h2>The Writing Shortcut</h2><p>One of the simplest ways to improve written communication today is to run your explanation through an AI model and ask, “Would this make sense to a non-technical audience? Where would someone get confused?”</p><p>You can also say:</p><ul><li>“Rewrite this for an executive audience.”</li><li>“What analogy would help explain this?”</li><li>“Simplify this without losing accuracy.”</li></ul><p>Large language models are particularly good at identifying jargon and offering alternative framings. They’re essentially translation assistants.</p><p>Analogies are especially powerful. If you’re explaining system latency, compare it to traffic congestion. If you’re describing technical debt, compare it to skipping maintenance on a house. If you’re explaining distributed systems, try using supply chain examples.</p><p>The goal isn’t to “dumb it down.” It’s to map the unfamiliar onto something familiar.</p><p>Before sending an email or report, ask yourself:</p><ul><li>Does this audience need to understand the mechanism, or just the impact?</li><li>Does this explanation help them make a decision?</li><li>Have I defined terms they might not know?</li></ul><h2>Translation When Speaking</h2><p>When speaking—especially in meetings or presentations—most engineers have one predictable habit: We speak too fast.</p><p>Nerves speed us up. Speed causes filler words.
Filler words dilute authority.</p><p>To prevent that, follow a simple rule: Speak 10 to 15 percent slower than feels natural.</p><p>Slowing down reduces how often you say “um” and “uh,” gives you time to think, makes you sound more confident, and gives the listener time to process.</p><p>Another rule: Say only what the audience needs to move forward.</p><p>Explain just enough for the person to make a decision. If you overload someone with implementation details when they only need tradeoffs, you’ve made their job harder.</p><h2>The Real Skill</h2><p>The key skill in communication is audience awareness.</p><p>The same engineer who can clearly explain a concurrency bug to a peer can absolutely explain system risk to an executive. The difference is framing, vocabulary, and context. Not intelligence.</p><p>In the age of AI, where code generation is increasingly commoditized, the ability to translate complexity into clarity is becoming a defining advantage.</p><p>Engineers aren’t bad communicators. We just have to remember that outside our bubble, translation is part of the job.</p><p>—Brian</p><h2><a href="https://spectrum.ieee.org/robert-goddard-leadership" target="_self">How Robert Goddard’s Self-Reliance Crashed His Dreams</a></h2><p>Robert Goddard launched the first liquid-fueled rocket 100 years ago, but his legacy still has relevant lessons for today’s engineers. Although Goddard’s headstrong confidence in his ideas helped bring about the breakthrough, it later became an obstacle in what systems engineer Guru Madhavan calls “the alpha trap.” Madhavan writes: “We love to celebrate the lone genius, yet we depend on teams to bring the flame of genius to the people.”</p><p><a href="https://spectrum.ieee.org/robert-goddard-leadership" target="_blank">Read more here.
</a></p><h2><a href="https://cacm.acm.org/opinion/redefining-the-software-engineering-profession-for-ai/" rel="noopener noreferrer" target="_blank">Redefining the Software Engineering Profession for AI</a></h2><p>For <em><em>Communications of the ACM</em></em>, two Microsoft engineers propose a model for software engineering in the age of AI: Making the growth of early-in-career developers an explicit organizational goal. Without hiring early-career workers, the profession’s talent pipeline will eventually dry up. So, they argue, companies must hire them and develop talent, even if that comes with a short-term dip in productivity. </p><p><a href="https://cacm.acm.org/opinion/redefining-the-software-engineering-profession-for-ai/" target="_blank">Read more here. </a></p><h2><a href="https://spectrum.ieee.org/ieee-global-virtual-career-fairs" target="_self">IEEE Launches Global Virtual Career Fairs</a></h2><p>Looking for a job? Last year, IEEE Industry Engagement hosted its first virtual career fair to connect recruiters and young professionals. Several more career fairs are now planned, including two upcoming regional events and a global career fair in June. At these fairs, you can participate in interactive sessions, chat with recruiters, and experience video interviews. <br/></p><p><a href="https://spectrum.ieee.org/ieee-global-virtual-career-fairs" target="_blank">Read more here. 
</a></p>]]></description><pubDate>Wed, 25 Mar 2026 19:03:20 +0000</pubDate><guid>https://spectrum.ieee.org/engineering-communication</guid><category>Tech-careers</category><category>Practical-strategies</category><category>Careers-newsletter</category><dc:creator>Brian Jenney</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-illustration-of-stylized-people-wearing-business-casual-clothing.webp?id=65257424&amp;width=980"></media:content></item><item><title>Training Driving AI at 50,000× Real Time</title><link>https://spectrum.ieee.org/gm-scalable-driving-ai</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/sleek-suv-driving-on-a-highway-surrounded-by-trees-under-a-clear-blue-sky.png?id=65321052&width=1200&height=600&coordinates=0%2C62%2C0%2C63"/><br/><br/><p><em>This is a sponsored article brought to you by General Motors. Visit their new </em><em><a href="https://engineering.gm.com/home.html" rel="noopener noreferrer" target="_blank">Engineering Blog</a></em><em> for more insights.</em></p><p>Autonomous driving is one of the most demanding problems in physical AI. An automated system must interpret a chaotic, ever-changing world in real time—navigating uncertainty, predicting human behavior, and operating safely across an immense range of environments and edge cases.</p><p>At General Motors, we approach this problem from a simple premise: while most moments on the road are predictable, the rare, ambiguous, and unexpected events — the long tail — ultimately define whether an autonomous system is safe, reliable, and ready for deployment at scale. (Note: While here we discuss research and emerging technologies to solve the long tail required for full general autonomy, we also discuss our current approach to solving 99% of everyday autonomous driving in a deep dive on Compound AI.)</p><p>As GM advances toward <a href="https://news.gm.com/home.detail.html/Pages/news/us/en/2025/oct/1022-AI-GM-launch-eyes-off-driving-conversational-AI.html" target="_blank">eyes-off</a> highway driving, and ultimately toward fully autonomous vehicles, solving the long tail becomes the central engineering challenge.
It requires developing systems that can be counted on to behave sensibly in the most unexpected conditions.</p><p>GM is <a href="https://neurips.cc/virtual/2025/loc/san-diego/128661" rel="noopener noreferrer" target="_blank">building scalable driving AI</a> to meet that challenge — combining large-scale simulation, reinforcement learning, and foundation-model-based reasoning to train autonomous systems at a scale and speed that would be impossible in the real world alone.</p><h2>Stress-testing for the long tail</h2><p>Long-tail scenarios in autonomous driving come in a few varieties.</p><p>Some are notable for their rarity. There’s a mattress on the road. A fire hydrant bursts. A massive power outage in San Francisco disabled traffic lights, requiring <a href="https://waymo.com/blog/2025/12/autonomously-navigating-the-real-world" rel="noopener noreferrer" target="_blank">driverless vehicles to navigate</a> never-before-experienced challenges. These rare system-level interactions, especially in dense urban environments, show how unexpected edge cases can cascade at scale.</p><p>But long-tail challenges don’t just come in the form of once-in-a-lifetime rarities. They also manifest as everyday scenarios that require characteristically human courtesy or common sense. How do you queue up for a spot without blocking traffic in a crowded parking lot? Or navigate a construction zone, guided by gesturing workers and ad-hoc signs? These are simple challenges for a human driver but require inventive engineering to handle flawlessly with a machine.</p><h3>Autonomous driving scenario demand curve</h3><br/><img alt="Graph showing scenario complexity: Predictable, everyday, and rare long-tail events."
class="rm-shortcode" data-rm-shortcode-id="363e07d5da15d3b590d5cf1f9c13ba02" data-rm-shortcode-name="rebelmouse-image" id="a0363" loading="lazy" src="https://spectrum.ieee.org/media-library/graph-showing-scenario-complexity-predictable-everyday-and-rare-long-tail-events.png?id=65321037&width=980"/><h3></h3><br/><h2>Deploying vision language models</h2><p>One tool GM is developing to tackle these nuanced scenarios is the use of Vision Language Action (VLA) models. Starting with a standard Vision Language Model, which leverages internet-scale knowledge to make sense of images, GM engineers use specialized decoding heads to fine-tune for distinct driving-related tasks. The resulting VLA can make sense of vehicle trajectories and detect 3D objects on top of its general image-recognition capabilities.</p><p>These tuned models enable a vehicle to recognize that a police officer’s hand gesture overrides a red traffic light or to identify what a “loading zone” at a busy airport terminal might look like.</p><p>These models can also generate reasoning traces that help engineers and safety operators understand why a maneuver occurred — an important tool for debugging, validation, and trust.</p><h2>Testing hazardous scenarios in high-fidelity simulations</h2><p>The trouble is that driving requires split-second reaction times, so any excess latency poses an especially critical problem.
To solve this, GM is developing a “Dual Frequency VLA.” This large-scale model runs at a lower frequency to make high-level semantic decisions (“Is that object in the road a branch or a cinder block?”), while a smaller, highly efficient model handles the immediate, high-frequency spatial control (steering and braking).</p><p>This hybrid approach allows the vehicle to benefit from deep semantic reasoning without sacrificing the split-second reaction times required for safe driving.</p><p>But dealing with an edge case safely requires that the model not only understand what it is looking at but also understand how to sensibly <em>drive through</em> the challenge it’s identified. For that, there is no substitute for experience.</p><p>Which is why, each day, <a href="https://news.gm.com/home.detail.html/Pages/topic/us/en/2025/oct/1009-GMs-path-full-autonomy-Building-trust-step-by-step.html%29" rel="noopener noreferrer" target="_blank">we run millions of high-fidelity closed loop simulations</a>, equivalent to tens of thousands of human driving days, compressed into hours of simulation. We can replay actual events, modify real-world data to create new virtual scenarios, or design new ones entirely from scratch. This allows us to regularly test the system against hazardous scenarios that would be nearly impossible to encounter safely in the real world.</p><h2>Synthetic data for the hardest cases</h2><p>Where do these simulated scenarios come from? GM engineers employ a whole host of AI technologies to produce novel training data that can model extreme situations while remaining grounded in reality.</p><p>GM’s <a href="https://bmvc2025.bmva.org/proceedings/154/" rel="noopener noreferrer" target="_blank">“Seed-to-Seed Translation” research</a>, for instance, leverages diffusion models to transform existing real-world data, allowing a researcher to turn a clear-day recording into a rainy or foggy night while perfectly preserving the scene’s geometry. The result? 
A “domain change”—clear becomes rainy, but everything else remains the same.</p><p>In addition, our GM World diffusion-based simulator allows us to synthesize entirely new traffic scenarios using natural language and spatial bounding boxes. We can summon entirely new scenarios with different weather patterns. We can also take an existing road scene and add challenging new elements, such as a vehicle cutting into our path.</p><h3></h3><br/><img alt='Comparison of a 3D model and street view with a vehicle removed, labeled "Original" and "Edited".' class="rm-shortcode" data-rm-shortcode-id="0d17abe7a8791b531b5951439023ffa9" data-rm-shortcode-name="rebelmouse-image" id="f3e89" loading="lazy" src="https://spectrum.ieee.org/media-library/comparison-of-a-3d-model-and-street-view-with-a-vehicle-removed-labeled-original-and-edited.gif?id=65321060&width=980"/><h3></h3><br/><img alt="Street with several cars parked, partially flooded after heavy rain; blue geometric markings overlay." class="rm-shortcode" data-rm-shortcode-id="79227a75d41c9ee4e257dd3cd21a80e7" data-rm-shortcode-name="rebelmouse-image" id="c1ebf" loading="lazy" src="https://spectrum.ieee.org/media-library/street-with-several-cars-parked-partially-flooded-after-heavy-rain-blue-geometric-markings-overlay.gif?id=65321061&width=980"/><h3></h3><br/><img alt="Winter street with cars; blue 3D wireframe shapes overlay." class="rm-shortcode" data-rm-shortcode-id="94864ff674e3b311d384ec0114587d8d" data-rm-shortcode-name="rebelmouse-image" id="55554" loading="lazy" src="https://spectrum.ieee.org/media-library/winter-street-with-cars-blue-3d-wireframe-shapes-overlay.gif?id=65321063&width=980"/><h2></h2><p>High-fidelity simulation isn’t always the best tool for every learning task. Photorealistic rendering is essential for training perception systems to recognize objects in varied conditions. 
But when the goal is teaching decision-making and tactical planning—when to merge, or how to navigate an intersection—the computationally expensive details matter less than spatial relationships and traffic dynamics. AI systems may need billions or even trillions of lightweight examples to support reinforcement learning, where models learn the rules of sensible driving through rapid trial and error rather than relying on imitation alone.</p><p>To this end, General Motors has developed a proprietary, multi-agent reinforcement learning simulator, GM Gym, to serve as a closed-loop simulation environment that can both simulate high-fidelity sensor data and model thousands of drivers per second in an abstract environment known as “Boxworld.”</p><p>By focusing on essentials like spatial positioning, velocity, and rules of the road while stripping away details like puddles and potholes, Boxworld creates a high-speed training environment for reinforcement learning models, operating 50,000 times faster than real time and simulating 1,000 km of driving per second of GPU time. It’s a method that allows us to not just imitate humans, but to develop driving models that have verifiable objective outcomes, like safety and progress.</p><h2>From abstract policy to real-world driving</h2><p>Of course, the route from your home to your office does not run through Boxworld. It passes through a world of asphalt, shadows, and weather. So, to bring that conceptual expertise into the real world, GM is one of the first to employ a technique called “On Policy Distillation,” where engineers run their simulator in both modes simultaneously: the abstract, high-speed Boxworld and the high-fidelity sensor mode.</p><p>Here, the reinforcement learning model—which has practiced countless abstract miles to develop a perfect “policy,” or driving strategy—acts as a teacher. It guides its “student,” the model that will eventually live in the car.
This transfer of wisdom is incredibly efficient; just 30 minutes of distillation can capture the equivalent of 12 hours of raw reinforcement learning, allowing the real-world model to rapidly inherit the safety instincts its cousin painstakingly honed in simulation.</p><h2>Designing failures before they happen</h2><p>Simulation isn’t just about training the model to drive well, though; it’s also about trying to make it fail. To rigorously stress-test the system, GM utilizes <a href="https://arxiv.org/abs/2309.05810" target="_blank">a differentiable pipeline called SHIFT3D</a>. Instead of just recreating the world, SHIFT3D actively modifies it to create “adversarial” objects designed to trick the perception system. The pipeline takes a standard object, like a sedan, and subtly morphs its shape and pose until it becomes a “challenging”, fun-house version that is harder for the AI to detect. Optimizing these failure modes is what allows engineers to preemptively discover safety risks before they ever appear on the road. Iteratively retraining the model on these generated “hard” objects has been shown to reduce near-miss collisions by over 30%, closing the safety gap on edge cases that might otherwise be missed.</p><p>Even with advanced simulation and adversarial testing, a truly robust system must know its own limits. To enable safety in the face of the unknown, GM researchers add a specialized “Epistemic uncertainty head” to their models. This architectural addition allows the AI to distinguish between standard noise and genuine confusion. When the model encounters a scenario it doesn’t understand—a true “long tail” event—it signals high epistemic uncertainty. 
This acts as a principled proxy for data mining, automatically flagging the most confusing and high-value examples for engineers to analyze and add to the training set.</p><p>This rigorous, multi-faceted approach—from “Boxworld” strategy to adversarial stress-testing—is General Motors’ proposed framework for solving the final 1% of autonomy. And while it serves as the foundation for future development, it also surfaces new research challenges that engineers must address.</p><p>How do we balance the essentially unlimited data from Reinforcement Learning with the finite but richer data we get from real-world driving? How close can we get to full, human-like driving by writing down a reward function? Can we go beyond domain change to generate completely new scenarios with novel objects?</p><h2>Solving the long tail at scale</h2><p>Working toward solving the long tail of autonomy is not about a single model or technique. It requires an ecosystem — one that combines high-fidelity simulation with abstract learning environments, reinforcement learning with imitation, and semantic reasoning with split-second control.</p><p>This approach does more than improve performance on average cases. It is designed to surface the rare, ambiguous, and difficult scenarios that determine whether autonomy is truly ready to operate without human supervision.</p><p>There are still open research questions. How human-like can a driving policy become when optimized through reward functions? How do we best combine unlimited simulated experience with the richer priors embedded in real human driving? And how far can generative world models take us in creating meaningful, safety-critical edge cases?</p><p>Answering these questions is central to the future of autonomous driving. 
At GM, we are building the tools, infrastructure, and research culture needed to address them — not at small scale, but at the scale required for real vehicles, real customers, and real roads.</p>]]></description><pubDate>Wed, 25 Mar 2026 19:00:05 +0000</pubDate><guid>https://spectrum.ieee.org/gm-scalable-driving-ai</guid><category>Autonomous-vehicles</category><category>Self-driving-cars</category><category>Gm</category><dc:creator>Ben Snyder</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/sleek-suv-driving-on-a-highway-surrounded-by-trees-under-a-clear-blue-sky.png?id=65321052&amp;width=980"></media:content></item><item><title>30 Years Ago, Robots Learned to Walk Without Falling</title><link>https://spectrum.ieee.org/honda-p2-robot-ieee-milestone</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/collage-of-hondas-p2-humanoid-robot-from-1996-against-a-background-of-figures-related-to-its-technical-features.jpg?id=65402169&width=1200&height=600&coordinates=0%2C250%2C0%2C250"/><br/><br/><p>When you hear the term <a href="https://spectrum.ieee.org/search/?q=humanoid+robot" target="_self"><em><em>humanoid robot</em></em></a>, you may think of <a href="https://starwars.fandom.com/wiki/C-3PO" rel="noopener noreferrer" target="_blank">C-3PO</a>, the human-cyborg-relations android from <a href="https://www.starwars.com/" rel="noopener noreferrer" target="_blank"><em><em>Star Wars</em></em></a><em><em>.</em></em> C-3PO was designed to assist humans in communicating with robots and alien species. The droid, which first appeared on screen in 1977, joined the characters on their adventures, walking, talking, and interacting with the environment like a human. It was ahead of its time.</p><p>Before the release of <em><em>Star Wars</em></em>, a few androids did exist and could move and interact with their environment, but none could do so without losing its balance.</p><p>It wasn’t until 1996 that the first autonomous robot capable of walking without falling was developed in Japan. Honda’s <a href="https://hondanews.com/en-US/photos/p2-robot" rel="noopener noreferrer" target="_blank">Prototype 2</a> (P2) was nearly 183 centimeters tall and weighed 210 kilograms.
It could control its posture to maintain balance, and it could move multiple joints simultaneously.</p><p>In recognition of that decades-old feat, P2 has been honored as an <a href="https://ieeemilestones.ethw.org/Main_Page" rel="noopener noreferrer" target="_blank">IEEE Milestone</a>. The dedication ceremony is scheduled for 28 April at the <a href="https://www.mr-motegi.jp/eng/collection-hall/?from=navi_header_drawer_global_en" rel="noopener noreferrer" target="_blank">Honda Collection Hall</a>, located on the grounds of the <a href="https://en.wikipedia.org/wiki/Mobility_Resort_Motegi" rel="noopener noreferrer" target="_blank">Mobility Resort Motegi</a>, in Japan. The machine is on display in the hall’s robotics exhibit, which showcases the evolution of Honda’s humanoid technology.</p><p>In support of the Milestone nomination, members of the <a href="https://ieee-jp.org/section/nagoya/" rel="noopener noreferrer" target="_blank">IEEE Nagoya (Japan) Section</a> wrote: “This milestone demonstrated the feasibility of humanlike locomotion in machines, setting a new standard in robotics.” The <a href="https://ieeemilestones.ethw.org/Milestone-Proposal:Honda%27s_P2,_First_Bipedal_Robot,_1996" rel="noopener noreferrer" target="_blank">Milestone proposal</a> is available on the <a href="https://ethw.org/Main_Page" rel="noopener noreferrer" target="_blank">Engineering Technology and History Wiki</a>.</p><h2>Developing a domestic android</h2><p>In 1986 Honda researchers Kazuo Hirai, Masato Hirose, Yuji Haikawa, and <a href="https://research.com/u/toru-takenaka" rel="noopener noreferrer" target="_blank">Toru Takenaka</a> set out to develop what they called a “domestic robot” to collaborate with humans. 
It would be able to climb stairs, remove impediments in its path, and tighten a nut with a wrench, according to their <a href="https://www.cs.cmu.edu/~cga/humanoids/honda.pdf" rel="noopener noreferrer" target="_blank">research paper on the project</a>.</p><p>“We believe that a robot working within a household is the type of robot that consumers may find useful,” the authors wrote.</p><p>But a machine that would do household chores had to be able to move around obstacles such as furniture, stairs, and doorways. It needed to autonomously walk and read its environment like a human, according to the researchers.</p><p>No robot could do that at the time. The closest technologists got was the <a href="https://www.humanoid.waseda.ac.jp/booklet/kato_2.html" rel="noopener noreferrer" target="_blank">WABOT-1</a>. Built in 1973 at <a href="https://www.waseda.jp/top/en" rel="noopener noreferrer" target="_blank">Waseda University</a>, in Tokyo, the WABOT had eyes and ears, could speak Japanese, and used tactile sensors embedded on its hands as it gripped and moved objects. Although the WABOT could walk, albeit unsteadily, it couldn’t maneuver around obstacles or maintain its balance. It was powered by an external battery and computer.</p><p>To build an android, the Honda team began by analyzing how people move, using themselves as models.</p><p>That led to specifications for the robot that gave it humanlike dimensions, including the location of the leg joints and how far the legs could rotate.</p><p>Once they began building the machine, though, the engineers found it difficult to satisfy every specification. Adjustments were made to the number of joints in the robot’s hips, knees, and ankles, according to the research paper. Humans have four hip, two knee, and three ankle joints; P2’s predecessor had three hip, one knee, and two ankle joints. The arms were treated similarly.
A human’s four shoulder and three elbow joints became three shoulder joints and one elbow joint in the robot.</p><p>The researchers installed existing Honda motors and hydraulics in the hips, knees, and ankles to enable the robot to walk. Each joint was operated by a DC motor with a harmonic-drive reduction gear system, which is compact and offered high torque capacity.</p><p>To test their ideas, the engineers built what they called E0. The robot, which was just a pair of connected legs, successfully walked. It took about 15 seconds to take each step, however, and it moved using static walking in a straight line, according to <a href="https://global.honda/en/ASIMO/history/" rel="noopener noreferrer" target="_blank">a post about the project</a> on Honda’s website. (Static walking is when the body’s center of mass is always within the foot’s sole. Humans walk with their center of mass below their navel.)</p><p>The researchers created several algorithms to enable the robot to walk like a human, according to the Honda website. 
These algorithms allowed the robot to use a locomotion mechanism, dynamic walking, whereby the robot stays upright by constantly moving and adjusting its balance, rather than keeping its center of mass over its feet, according to a video on the YouTube channel <a href="https://www.youtube.com/watch?v=BCAZkjXgBE4" rel="noopener noreferrer" target="_blank">Everything About Robotics Explained</a>.</p><p class="pull-quote">“P2 was not just a technical achievement; it was a catalyst that propelled the field of humanoid robotics forward, demonstrating the potential for robots to interact with and assist humans in meaningful ways.” <strong>—IEEE Nagoya Section</strong></p><p>The Honda team installed rubber brushes on the bottom of the machine’s feet to reduce vibrations from the landing impacts (the force experienced when its feet touch the ground)—which had made the robot lose its balance.</p><p>Between 1987 and 1991, three more prototypes (E1, E2, and E3) were built, each testing a new algorithm. E3 was a success.</p><p>With the dynamic walking mechanism complete, the researchers continued their quest to make the robot stable. The team added 6-axis sensors to detect the force with which the ground pushed back against the robot’s feet, as well as the movements of each foot and ankle, allowing the robot to adjust its gait in real time for stability.</p><p>The team also developed a posture-stabilizing control system to help the robot stay upright. A local controller directed how the electric motor actuators needed to move so the robot could follow the leg joint angles when walking, according to the research paper.</p><p>During the next three years, the team tested the systems and built three more prototypes (E4, E5, and E6), which had boxlike torsos atop the legs.</p><p>In 1993 the team was finally ready to build an android with arms and a head that looked more like C-3PO, dubbed <em><em>Prototype 1</em></em> (P1).
Because the machine was meant to help people at home, the researchers determined its height and limb proportions based on the typical measurements of doorways and stairs. The arm length was based on the ability of the robot to pick up an object when squatting.</p><p>When they finished building P1, it was 191.5 cm tall, weighed 175 kg, and used an external power source and computer. It could turn a switch on and off, grab a doorknob, and carry a 70 kg object.</p><p>P1 was not launched publicly but instead used to conduct research on how to further improve the design. The engineers looked at how to install an internal power source and computer, for example, as well as how to coordinate the movement of the arms and legs, according to Honda.</p><p>For P2, four video cameras were installed in its head—two for vision processing and the other two for remote operation. The head was 60 cm wide and connected to the torso, which was 75.6 cm deep.</p><p>A computer with four <a href="https://en.wikipedia.org/wiki/MicroSPARC" target="_blank">microSparc II</a> processors running a real-time operating system was added into the robot’s torso. The processors were used to control the arms, legs, joints, and vision-processing cameras.</p><p>Also within the body were DC servo amplifiers, a 20-kg nickel-zinc battery, and a wireless Ethernet modem, according to the research paper. 
The battery lasted for about 15 minutes; the machine also could be charged by an external power supply.</p><p>The hardware was enclosed in white-and-gray casing.</p><p>P2, which was launched publicly in 1996, could walk freely, climb up and down stairs, push carts, and perform some actions wirelessly.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="4c1ac513d31347c699292e05c673df46" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/FEXSqsW6rMM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">P2, which was launched publicly in 1996, could walk freely, climb up and down stairs, push carts, and perform some actions wirelessly.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">King Rose Archives</small></p><p><span>The following year, Honda’s engineers released the smaller and lighter </span><a href="https://www.youtube.com/watch?v=hS82TL73V3E" target="_blank">P3</a><span>. It was 160 cm tall and weighed 130 kg.</span></p><p>In 2000 the popular <a href="https://spectrum.ieee.org/honda-asimo" target="_self">ASIMO robot</a> was introduced. Although shorter than its predecessors at 130 cm, it could walk, run, climb stairs, and recognize voices and faces. The <a href="https://spectrum.ieee.org/honda-robotics-unveils-next-generation-asimo-robot" target="_self">most recent version</a> was released in 2011. Honda has retired the robot.</p><h2>Honda P2’s influence</h2><p>Thanks to P2, today’s androids are not just ideas in a laboratory. 
Robots have been deployed to work in factories and, increasingly, at <a href="https://spectrum.ieee.org/home-humanoid-robots-survey" target="_self">home</a>.</p><p>The machines are even being used for entertainment. During this year’s <a href="https://www.cgtn.com/specials/2026/spring-festival.html" target="_blank">Spring Festival</a> gala in Beijing, machines developed by Chinese startups <a href="https://www.unitree.com/" target="_blank">Unitree Robotics</a>, <a href="https://www.galbot.com/" rel="noopener noreferrer" target="_blank">Galbot</a>, <a href="https://en.noetixrobotics.com/" rel="noopener noreferrer" target="_blank">Noetix</a>, and <a href="https://www.magiclab.top/en" rel="noopener noreferrer" target="_blank">MagicLab</a><a href="https://spectrum.ieee.org/robot-martial-arts" target="_self"> performed synchronized dances, martial arts, and backflips</a> alongside human performers.</p><p>“P2’s development shifted the focus of robotics from industrial applications to human-centric designs,” the Milestone sponsors explained in the wiki entry. “It inspired subsequent advancements in humanoid robots and influenced research in fields like biomechanics and artificial intelligence.</p><p>“It was not just a technical achievement; it was a catalyst that propelled the field of humanoid robotics forward, demonstrating the potential for robots to interact with and assist humans in meaningful ways.”</p><p>To learn more about robots, check out <a href="https://spectrum.ieee.org/" target="_self"><em><em>IEEE Spectrum</em></em></a>’s <a href="https://robotsguide.com/about" rel="noopener noreferrer" target="_blank">guide</a>.</p><h2>Recognition as an IEEE Milestone</h2><p>A plaque recognizing Honda’s P2 robot as an IEEE Milestone is to be installed at the <a href="https://www.mr-motegi.jp/eng/collection-hall/?from=navi_header_drawer_global_en" rel="noopener noreferrer" target="_blank">Honda Collection Hall</a>. 
The plaque is to read:</p><p><em><em>In 1996 Prototype 2 (P2), a self-contained autonomous bipedal humanoid robot capable of stable dynamic walking and stair-climbing, was introduced by Honda. Its legged robotics incorporated real-time posture control, dynamic balance, gait generation, and multijoint coordination. Honda’s mechatronics and control algorithms set technical benchmarks in mobility, autonomy, and human-robot interaction. P2 inspired new research in humanoid robot development, leading to increasingly sophisticated successors.</em></em></p><p>Administered by the <a href="https://www.ieee.org/about/history-center" rel="noopener noreferrer" target="_blank">IEEE History Center</a> and supported by <a href="https://secure.ieeefoundation.org/site/Donation2?df_id=1680&mfc_pref=T&1680.donation=form1" rel="noopener noreferrer" target="_blank">donors</a>, the Milestone program recognizes outstanding technical developments around the world.</p>]]></description><pubDate>Wed, 25 Mar 2026 18:00:05 +0000</pubDate><guid>https://spectrum.ieee.org/honda-p2-robot-ieee-milestone</guid><category>Ieee-history</category><category>Ieee-milestone</category><category>Honda</category><category>Robotics</category><category>Asimo</category><category>Type-ti</category><dc:creator>Joanna Goodrich</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/collage-of-hondas-p2-humanoid-robot-from-1996-against-a-background-of-figures-related-to-its-technical-features.jpg?id=65402169&amp;width=980"></media:content></item><item><title>How IEEE 802.11bn Delivers Ultra-High Reliability for Wi-Fi 8</title><link>https://content.knowledgehub.wiley.com/setting-new-performance-standards-with-ieee-802-11bn-an-in-depth-overview-of-wi-fi-8/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/logo-of-rohde-schwarz-with-slogan-make-ideas-real-and-stylized-rs-in-a-diamond-shape.png?id=65355284&width=980"/><br/><br/><p><span>A technical exploration of IEEE 802.11bn’s physical and MAC layer enhancements — including distributed resource units, enhanced long range, multi-AP coordination, and seamless roaming — that define Wi-Fi 8.</span></p><p><strong><span>What Attendees will Learn</span></strong></p><ol><li><span>Why Wi-Fi 8 prioritizes reliability over raw throughput — Understand how IEEE 802.11bn shifts the design philosophy from peak data-rate gains to ultra-high reliability.</span></li><li>How new physical layer features overcome uplink power limitations — Learn how distributed resource units spread tones across wider distribution bandwidths to boost per-tone transmit power, and how enhanced long range protocol data units use power-boosted preamble fields and frequency-domain duplication to extend uplink coverage.</li><li>How advanced MAC coordination reduces interference and latency — Examine multi-access point coordination schemes — coordinated beamforming, spatial reuse, time division multiple access, and restricted target wake time — alongside non-primary channel access and priority enhanced distributed channel access.</li><li>What seamless roaming and power management mean for next-generation deployments — Discover how seamless mobility domains eliminate reassociation delays during access point transitions, and how dynamic power save and multi-link power management let devices trade capability for battery life without sacrificing connectivity.</li></ol><p><a href="https://content.knowledgehub.wiley.com/setting-new-performance-standards-with-ieee-802-11bn-an-in-depth-overview-of-wi-fi-8/" target="_blank">Download this free whitepaper now!</a></p>]]></description><pubDate>Wed, 25 Mar 2026 14:22:07 
+0000</pubDate><guid>https://content.knowledgehub.wiley.com/setting-new-performance-standards-with-ieee-802-11bn-an-in-depth-overview-of-wi-fi-8/</guid><category>Wifi</category><category>Internet</category><category>Standards</category><category>Transmission</category><category>Type-whitepaper</category><dc:creator>Rohde &amp; Schwarz</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/65355284/origin.png"></media:content></item><item><title>What Happens When You Host an AI Café</title><link>https://spectrum.ieee.org/ai-community-engagement</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/hands-hold-a-coffee-cup-with-the-letters-ai-in-white-decorative-foam.jpg?id=65351357&width=1200&height=600&coordinates=0%2C417%2C0%2C418"/><br/><br/><p>“Can I get an interview?” “Can I get a job when I graduate?” Those questions came from students during a candid discussion about artificial intelligence, capturing the anxiety many young people feel today. As companies adopt AI-driven interview screeners, restructure their workforces, and redirect billions of dollars toward <a href="https://spectrum.ieee.org/ai-data-centers-engineers-jobs" target="_blank">AI infrastructure</a>, students are increasingly unsure of what the future of work will look like.</p><p>We had gathered people together at a coffee shop in Auburn, Alabama, for what we called an AI Café. The event was designed to confront concerns about AI directly, demystifying the technology while pushing back against the growing narrative of technological doom. </p><p>AI is reshaping society at breathtaking speed. Yet the trajectory of this transformation is being charted primarily by for-profit tech companies, whose priorities revolve around market dominance rather than public welfare. Many people feel that AI is something being done <em><em>to</em></em> them rather than developed <em><em>with</em></em> them.</p><p>As computer science and liberal arts faculty at <a href="https://www.auburn.edu/" target="_blank">Auburn University</a>, we believe there is another path forward: one where scholars engage their communities in genuine dialogue about AI. Not to lecture about technical capabilities, but to listen, learn, and co-create a vision for AI that serves the public interest.</p><h2>The AI Café Model</h2><p>Last November, we ran<strong> </strong>two public <a href="https://cla.auburn.edu/news/articles/auburn-faculty-lead-community-conversations-about-ai/" target="_blank">AI Cafés</a> in Auburn. 
These were informal, 90-minute conversations between faculty, students, and community members about their experiences with AI.<strong> </strong>In these conversational forums, participants sat in clusters, questions flowed in multiple directions, and lived experience carried as much weight as technical expertise.</p><p>We avoided jargon and resisted attempts to “correct” misconceptions, welcoming whatever emotions emerged. One ground rule proved crucial: keeping discussions in the present, asking participants where they encounter AI today. Without that focus, conversations could easily drift to <a href="https://spectrum.ieee.org/artificial-general-intelligence" target="_blank">sci-fi speculation</a>. Historical analogies—to the printing press, electricity, and smartphones—helped people contextualize their reactions. And we found that without shared definitions of AI, people talked past each other; we learned to ask participants to name specific tools they were concerned about.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A pair of photos show people in chairs in a cafe raising their hands, and 3 people smiling in front of the audience." class="rm-shortcode" data-rm-shortcode-id="f35dab7bb7c94eb3c1ec083a27997de2" data-rm-shortcode-name="rebelmouse-image" id="2956f" loading="lazy" src="https://spectrum.ieee.org/media-library/a-pair-of-photos-show-people-in-chairs-in-a-cafe-raising-their-hands-and-3-people-smiling-in-front-of-the-audience.jpg?id=65352141&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Organizers Xaq Frohlich, Cheryl Seals, and Joan Harrell (right) held their first AI Café in a welcoming coffee shop and bookstore. 
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..."><a href="https://www.wellredau.com/" target="_blank">Well Red</a></small></p><p>Most important, we approached these events not as experts enlightening the masses, but as community members navigating complex change together.</p><h2>What We Learned by Listening</h2><p>Participants arrived with significant frustration. They felt that commercial interests were driving AI development “without consideration of public needs,” as one attendee put it. This echoed deeper anxieties about technology, from <a href="https://spectrum.ieee.org/tag/social-media" target="_blank">social media</a> algorithms that amplify division to devices that profit from “engagement” and replace meaningful face-to-face connection. People aren’t simply “afraid of AI.” They’re weary of a pattern where powerful technologies reshape their lives while they have little say.</p><p>Yet when given space to voice concerns without dismissal, something shifted. Participants didn’t want to stop AI development; they wanted to have a voice in it. When we asked “What would a human-centered AI future look like?” the conversation became constructive. People articulated priorities: fairness over efficiency, creativity over automation, dignity over convenience, community over individualism.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Three people standing together in front of a yellow curtain at an indoor event." 
class="rm-shortcode" data-rm-shortcode-id="26cf47b8431459d9c9ed0bf5069d1f90" data-rm-shortcode-name="rebelmouse-image" id="db5c6" loading="lazy" src="https://spectrum.ieee.org/media-library/three-people-standing-together-in-front-of-a-yellow-curtain-at-an-indoor-event.jpg?id=65357899&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The three organizers, all professors at Alabama’s Auburn University, say that including people from the liberal arts fields brought new perspectives to the discussions about AI. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..."><a href="https://www.wellredau.com/" target="_blank">Well Red</a></small></p><p>For us as organizers, the experience was transformative. Hearing how AI affected people’s work, their children’s education, and their trust in information prompted us to consider dimensions we hadn’t fully grasped. Perhaps most striking was the gratitude participants expressed for being heard. It wasn’t about filling knowledge deficits; it was about mutual learning. The trust generated created a spillover effect, renewing faith that AI could serve the public interest if shaped through inclusive processes.</p><h2>How to Start Your Own AI Café</h2><p>The “deficit model” of science communication—where experts transmit knowledge to an uninformed public—has been discredited. Public resistance to emerging technologies reflects legitimate concerns about values, risks, and who controls decision-making. 
Our events point toward a better model.</p><p>We urge engineering and liberal arts departments, professional societies, and community organizations worldwide to organize dialogues similar to our AI Cafés.</p><p>We found that a few simple design choices made these conversations far more productive. Informal and welcoming spaces such as coffee shops, libraries, and community centers helped participants feel comfortable (and serving food and drinks helped too!). Starting with small-group discussions, where people talked with neighbors, produced more honest thinking and greater participation. Partnering with colleagues in the liberal arts brought additional perspectives on technology’s social dimensions. And by making a commitment to an ongoing series of events, we built trust.</p><p>Facilitation also matters. Rather than leading with technical expertise, we began with values: We asked what kind of world participants wanted, and how AI might help or hinder that vision. We used analogies to earlier technologies to help people situate their reactions and grounded discussions in present realities, asking participants where they have encountered AI in their daily lives. We welcomed emotions constructively, transforming worry into problem solving by asking questions like: “What would you do about that?”</p><h2>Why Engineers Should Engage the Public</h2><p>Professional <a href="https://techethics.ieee.org/" target="_blank">ethics codes</a> remain abstract unless grounded in dialogue with affected communities. Conversations about what “responsible AI” means will look different in São Paulo than in Seoul, in Vienna than in Nairobi. What makes the AI Café model portable is its general principles: informal settings, values-first questions, present-tense focus, genuine listening.</p><p>Without such engagement, ethical accountability quietly shifts to technical experts rather than remaining a shared public concern. 
If we let commercial interests define AI’s trajectory with minimal public input, it will only deepen divides and <a href="https://spectrum.ieee.org/joy-buolamwini/joy-buolamwini" target="_blank">entrench inequities</a>.</p><p>AI will continue advancing whether or not we have public trust. But AI shaped through dialogue with communities will look fundamentally different from AI developed solely to pursue what’s technically possible or commercially profitable.</p><p>The tools for this work aren’t technical; they’re social, requiring humility, patience, and genuine curiosity. The question isn’t whether AI will transform society. It’s whether that transformation will be done <em><em>to</em></em> people or <em><em>with</em></em> them. We believe scholars must choose the latter, and that starts with showing up in coffee shops and community centers to have conversations where we do less talking and more listening.</p><p>The future of AI depends on it.</p><em><em><br/></em></em>]]></description><pubDate>Wed, 25 Mar 2026 14:00:05 +0000</pubDate><guid>https://spectrum.ieee.org/ai-community-engagement</guid><category>Ethics</category><category>Community-values</category><category>Responsible-ai</category><category>Algorithmic-bias</category><dc:creator>Xaq Frohlich</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/hands-hold-a-coffee-cup-with-the-letters-ai-in-white-decorative-foam.jpg?id=65351357&amp;width=980"></media:content></item><item><title>Are U.S. Engineering Ph.D. Programs Losing Students?</title><link>https://spectrum.ieee.org/us-engineering-phd-enrollment-drop</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/two-students-standing-at-a-desk-with-electronic-parts-for-a-microfluidics-test-setup.png?id=65398910&width=1200&height=600&coordinates=0%2C137%2C0%2C138"/><br/><br/><p>U.S. doctoral programs in electrical engineering form the foundation of technological advancement, training the brightest minds in the world to research, develop, and design next-generation electronics, software, electrical infrastructure, and other high-tech products and systems. Elite institutions have long served as launchpads for the engineers behind tomorrow’s technology. </p><p>Now that foundation is under strain.</p><p>With U.S. universities increasingly entangled in political battles under the second Trump administration, uncertainty is beginning to ripple through doctoral admissions for electrical engineering programs. While some departments are reducing the number of spots available in anticipation of potential federal <a href="https://spectrum.ieee.org/harvard-funding-cuts" target="_self">funding cuts</a>, others are seeing their applicant pools shrink, particularly among international students, who make up a significant portion of their programs. </p><p>In 2024 alone, U.S. universities awarded more than 2,000 doctorates in electrical and computer engineering, according to <a href="https://ncses.nsf.gov/surveys/earned-doctorates/2024#" rel="noopener noreferrer" target="_blank">data from the National Center for Science and Engineering Statistics</a>. The number of computing Ph.D.s grew significantly in the 2010s, according to <a href="https://www.nationalacademies.org/read/27862/chapter/3" rel="noopener noreferrer" target="_blank">data from the National Academies</a>, but there is still high demand for those with advanced degrees across academia, government, and industry. Now, some universities point to warning signs of waning enrollment. 
</p><p>Though not all engineers have Ph.D.s, if enrollment continues to shrink, fewer doctoral students could mean fewer engineers developing cutting-edge technology and training the next generation, potentially exacerbating existing <a href="https://spectrum.ieee.org/ai-data-centers-engineers-jobs" target="_self">labor shortages</a> as global competition for tech talent intensifies.</p><h2>Federal funding cuts affect admissions</h2><p>Public universities in particular are feeling the strain because they rely heavily on federal grants to support doctoral students.</p><p>The University of California, Los Angeles, for instance, must fund Ph.D. students for the duration of a degree—typically five years. In August 2025, the U.S. government pulled more than US $580 million in federal grants over <a href="https://www.justice.gov/opa/pr/justice-department-finds-university-california-los-angeles-violation-federal-civil-rights" rel="noopener noreferrer" target="_blank">allegations</a> that the university failed to adequately address antisemitism on campus during student protests. A federal judge has since <a href="https://www.npr.org/2025/09/23/nx-s1-5550852/trump-restore-grant-funding-ucla" rel="noopener noreferrer" target="_blank">ordered the funding to be restored</a>, but faculty began to worry that research support could be clawed back without notice, says <a href="https://www.ee.ucla.edu/subramanian-s-iyer/" rel="noopener noreferrer" target="_blank">Subramanian Iyer</a>, distinguished professor at UC Los Angeles’s department of electrical and computer engineering.</p><p>According to Iyer, departments across UC Los Angeles, including engineering, plan to scale back Ph.D. admissions this year. “The fear is that at some point, all this government money will be taken away,” Iyer says. “Lowering the admissions rate is just a way to prepare for that reality.”</p><p>In response to a request for comment, a spokesperson for the U.S. 
National Science Foundation—a major source of federal research funding at UC Los Angeles and elsewhere—said, “NSF recognizes the essential role doctoral trainees play in the nation’s engineering and STEM enterprise” and noted several of the foundation’s awards and programs that support graduate research. </p><p>Funding shocks may also force Pennsylvania State University to reshape future admissions decisions, according to <a href="https://www.eecs.psu.edu/departments/directory-detail-g.aspx?q=mvs7249" rel="noopener noreferrer" target="_blank">Madhavan Swaminathan</a>, head of Penn State’s electrical engineering department and director of the Center for Heterogeneous Integration of Micro Electronic Systems (CHIMES), a semiconductor research lab. </p><p>In 2023, the Defense Advanced Research Projects Agency (DARPA) and industry partners awarded CHIMES a five-year $32.7 million grant. But in late 2025, the agency pulled its final year of funding from the center, citing a shift in priorities from microelectronics to photonics, Swaminathan says. As a result, CHIMES’ annual budget, which supports research assistantships for roughly 100 engineering graduate students, the majority pursuing Ph.D.s, will fall from $7 million in 2026 to $3.5 million in 2027. If these constraints persist, Penn State’s engineering department may reduce the number of doctoral students it supports. </p><p>In a statement, a DARPA spokesperson told<em><em> IEEE Spectrum</em></em>: “Basic research is central to identifying world-changing technologies, and DARPA remains committed to engaging academic institutions in our program research. By design, a DARPA program typically lasts about 3 to 5 years. Once we establish proof of concept, we transition the technology for further development and turn our attention to other challenging areas of research.” </p><p>Penn State’s enrollment numbers reflect Swaminathan’s caution. He says the electrical engineering Ph.D. 
cohort shrank from 28 students in 2024 to 15 students in 2025. Applications show a similar pattern. After rising from 195 in 2024 to 247 in 2025, Ph.D. applications fell roughly 30 percent to 174 for the upcoming 2026 cohort, a sign that prospective students may be wary of applying to U.S. programs. </p><h2>Immigration restrictions and application declines</h2><p>In late January, the Trump administration <a href="https://travel.state.gov/content/travel/en/News/visas-news/immigrant-visa-processing-updates-for-nationalities-at-high-risk-of-public-benefits-usage.html" rel="noopener noreferrer" target="_blank">announced it had paused</a> visa approvals for citizens of 75 countries. Months earlier, the administration <a href="https://www.dhs.gov/news/2025/08/27/trump-administration-proposes-new-rule-end-foreign-student-visa-abuse" rel="noopener noreferrer" target="_blank">proposed new restrictions</a> on student visas, including a four-year cap. </p><p>For Texas A&M University’s graduate electrical and computer engineering programs, up to 80 percent of applicants each year are international students, according to <a href="https://experts.tamu.edu/expert/narasimha-annapareddy/" rel="noopener noreferrer" target="_blank">Narasimha Annapareddy</a>, professor and head of the department. Annapareddy says applications for the fall 2026 Ph.D. cohort have dropped by roughly 50 percent. </p><p>Annapareddy says the United States is “sending a message that migration is going to be more difficult in the future.” Foreign students often pursue degrees in the U.S. not only for academic training, he says, but to build long-term careers and lives in the country. Fewer applications from international students mean that the university forgoes a “driven and hungry” segment of the applicant pool who are highly qualified in technical fields. 
</p><p class="pull-quote">“The fear is that at some point, all this government money will be taken away.”<span><strong>— Subramanian Iyer, UC Los Angeles</strong></span></p><p><span></span>At the University of Southern California, the decline is more moderate. The freshman Ph.D. class fell from about 90 students in 2024 to roughly 70 in 2025, a reduction of 22 percent, according to <a href="https://viterbi.usc.edu/directory/faculty/Leahy/Richard" target="_blank">Richard Leahy</a>, department chair of USC’s Ming Hsieh Department of Electrical and Computer Engineering. </p><p>While Leahy says applications are down modestly overall, domestic applications have increased by roughly 15 percent. Beyond immigration restrictions, international students, particularly from countries such as India and China, may be staying in their home countries as their technology sectors expand.</p><p>“A lot of those students that would normally have come to the U.S. are now taking very good jobs working in the AI industry and other areas,” Leahy says. “There are a lot more opportunities now.”</p><h2>Workforce pipeline strains</h2><p>Some faculty say shrinking cohorts could erode the tech workforce if the pattern continues.</p><p>At UC Los Angeles, Iyer describes a doctoral ecosystem built on a chain of mentorship. Among the roughly 25 students in his lab, senior doctoral students mentor junior Ph.D. candidates, who in turn guide master’s students and undergraduates. The system depends on overlapping cohorts. Reducing the number of students hired weakens that overlap and the trickle-down benefits of the mentorship model that keeps labs functioning.</p><p>The real benefit of the university system isn’t just the teaching but also “the community that you build,” Iyer says. “As you decrease admissions, this will disappear.”</p><p>At Penn State, Swaminathan points to specialization as key to a strong workforce. 
Many doctoral students train in semiconductor engineering, feeding expert talent into the domestic chip industry. If enrollment continues to shrink over the next few years, Swaminathan says, companies may need to hire students with bachelor’s or master’s degrees, who might lack the necessary skills required to design and innovate new chips. </p><p>“Without that specialization, there’s only so much one can do,” Swaminathan says. </p><h2>The industry–academia gap </h2><p>Not all departments are shrinking. At the University of Texas at Austin, overall enrollment has remained relatively steady, according to <a href="https://www.ece.utexas.edu/people/faculty/diana-marculescu" rel="noopener noreferrer" target="_blank">Diana Marculescu</a>, chair of UT Austin’s Chandra Family Department of Electrical and Computer Engineering. </p><p>While she says recent fluctuations aren’t raising alarms, her concern lies more with alignment between research and industry. Doctoral students often train according to current grant priorities, she says. But by the time graduates enter the job market four to six years later, their specialization may not align neatly with open roles. That creates friction in the talent pipeline.</p><p>“That lack of connection might be problematic,” Marculescu says. She argues that closer collaboration between universities and the private sector could help create stronger feedback loops between hiring needs and academic research priorities.</p><p>For now, USC’s Leahy says Ph.D. graduates remain in high demand, and the current shifts have not yet translated into measurable workforce shortages. “We should be concerned about the number of Ph.D.s,” he says. 
“But there isn’t a crisis at this point.”</p>]]></description><pubDate>Wed, 25 Mar 2026 13:00:05 +0000</pubDate><guid>https://spectrum.ieee.org/us-engineering-phd-enrollment-drop</guid><category>Higher-education</category><category>Workforce-development</category><category>Universities</category><category>Semiconductor-research</category><category>Global-competition</category><dc:creator>Aaron Mok</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/two-students-standing-at-a-desk-with-electronic-parts-for-a-microfluidics-test-setup.png?id=65398910&amp;width=980"></media:content></item><item><title>Data Centers Are Transitioning From AC to DC</title><link>https://spectrum.ieee.org/data-center-dc</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/nvidia-s-high-compute-density-racks.jpg?id=65397940&width=1200&height=600&coordinates=0%2C625%2C0%2C625"/><br/><br/><p>Last week’s <a href="https://www.nvidia.com/gtc/" target="_blank">Nvidia GTC</a> conference highlighted new <a href="https://spectrum.ieee.org/nvidia-groq-3" target="_blank">chip</a> architectures to power AI. But as the chips become faster and more powerful, the remainder of data center <a data-linked-post="2674166715" href="https://spectrum.ieee.org/data-center-liquid-cooling" target="_blank">infrastructure</a> is playing catch-up. The power-delivery community  is responding: Announcements from <a href="https://www.prnewswire.com/news-releases/delta-exhibits-energy-saving-solutions-for-800-vdc-in-next-gen-ai-factories-and-digital-twin-applications-built-on-omniverse-at-nvidia-gtc-2026-302715850.html" rel="noopener noreferrer" target="_blank">Delta</a>,  <a href="https://www.eaton.com/us/en-us/company/news-insights/news-releases/2026/eaton-collaborates-with-nvidia-to-unveil-its-beam-rubin-dsx-platform.html" rel="noopener noreferrer" target="_blank">Eaton</a>, <a href="https://www.se.com/us/en/about-us/newsroom/news/press-releases/Schneider-Electric-teams-with-NVIDIA-to-develop-validated-blueprints-to-design-simulate-build-operate-and-maintain-gigawattscale-AI-Factories-69b82f61aa1027e04205d273/" target="_blank">Schneider Electric</a>, and <a href="https://www.vertiv.com/en-us/about/news-and-insights/corporate-news/2026/vertiv-brings-converged-physical-infrastructure-to-nvidia-vera-rubin-dsx-ai-factories/" rel="noopener noreferrer" target="_blank">Vertiv</a> showcased new designs for the AI era. 
Complex and inefficient AC-to-DC power conversions are gradually being replaced by DC configurations, at least in hyperscale data centers.</p><p>“While AC distribution remains deeply entrenched, advances in power electronics and the rising demands of AI infrastructure are accelerating interest in DC architectures,” says <a href="https://www.linkedin.com/in/solarchris/" target="_blank">Chris Thompson</a>, vice president of advanced technology and global microgrids at Vertiv.</p><h2>AC-to-DC Conversion Challenges</h2><p>Today, nearly all data centers are designed around AC utility power. The electrical path includes multiple conversions before power reaches the compute load. Power typically enters the data center as medium-voltage AC (1 to 35 kilovolts), is stepped down to low-voltage AC (480 or 415 volts) using a transformer, converted to DC inside an uninterruptible power supply (UPS) for battery storage, converted back to AC, and converted again to low-voltage DC (typically 54 V DC) at the server, supplying the DC power computing chips actually require.</p><p>“The double conversion process ensures the output AC is clean, stable, and suitable for data center servers,” says <a href="https://www.linkedin.com/in/luiz-fernando-huet-de-bacellar-b2112117/" target="_blank">Luiz Fernando Huet de Bacellar</a>, vice president of engineering and technology at Eaton.</p><p>That setup worked well enough for the power demands of traditional data centers, whose computational racks draw on the order of 10 kW each. For AI, the draw per rack is starting to approach 1 megawatt. At that scale, the energy losses, current levels, and copper requirements of AC-to-DC conversions become increasingly difficult to justify. Every conversion incurs some power loss. 
On top of that, as the amount of power that needs to be delivered grows, the sheer size of the converters, as well as the connector requirements of copper busbars, becomes untenable.<span> According to an Nvidia <a href="https://developer.nvidia.com/blog/nvidia-800-v-hvdc-architecture-will-power-the-next-generation-of-ai-factories/" target="_blank">blog</a>, a 1-MW rack</span><span> could require as much as 200 kilograms of copper busbar. For a 1-gigawatt data center, it could amount to 200,000 kg of copper. </span></p><h2>Benefits of High-Voltage DC Power</h2><p>By converting 13.8-kV AC grid power directly to 800 V DC at the data center perimeter, most intermediate conversion steps are eliminated. This reduces the number of fans and power-supply units, and leads to higher system reliability, lower heat dissipation, improved energy efficiency, and a smaller equipment footprint.</p><p>“Each power conversion between the electric grid or power source and the silicon chips inside the servers causes some energy loss,” says Bacellar.</p><p>Switching from 415-V AC to 800-V DC in electrical distribution enables 85 percent more power to be transmitted through the same conductor size. This happens because higher voltage reduces current demand, lowering resistive losses and making power transfer more efficient. Thinner conductors can handle the same load, reducing copper requirements by 45 percent, improving efficiency by 5 percent, and lowering total cost of ownership by 30 percent for gigawatt-scale facilities.</p><p>“In a high-voltage DC architecture, power from the grid is converted from medium-voltage AC to roughly 800-V DC and then distributed throughout the facility on a DC bus,” said Vertiv’s Thompson. 
“At the rack, compact DC-to-DC converters step that voltage down for GPUs and CPUs.”</p><p>A <a href="https://www.datacenter-asia.com/wp-content/uploads/2025/08/Omdia-Analysts-Summit-Omdia%E5%88%86%E6%9E%90%E5%B8%88%E5%B3%B0%E4%BC%9A.pdf" target="_blank">report</a> from technology advisory group <a href="https://omdia.tech.informa.com/" target="_blank">Omdia</a> claims that higher-voltage DC data centers have already appeared in China. In the Americas, the <a href="https://www.linkedin.com/posts/sharada-yeluri_microsoft-meta-google-activity-7367974455052017666-nXV5/" target="_blank">Mt. Diablo Initiative</a> (a collaboration among <a href="https://www.meta.com/about/?srsltid=AfmBOoq7uBjCU2oG3oI6Ti8VQaMdaxhAcxXmXD-twy9OTi0cbmTqGKVQ" target="_blank">Meta</a>, <a href="https://www.microsoft.com/en-us" target="_blank">Microsoft</a>, and the <a href="https://www.opencompute.org/" target="_blank">Open Compute Project</a>) is a 400-V DC rack power distribution experiment.</p><h2>Innovations in DC Power Systems</h2><p>A handful of vendors are trying to get ahead of the game. Vertiv’s 800-V DC ecosystem, which integrates with <a href="https://www.vertiv.com/en-us/about/news-and-insights/corporate-news/vertiv-develops-energy-efficient-cooling-and-power-reference-architecture-for-the-nvidia-gb300-nvl72/" target="_blank">Nvidia Vera Rubin Ultra Kyber platforms</a>, will be commercially available in the second half of 2026. Eaton, too, is well advanced with its 800-V DC systems, courtesy of a medium-voltage solid-state transformer (SST) that will sit at the heart of its DC power distribution system. Meanwhile, Delta has released 800-V DC in-row 660-kW power racks with a total of 480 kW of embedded battery backup units. And <a href="https://www.solaredge.com/us/" target="_blank">SolarEdge</a> is hard at work on a 99 percent efficient SST that will be paired with a native DC UPS and a DC power distribution layer.</p><p>But much of the industry is far behind. 
<a href="https://www.linkedin.com/in/pehughes/" target="_blank">Patrick Hughes</a>, senior vice president of strategy, technical, and industry affairs for the <a href="https://www.makeitelectric.org/" target="_blank">National Electrical Manufacturers Association</a>, says most innovation is happening at the 400-V DC level, though some are preparing 800-V DC. He believes the industry needs a complete, coordinated ecosystem, including power electronics, protection, connectors, sensing, and service‑safe components that scale together rather than in isolation. That, in turn, requires retooling manufacturing capacity for DC‑specific equipment, expanding semiconductor and materials supply, and clear, long‑term demand commitments that justify major capital investment across the value chain.</p><p>“Many are taking a cautious approach, offering limited or adapted solutions while waiting for clearer standards, safety frameworks, and customer commitments,” said Hughes. “Building the supply chain will hinge on stabilizing standards and safety frameworks so suppliers can design, certify, manufacture, and install equipment with confidence.”</p>]]></description><pubDate>Tue, 24 Mar 2026 16:00:05 +0000</pubDate><guid>https://spectrum.ieee.org/data-center-dc</guid><category>Data-centers</category><category>Power-electronics</category><category>Ai</category><dc:creator>Drew Robb</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/nvidia-s-high-compute-density-racks.jpg?id=65397940&amp;width=980"></media:content></item><item><title>What Will It Take to Build the World’s Largest Data Center?</title><link>https://spectrum.ieee.org/5gw-data-center</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/construction-symbols-on-yellow-background.png?id=65356154&width=1200&height=600&coordinates=0%2C1036%2C0%2C1036"/><br/><br/><p><strong>The undying thirst for </strong>smarter (historically, that means larger) AI models and greater adoption of the ones we already have has led to an explosion in <a href="https://epoch.ai/data/data-centers#data-insights" rel="noopener noreferrer" target="_blank">data-center construction projects</a>, unparalleled both in number and scale. Chief among them is Meta’s planned 5-gigawatt data center in Louisiana, called Hyperion, announced in June of 2025. Meta CEO Mark Zuckerberg said Hyperion will “cover a significant part of the footprint of Manhattan,” and the first phase—a 2-GW version—will be completed by 2030.</p><p>Though the project’s stated 5-GW scale is the largest among its peers, it’s just one of several dozen similar projects now underway. According to Michael Guckes, chief economist at construction-software company <a href="https://www.constructconnect.com/preconstruction-software?campaign=21011210878&group=161161401080&target=kwd-337013613104&matchtype=e&creative=760058507701&device=c&se_kw=constructconnect&utm_medium=ppc&utm_campaign=CC+Brand+2&utm_term=constructconnect&utm_source=adwords&hsa_ad=760058507701&hsa_kw=constructconnect&hsa_net=adwords&hsa_tgt=kwd-337013613104&hsa_grp=161161401080&hsa_src=g&hsa_ver=3&hsa_cam=21011210878&hsa_mt=e&hsa_acc=3324869874&gad_source=1&gad_campaignid=21011210878&gbraid=0AAAAADccs_biRlt8tR8-qu3h7Kja1Tzte&gclid=CjwKCAiA3-3KBhBiEiwA2x7FdCQc4sQOa0YZVFnCW9RF1tGkH2hDiowNrjM587XsXAv6Fb7Sdr1hgBoCNjEQAvD_BwE" rel="noopener noreferrer" target="_blank">ConstructConnect</a>, spending on data centers topped US $27 billion by July of 2025 and, once the full-year figures are tallied, will easily exceed $60 billion. 
Hyperion alone accounts for about a quarter of that.</p><p>For the engineers assigned to bring these projects to life, the mix of challenges involved represents a unique moment. The world’s largest tech companies are opening their wallets to pay for new innovations in compute, cooling, and <a data-linked-post="2674861846" href="https://spectrum.ieee.org/nvidia-rubin-networking" target="_blank">network</a> technology designed to operate at a scale that would’ve seemed absurd five years ago.</p><p>At the same time, the breakneck pace of building comes paired with serious problems. Modern data-center construction frequently requires an influx of temporary workers and sharply increases noise, traffic, pollution, and often local electricity prices. And the environmental toll remains a concern long after facilities are built due to the unprecedented 24/7 energy demands of AI data centers which, according to one recent study, <a href="https://www.nature.com/articles/s41893-025-01681-y" rel="noopener noreferrer" target="_blank">could emit the equivalent of tens of millions of tonnes of CO<span><sub>2</sub></span> annually</a> in the United States alone.</p><p>Regardless of these issues, large AI companies, and the engineers they hire, are going full steam ahead on giant data-center construction. So, what does it really take to build an unprecedentedly large data center?</p><h2>AI Rewrites Building Design</h2><p>The stereotypical data-center building rests on a reinforced concrete slab foundation. That’s paired with a steel skeleton and poured concrete wall panels. The finished building is called a “shell,” a term that implies the structure itself is a secondary concern. Meta has <a href="https://www.datacenterdynamics.com/en/news/meta-brings-data-centers-in-tents-to-gallatin-tennessee/" target="_blank">even used gigantic tents</a> to throw up temporary data centers.</p><p>Still, the scale of the largest AI data centers brings unique challenges. 
“The biggest challenge is often what’s under the surface. Unstable, corrosive, or expansive soils can lead to delays and require serious intervention,” says <a href="https://www.jacobs.com/our-people/meet-bob-haley" target="_blank">Robert Haley</a>, vice president at construction consulting firm <a href="https://www.jacobs.com/" target="_blank">Jacobs</a>.<a href="https://www.stantec.com/en/people/c/carter-amanda" target="_blank"> Amanda Carter</a>, a senior technical lead at <a href="https://www.stantec.com/en" target="_blank">Stantec</a>, says a soil’s thermal conductivity is also important, as most electrical infrastructure is placed underground. “If the soil has high thermal resistivity, it’s going to be difficult to dissipate [heat].” Engineers may take hundreds or thousands of soil samples before construction can begin.</p><h3>GPUs</h3><br/><img alt="Yellow microchip icon on a black background." class="rm-shortcode" data-rm-shortcode-id="9612db5baec52cce6fe11d703e52c7bc" data-rm-shortcode-name="rebelmouse-image" id="af54d" loading="lazy" src="https://spectrum.ieee.org/media-library/yellow-microchip-icon-on-a-black-background.png?id=65347639&width=980"/><p>Modern AI data centers often use <em>rack-scale</em> systems, such as the Nvidia GB200 NVL72, which occupy a single data-center rack. Each rack contains 72 GPUs, 36 CPUs, and up to 13.4 terabytes of GPU memory. The racks measure over 2.2 meters tall and weigh over one and a half tonnes, forcing AI data centers to use thicker concrete with more reinforcement to bear the load.</p><p>A single GB200 rack can use up to 120 kilowatts. If Hyperion meets its 5-gigawatt goals, the data-center campus could include over 41,000 rack-scale systems, for a total of more than 3 million GPUs. 
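A quick sanity check of those rack and GPU counts, treating Meta’s full 5-GW figure as IT load. That is a simplification: cooling and power distribution also draw from the total, which is one more reason the real rack count would come in lower.

```python
# Back-of-the-envelope check using the GB200 NVL72 figures cited above.
CAMPUS_POWER_W = 5e9     # Hyperion's planned capacity: 5 GW (all treated as IT load)
RACK_POWER_W = 120e3     # up to 120 kW per GB200 NVL72 rack
GPUS_PER_RACK = 72

racks = CAMPUS_POWER_W / RACK_POWER_W
gpus = racks * GPUS_PER_RACK

print(f"Racks: {racks:,.0f}")   # ≈ 41,667
print(f"GPUs:  {gpus:,.0f}")    # ≈ 3,000,000
```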
The final number of GPUs used by Hyperion is likely to be lower than that, though only because future GPUs will be larger, more capable, and more power-hungry.</p><h3>Money</h3><br/><img alt="Black hand and dollar symbol combined on an orange background." class="rm-shortcode" data-rm-shortcode-id="2ef34f3679a3b3135244243e46ae5630" data-rm-shortcode-name="rebelmouse-image" id="248eb" loading="lazy" src="https://spectrum.ieee.org/media-library/black-hand-and-dollar-symbol-combined-on-an-orange-background.png?id=65347751&width=980"/><p>According to ConstructConnect, spending on data centers neared US $27 billion through July of 2025 and, according to the latest data, will tally close to $60 billion through the end of the year. Meta’s Hyperion project is a big slice of the pie, at $10 billion.</p><p>Data-center spending has become an important prop for the construction industry, which is seeing reduced demand in other areas, such as residential construction and public infrastructure. ConstructConnect’s third quarter 2025 financial report stated that the quarter’s decline “would have been far more severe without an $11 billion surge in data center starts.”</p><h3></h3><br/><p>There’s apparently no shortage of eligible sites, however, as both the number of data centers under construction and the money spent on them have skyrocketed. The spending has allowed companies building data centers to throw out the rule book. Prior to the AI boom, most data centers relied on tried-and-true designs that prioritized inexpensive and efficient construction. Big tech’s willingness to spend has shifted the focus to speed and scale.</p><p>The loose purse strings open the door to larger and more robust prefabricated concrete wall and floor panels. 
<a href="https://www.linkedin.com/in/dougbevier/" target="_blank">Doug Bevier</a>, director of development at <a href="https://www.clarkpacific.com/" rel="noopener noreferrer" target="_blank">Clark Pacific</a>, says some concrete floor panels may now span up to 23 meters and need to handle floor loads up to 3,000 kilograms per square meter, <a href="https://codes.iccsafe.org/s/IBC2018/chapter-16-structural-design/IBC2018-Ch16-Sec1607.1" rel="noopener noreferrer" target="_blank">which is more than twice the load international building codes normally define for manufacturing and industry</a>. In some cases, the concrete panels must be custom-made for a project, an expensive step that the economics of pre-AI data centers rarely justified.</p><p>Simultaneously, the time scale for projects is also compressed: <a href="https://www.linkedin.com/in/jamiemcgrath365/" rel="noopener noreferrer" target="_blank">Jamie McGrath</a>, senior vice president of data-center operations at<a href="https://www.crusoe.ai/" rel="noopener noreferrer" target="_blank"> Crusoe</a>, says the company is delivering projects in “about 12 months,” compared to 30 to 36 months before. Not all projects are proceeding at that pace, but speed is universally a priority.</p><p>That makes it difficult to coordinate the labor and materials required. Meta’s Hyperion site, located in rural Richland Parish, Louisiana, is emblematic of this challenge. <a href="https://www.nola.com/news/business/meta-louisiana-ai-data-center/article_77f553ff-c272-4e6c-a775-60bbbee0b065.html" rel="noopener noreferrer" target="_blank">As reported by NOLA.com</a>, at least 5,000 temporary workers have flocked to the area, which has only about 20,000 permanent residents. 
These <a href="https://www.wsj.com/business/data-centers-are-a-gold-rush-for-construction-workers-6e3c5ce0?st=jr1y94" rel="noopener noreferrer" target="_blank">workers earn above-average wages</a> and bring a short-term boost for some local businesses, such as restaurants and convenience stores. However, they have also spurred complaints from residents about traffic and construction noise and pollution.</p><p>This friction with residents includes not only these obvious impacts, but <a href="https://youtu.be/DGjj7wDYaiI?si=aZocXHJe0IYUkJcl&t=175" rel="noopener noreferrer" target="_blank">also things you might not immediately suspect</a>, such as light pollution caused by around-the-clock schedules. Also significant are changes to local water tables and runoff, which can reduce water quality for neighbors who rely on well water. These issues have motivated a few U.S. cities <a href="https://www.atlantanewsfirst.com/2025/06/04/atlanta-tightens-restrictions-data-centers-bans-them-some-neighborhoods/" rel="noopener noreferrer" target="_blank">to enact data-center bans</a>.</p><h2>Data Centers Often Go BYOP (bring your own power)</h2><p>Meta’s Richland Parish site also highlights a problem that’s priority No. 1 for both AI data centers and their critics: power.</p><p>Data centers have always drawn large amounts of power, which nudged data-center construction to cluster in hubs where local utilities were responsive to their demands. Virginia’s electric utility, Dominion Energy, met demand with agreements to build new infrastructure, <a href="https://rmi.org/amazon-dominion-virginia-power-reach-breakthrough-renewable-energy-agreement/" rel="noopener noreferrer" target="_blank">often with a focus on renewable energy</a>.</p><p>The power demands of the largest AI data centers, though, have caught even the most responsive utilities off guard. A report from the Lawrence Berkeley National Laboratory, in California, estimated the entire U.S. 
data-center industry <a href="https://eta-publications.lbl.gov/sites/default/files/lbnl-1005775_v2.pdf" rel="noopener noreferrer" target="_blank">consumed an average load of roughly 8 GW of power in 2014</a>. Today, the largest AI data-center campuses are built to handle up to a gigawatt each, and Meta’s Hyperion is projected to require 5 GW.</p><p>“Data centers are exacerbating issues for a lot of utilities,” says <a href="https://www.cleanegroup.org/staff/abbe-ramanan/" rel="noopener noreferrer" target="_blank">Abbe Ramanan</a>, project director at the Clean Energy Group, a Vermont-based nonprofit.</p><p>Ramanan explains that utilities often use “peaker plants” to cope with extra demand. They’re usually older, less efficient fossil-fuel plants which, because of their high cost to operate and carbon output, were due for retirement. But Ramanan says increased electricity demand <a href="https://www.eia.gov/todayinenergy/detail.php?id=61425" rel="noopener noreferrer" target="_blank">has kept them in service</a>.</p><p>Meta secured power for Hyperion by negotiating with Entergy, Louisiana’s electric utility, for construction of three new gas-turbine power plants. Two will be located near the Richland Parish site, while a third will be located in southeast Louisiana.</p><p>Entergy frames the new plants as a win for the state. “A core pillar of Entergy and Meta’s agreement is that Meta pays for the full cost of the utility infrastructure,” says <a href="https://www.linkedin.com/in/daniel-kline-068356ba/" rel="noopener noreferrer" target="_blank">Daniel Kline</a>, director of power-delivery planning and policy at Entergy. 
The utility expects that “customer bills will be lower than they otherwise would have been.” That would prove an exception, as <a href="https://www.bloomberg.com/graphics/2025-ai-data-centers-electricity-prices/?embedded-checkout=true" rel="noopener noreferrer" target="_blank">a recent report from Bloomberg found</a> electricity rates in regions with data centers are more likely to increase than in regions without.</p><h3>CO2</h3><br/><img alt="Diagram of CO2 molecule with black carbon and red oxygen atoms connected by lines." class="rm-shortcode" data-rm-shortcode-id="c9cf38ac7004d413b7fe5b8b577a3d3d" data-rm-shortcode-name="rebelmouse-image" id="3b1b0" loading="lazy" src="https://spectrum.ieee.org/media-library/diagram-of-co2-molecule-with-black-carbon-and-red-oxygen-atoms-connected-by-lines.png?id=65348689&width=980"/><p>Research <a href="https://www.nature.com/articles/s41893-025-01681-y" target="_blank">published in Nature</a> in 2025 projects that data-center emissions will range from 24 million to 44 million CO2-equivalent metric tonnes annually through 2030 in the United States alone. While some materials used in data centers, such as concrete, lead to significant emissions, the majority of these emissions will result from the high energy demands of AI servers.</p><p>Estimating the carbon emissions of Hyperion is difficult, as the project won’t be completed until 2030. Assuming that the three new natural gas plants that are planned for construction as part of the project produce emissions typical for their type, however, the plants could lead to full life-cycle emissions of between 4 million and 10 million metric tons of CO2 annually—roughly equivalent to the annual emissions of a country like <a href="https://www.worldometers.info/co2-emissions/co2-emissions-by-country/" target="_blank">Latvia</a>.</p><h3>Concrete</h3><br/><img alt="Silhouette of a cement truck on an orange background." 
class="rm-shortcode" data-rm-shortcode-id="060b1cd238b9de45274d6766069f3a14" data-rm-shortcode-name="rebelmouse-image" id="e6d68" loading="lazy" src="https://spectrum.ieee.org/media-library/silhouette-of-a-cement-truck-on-an-orange-background.png?id=65348696&width=980"/><p>Data centers are typically built from concrete, with steel used as a skeleton to reinforce and shape the concrete shell. While the foundation is often poured concrete, the walls and floors are most often built from prefabricated concrete panels that can span up to 23 meters. Floors use a reinforced T-shape, similar to a steel girder, measuring up to 1.2 meters across at its thickest point. The largest data centers include hundreds of these concrete panels.</p><p>The American Cement Association projects that the current surge in building<a href="https://mi.cement.org/PDF/Data_Center_Cement_Consumption.pdf" rel="noopener noreferrer" target="_blank"> will require 1 million tonnes of cement over the next three years</a>, though that’s still a tiny fraction of the overall cement industry,<a href="https://d9-wret.s3.us-west-2.amazonaws.com/assets/palladium/production/s3fs-public/media/files/mis-202507-cemen.pdf" rel="noopener noreferrer" target="_blank"> which weighed in at roughly 103 million tonnes in 2024</a>.</p><h3></h3><br/><p>The plants, which will generate a combined 2.26 GW, will use combined-cycle gas turbines that recapture waste heat from exhaust.<a href="https://www.ge.com/news/press-releases/ha-technology-now-available-industry-first-64-percent-efficiency" target="_blank"> This boosts thermal efficiency to 60 percent and beyond,</a> meaning more fuel is converted to useful energy. 
Simple-cycle turbines, by contrast, vent the exhaust, which lowers efficiency to around 40 percent.</p><p>Even so, total life-cycle emissions for the Hyperion plants could range from 4 million to over 10 million tonnes of CO2 each year, depending on how frequently the plants are put in use and the final efficiency benchmarks once built. On the high end, that’s as much CO2 as produced by over 2 million passenger cars. Fortunately, not all of Meta’s data centers take the same approach to power. The company has announced a plan to power Prometheus, a large data-center project in Ohio scheduled to come online before the end of 2026, <a href="https://about.fb.com/news/2026/01/meta-nuclear-energy-projects-power-american-ai-leadership/" target="_blank">with nuclear energy</a>.</p><p>But other big tech companies, spurred by the need to build data centers quickly, are taking a less efficient approach.</p><p>xAI’s Colossus 2, located in Memphis, is the most extreme example. <a href="https://www.climateandcapitalmedia.com/35-gas-turbines-no-permits-elon-musks-dirty-xai-secret/" rel="noopener noreferrer" target="_blank">The company trucked dozens of temporary gas-turbine generators to power the site</a> located in a suburban neighborhood. OpenAI, meanwhile, has gas turbines capable of generating up to 300 megawatts <a href="https://www.timesrecordnews.com/story/news/2025/10/14/water-electricity-concerns-addressed-by-stargate-data-center-leaders-in-abilene-texas/86585222007/" rel="noopener noreferrer" target="_blank">at its new Stargate data center in Abilene, Texas</a>, slated to open later in 2026. 
Both use simple-cycle turbines with a much lower efficiency rating than the combined-cycle plants Entergy will build to power Hyperion.</p><p>Demand for gas turbines is so intense, in fact, that <a href="https://www.spglobal.com/commodity-insights/en/news-research/latest-news/electric-power/052025-us-gas-fired-turbine-wait-times-as-much-as-seven-years-costs-up-sharply" rel="noopener noreferrer" target="_blank">wait times for new turbines are up to seven years</a>. Some data centers <a href="https://spectrum.ieee.org/ai-data-centers" target="_self">are turning toward refurbished jet engines</a> to obtain the turbines they need.</p><h2>AI Racks Tip the Scales</h2><p>The demand for new, reliable power is driven by the power-hungry GPUs inside modern AI data centers.</p><p>In January of 2025, Mark Zuckerberg announced in a post on Facebook that Meta planned to end 2025 <a href="https://techcrunch.com/2025/01/24/mark-zuckerberg-says-meta-will-have-1-3m-gpus-for-ai-by-year-end/" rel="noopener noreferrer" target="_blank">with at least 1.3 million GPUs in service</a>. OpenAI’s Stargate data center <a href="https://www.datacenterdynamics.com/en/news/openai-and-oracle-to-deploy-450000-gb200-gpus-at-stargate-abilene-data-center/" rel="noopener noreferrer" target="_blank">plans to use over 450,000 Nvidia GB200 GPUs</a>, and xAI’s Colossus 2, an expansion of Colossus, <a href="https://www.nextbigfuture.com/2025/09/xai-colossus-2-first-gigawatt-ai-training-data-center.html" rel="noopener noreferrer" target="_blank">is built to accommodate over 550,000 GPUs</a>.</p><p>GPUs, which remain by far the most popular for AI workloads, are bundled into human-scale monoliths of steel and silicon which, much like the data centers built to house them, are rapidly growing in weight, complexity, and power consumption.</p><h3>Memory</h3><br/><img alt="Outlined head with a microchip brain on blue background, symbolizing AI and technology." 
class="rm-shortcode" data-rm-shortcode-id="7cd8d3faff2d24fa591295b9efd9b1ba" data-rm-shortcode-name="rebelmouse-image" id="70372" loading="lazy" src="https://spectrum.ieee.org/media-library/outlined-head-with-a-microchip-brain-on-blue-background-symbolizing-ai-and-technology.png?id=65350865&width=980"/><p>In addition to raw compute performance, Nvidia GB200 NVL72 racks also require huge amounts of memory. An Nvidia GB200 NVL72 rack may include up to 13.4 terabytes of high-bandwidth memory, which implies a data-center campus at Hyperion’s scale will require at least several dozen petabytes.</p><p>The immense demand has sent memory prices soaring:<a href="https://wccftech.com/dram-prices-have-risen-by-a-whopping-172-this-year-alone/" rel="noopener noreferrer" target="_blank"> The price of DRAM, specifically DDR5, has increased 172 percent in 2025</a>.</p><h3>Power</h3><br/><img alt="" class="rm-shortcode" data-rm-shortcode-id="eaf0380400ba03875bf2ee910f35ab5d" data-rm-shortcode-name="rebelmouse-image" id="5bd7d" loading="lazy" src="https://spectrum.ieee.org/media-library/image.png?id=65350873&width=980"/><p>Hyperion is expected to use 5 gigawatts of power across 11 buildings, which works out to just under 500 megawatts per building, assuming each will be similar to its siblings. That’s enough to power roughly 4.2 million U.S. homes.</p><p>Just one Hyperion data center built at the Richland Parish site will consume twice as much power as xAI’s Colossus which, at the time of its completion in the summer of 2024, was among the largest data centers yet built.</p><h3></h3><br/><p>Nvidia’s <a href="https://www.nvidia.com/en-us/data-center/gb200-nvl72/" target="_blank">GB200 NVL72</a>—a rack-scale system—is currently a leading choice for AI data centers. A single GB200 rack contains 72 GPUs, 36 CPUs, and up to 17 terabytes of memory. 
It measures 2.2 meters tall, <a href="https://aivres.com/wp-content/uploads/KRS8000v3.1.pdf" target="_blank">tips the scales at up to </a>1,553 kilograms, and consumes about 120 kilowatts—as much as around 100 U.S. homes. And this, according to Nvidia, is just the beginning. The company anticipates future racks could <a href="https://www.tomshardware.com/tech-industry/nvidia-to-boost-ai-server-racks-to-megawatt-scale-increasing-power-delivery-by-five-times-or-more" target="_blank">consume up to a megawatt each</a>.</p><p><a href="https://www.linkedin.com/in/viktorpetik/?originalSubdomain=hr" target="_blank">Viktor Petik</a>, senior vice president of infrastructure solutions at<a href="https://www.vertiv.com/en-us/" rel="noopener noreferrer" target="_blank"> Vertiv</a>, says the rapid change in rack-scale AI systems has forced data centers to adapt. “AI racks consume far more power and weigh more than their predecessors,” says Petik. He adds that data centers must supply racks with multiple power feeds, without taking up extra space.</p><p>The new power demands from rack-scale systems have consequences that are reflected in the design of the data center—even its footprint.</p><p>In 2022 Meta broke ground on a new data center at a campus in Temple, Texas. According to <a href="https://semianalysis.com/" rel="noopener noreferrer" target="_blank">SemiAnalysis</a>, which studies AI data centers, construction began with the intent <a href="https://newsletter.semianalysis.com/p/datacenter-anatomy-part-1-electrical" rel="noopener noreferrer" target="_blank">to build the data center in an H-shaped configuration common to other Meta data centers</a>.</p><h3>LAND</h3><br/><img alt="Black location pin icon on orange background." 
class="rm-shortcode" data-rm-shortcode-id="a2b2e04f07bd0ed3f60e1f86029497af" data-rm-shortcode-name="rebelmouse-image" id="248cd" loading="lazy" src="https://spectrum.ieee.org/media-library/black-location-pin-icon-on-orange-background.png?id=65351137&width=980"/><h3></h3><br/><p>Meta CEO Mark Zuckerberg kicked off the buzz around Hyperion by saying it would cover a large chunk of Manhattan. Many took that to mean Hyperion would be a single building of that size, which isn’t correct. Hyperion will actually be a cluster of data centers—11 are currently planned—with over 370,000 square meters of floor space. That’s a lot smaller even than New York City’s Central Park, which covers 6 percent of Manhattan.</p><p>Meta has room to grow, however. The Richland Parish site spans 14.7 million square meters in total, which is about a quarter the area of Manhattan. And the 370,000 square meters of floor space Hyperion is expected to provide doesn’t include external infrastructure, such as the three new combined-cycle gas power plants Louisiana utility Entergy is building to power the project.</p><h3></h3><br/><img alt="Map with site layout and regional location in Louisiana, showing roads and distances." class="rm-shortcode" data-rm-shortcode-id="b0cc9253de57aefb96d39a9892c95fe5" data-rm-shortcode-name="rebelmouse-image" id="a41a4" loading="lazy" src="https://spectrum.ieee.org/media-library/map-with-site-layout-and-regional-location-in-louisiana-showing-roads-and-distances.png?id=65352088&width=980"/><h3></h3><br/><p><span>Construction was paused midway in December of 2022, however, </span><a href="https://www.datacenterdynamics.com/en/news/exclusive-after-meta-cancels-odense-data-center-expansion-other-projects-are-being-rescoped/" target="_blank">as part of a company-wide review of its data-center infrastructure</a><span>. Meta decided to knock down the structure it had built and start from scratch. 
The reasons for this decision were never made public, but analysts believe it was due to the old design’s inability to deliver sufficient electricity to new, power-hungry AI racks. Construction resumed in 2023.</span></p><p>Meta’s replacement ditches the H-shaped building for simple, long, rectangular structures, each flanked by rows of gas-turbine generators. While Meta’s plans are subject to change, Hyperion is currently expected to comprise 11 rectangular data centers, each packed with hundreds of thousands of GPUs, spread across the 13.6-square-kilometer Richland Parish campus.</p><h2>Cooling, and Connecting, at Scale</h2><p>Nvidia’s ultradense AI GPU racks are changing data centers not only with their weight, and power draw, but also with their intense cooling and bandwidth requirements.</p><p>Data centers traditionally use air cooling, but that approach has reached its limits. “Air as a cooling medium is inherently inferior,” says<a href="https://cde.nus.edu.sg/me/staff/lee-poh-seng/" target="_blank"> Poh Seng Lee</a>, head of <a href="https://blog.nus.edu.sg/coolestlab/" rel="noopener noreferrer" target="_blank">CoolestLAB</a>, a cooling research group at the National University of Singapore.</p><p>Instead, going forward, GPUs will rely on liquid cooling. However, that adds a new layer of complexity. “It’s all the way to the facilities level,” says Lee. “You need pumps, which we call a coolant distribution unit. The CDU will be connected to racks using an elaborate piping network. And it needs to be designed for redundancy.” On the rack, pipes connect to cold plates mounted atop every GPU; outside the data-center shell, pipes route through evaporation cooling units. Lee says retrofitting an air-cooled data center is possible but expensive.</p><p>The networking used by AI data centers is also changing to cope with new requirements. Traditional data centers were positioned near network hubs for easy access to the global internet. 
AI data centers, though, are more concerned with networks of GPUs.</p><p>These connections must sustain high bandwidth with impeccable reliability. Mark Bieberich, a vice president at network infrastructure company Ciena, says its latest fiber-optic transceiver technology,<a href="https://www.ciena.com/products/wavelogic/wavelogic-6" rel="noopener noreferrer" target="_blank"> WaveLogic 6</a>, can provide up to 1.6 terabits per second of bandwidth per wavelength. A single fiber can support 48 wavelengths in total, and Ciena’s largest customers have hundreds of fiber pairs, placing total bandwidth in the thousands of terabits per second.</p><h3></h3><br/><img alt="a piece of land with a big platform in the middle." class="rm-shortcode" data-rm-shortcode-id="fb6adbcb1ff833934363d6f6ce9cf993" data-rm-shortcode-name="rebelmouse-image" id="63272" loading="lazy" src="https://spectrum.ieee.org/media-library/a-piece-of-land-with-a-big-platform-in-the-middle.jpg?id=65343457&width=980"/><p><span>This is a point where the scale of Meta’s Hyperion, and other large AI data centers, can be deceptive. It seems to imply the physical size of a single data center is what matters. But rather than being a single building,</span><a href="https://datacenters.atmeta.com/richland-parish-data-center/" target="_blank"> Hyperion is actually a set of buildings</a><span> connected by high-speed fiber optics.</span></p><p>“Interconnecting data centers is absolutely essential,” says Bieberich. “You could think about it as one logical AI training facility, but with geographically distributed facilities.” Nvidia has taken to calling this “scale across,” to contrast it with the idea that data centers must “scale up” to larger singular buildings.</p><h2>The Big but Hazy Future</h2><p>The full scale of the challenges that face Hyperion, and other future AI data centers of similar scale, remains hazy. Nvidia has yet to introduce the rack-scale AI GPU systems it will host. How much power will it demand?
What type of cooling will it require? How much bandwidth must be provided? These can only be estimated.</p><p>In the absence of details, the gravity of AI data-center design is pulled toward one certainty: It must be big. New data-center designers are rewriting their rule book to handle power, cooling, and network infrastructure at a scale that would’ve seemed ridiculous five years ago.</p><p>This innovation is fueled by big tech’s fat wallet, which shelled out tens of billions of dollars in 2025 alone, leading to<a href="https://hbr.org/2025/10/is-ai-a-boom-or-a-bubble" target="_blank"> questions about whether the spending is sustainable</a>. For the engineers in the trenches of data-center design, though, it’s viewed as an opportunity to make the impossible possible.</p><p> “I tell my engineers, this is peak. We’re being engineers. We’re being asked complicated questions,” says Stantec’s Carter. “We haven’t got to do that in a long time.” <span class="ieee-end-mark"></span></p><p><em>This article appears in the April 2026 print issue.</em></p>]]></description><pubDate>Tue, 24 Mar 2026 15:00:05 +0000</pubDate><guid>https://spectrum.ieee.org/5gw-data-center</guid><category>Ai</category><category>Power</category><category>Construction</category><category>Data-centers</category><category>Type-cover</category><dc:creator>Matthew S. Smith</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/construction-symbols-on-yellow-background.png?id=65356154&amp;width=980"></media:content></item><item><title>The Coming Drone-War Inflection in Ukraine</title><link>https://spectrum.ieee.org/autonomous-drone-warfare</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/person-holding-a-large-drone-outdoors-under-a-sunny-partly-cloudy-sky.jpg?id=65327386&width=1200&height=600&coordinates=0%2C166%2C0%2C167"/><br/><br/><p><strong>WHEN</strong><strong> </strong><strong>KYIV-BORN</strong><strong> </strong><strong>ENGINEER </strong><a href="https://www.instagram.com/yaroslavazhnyuk/?hl=en" rel="noopener noreferrer" target="_blank">Yaroslav Azhnyuk</a> thinks about the future, his mind conjures up dystopian images. He talks about “swarms of autonomous drones carrying other autonomous drones to protect them against autonomous drones, which are trying to intercept them, controlled by <a href="https://spectrum.ieee.org/ai-agents" target="_self">AI</a> <a href="https://spectrum.ieee.org/ai-agents" target="_self">agents</a> overseen by a human general somewhere.” He also imagines flotillas of autonomous submarines, each carrying hundreds of drones, suddenly emerging off the coast of California or Great Britain and discharging their cargoes en masse to the sky.</p><p>“How do you protect from that?” he asks as we speak in late December 2025; me at my quiet home office in London, he in Kyiv, which is bracing for another wave of <a href="https://spectrum.ieee.org/ukraine-air-defense" target="_self">missile attacks</a>.</p><p>Azhnyuk is not an alarmist. He cofounded and was formerly CEO of <a href="https://petcube.com/" rel="noopener noreferrer" target="_blank">Petcube</a>, a California-based company that uses smart cameras and an app to let pet owners keep an eye on their beloved creatures left alone at home. 
A self-described “liberal guy who didn’t even receive military training,” Azhnyuk changed his mind about developing military tech in the months following the <a href="https://commonslibrary.parliament.uk/research-briefings/cbp-9847/" rel="noopener noreferrer" target="_blank">Russian invasion of</a> <a href="https://commonslibrary.parliament.uk/research-briefings/cbp-9847/" rel="noopener noreferrer" target="_blank">Ukraine</a> in February 2022. By 2023, he had relinquished his CEO role at Petcube to do what many Ukrainian technologists have done—to help defend his country against a mightier aggressor.</p><p>It took a while for him to figure out what, exactly, he should be doing. He didn’t join the military, but through friends on the front line, he witnessed how, out of desperation, Ukrainian troops turned to off-the-shelf consumer drones to make up for their country’s lack of artillery.</p><p>Ukrainian troops first began using drones for battlefield surveillance, but within a few months they figured out how to strap explosives onto them and turn them into effective, <a href="https://spectrum.ieee.org/ukraine-hackers-war" target="_self">low-cost killing</a> <a href="https://spectrum.ieee.org/ukraine-hackers-war" target="_self">machines</a>. Little did they know they were fomenting a revolution in warfare.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Group observes a drone demonstration indoors, with a presenter explaining features." class="rm-shortcode" data-rm-shortcode-id="bfc4f902e7ae9ffa663bf3bcc8ff144c" data-rm-shortcode-name="rebelmouse-image" id="cc3bb" loading="lazy" src="https://spectrum.ieee.org/media-library/group-observes-a-drone-demonstration-indoors-with-a-presenter-explaining-features.jpg?id=65341730&width=980"/></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Compact black camera module with textured surface and orange ribbon cable on white background." 
class="rm-shortcode" data-rm-shortcode-id="e904e39e8ac7797c354a205ed343d150" data-rm-shortcode-name="rebelmouse-image" id="4d58e" loading="lazy" src="https://spectrum.ieee.org/media-library/compact-black-camera-module-with-textured-surface-and-orange-ribbon-cable-on-white-background.jpg?id=65341726&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">The Ukrainian robotics company The Fourth Law produces an autonomy module [above] that uses optics and AI to guide a drone to its target. Yaroslav Azhnyuk [top, in light shirt], founder and CEO of The Fourth Law, describes a developmental drone with autonomous capabilities to Ukrainian President Volodymyr Zelenskyy and German Chancellor Olaf Scholz.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Top: THE PRESIDENTIAL OFFICE OF UKRAINE; Bottom: THE FOURTH LAW</small></p><p>That revolution was on display last month, as the U.S. and Israel went to war with Iran. It soon became clear that attack drones are being extensively used by both sides. Iran, for example, is relying heavily on the Shahed drones that the country invented and that are now also being manufactured in Russia and launched by the thousands every month against Ukraine.</p><p>A thorough analysis of the Middle East conflict <span>will take some time to emerge. And so to understand the direction of this new way of war, look to Ukraine, where its next phase—autonomy—is already starting to come into view. Outnumbered by the Russians and facing increasingly sophisticated jamming and spoofing aimed at causing the drones to veer off course or fall out of the sky, Ukrainian technologists realized as early as 2023 that what could really win the war was autonomy. 
Autonomous operation means a drone isn’t being flown by a remote pilot, and therefore there’s no communications link to that pilot that can be severed or spoofed to render the drone useless.</span></p><p>In late 2023, <a href="https://www.linkedin.com/in/yaroslavazhnyuk/?locale=uk" target="_blank">Azhnyuk</a> set out to help make that vision a reality. He founded two companies, <a href="https://thefourthlaw.ai/blog/funding-products-video" target="_blank">The Fourth Law</a> and <a href="https://oddsystems.io/en/" target="_blank">Odd Systems</a>, the first to develop AI algorithms to help drones overcome jamming during final approach, the second to build thermal cameras to help those drones better sense their <span>surroundings.</span></p><p>“I moved from making devices that throw treats to dogs to making devices that throw explosives on Russian occupants,” Azhnyuk quips.</p><p>Since then, The Fourth Law has dispatched “more than thousands” of <a href="https://thefourthlaw.ai/#section3" target="_blank">autonomy modules</a> to troops in eastern Ukraine (it declines to give a more specific figure), which can be retrofitted on existing drones to take over navigation during the final <span>approach to the target. Azhnyuk says the autonomy modules, worth around US $50, increase the drone-strike success rate to as much as four times that of purely operator-controlled drones.</span></p><p>And that is just the beginning. Azhnyuk is one of thousands of developers, including some <span>who </span>relocated from Western countries, who are applying their skills and other resources to advancing the drone technology that is the defining characteristic <span>of the war in Ukraine. 
This eclectic group of startups and founders includes </span><a href="https://en.wikipedia.org/wiki/Eric_Schmidt" target="_blank">Eric Schmidt</a>, the former <a href="https://about.google/company-info/" target="_blank">Google</a> CEO, whose company <a href="https://epravda.com.ua/oborona/milyarder-ta-ekskerivnik-google-robit-droni-dlya-ukrajini-shcho-nim-ruhaye-809495/" target="_blank">Swift Beat</a> is churning out autonomous <a href="https://www.nytimes.com/2025/12/31/magazine/ukraine-ai-drones-war-russia.html" target="_blank">drones and modules for Ukrainian forces</a>. The frenetic pace of tech development is helping a scrappy, innovative underdog hold at bay a much larger and better-equipped foe.</p><p>All of this development is careening toward AI-based systems that enable drones to navigate by recognizing features in the terrain, lock on to and chase targets without an operator’s guidance, and eventually exchange information with each other through mesh networks, forming self-organizing robotic kamikaze swarms. Such an attack swarm would be commanded by a single operator from a safe distance.</p><p><span>According to some reports, autonomous swarming technology is also being developed <a href="https://www.usni.org/magazines/proceedings/2025/may/step-step-ukraine-built-technological-navy" target="_blank">for sea drones</a>. 
Ukraine has had some notable <span>successes with sea drones, which have reportedly</span> </span><span>destroyed or damaged </span><a href="https://en.usm.media/sbu-naval-drones-hit-11-russian-ships-and-vessels-details/" target="_blank">around a dozen</a><span> Russian vessels.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Hand holding a drone with six rotors, outdoors against a blue sky." class="rm-shortcode" data-rm-shortcode-id="90f30978c5ba0e77e9b1873c155131d2" data-rm-shortcode-name="rebelmouse-image" id="7cf11" loading="lazy" src="https://spectrum.ieee.org/media-library/hand-holding-a-drone-with-six-rotors-outdoors-against-a-blue-sky.jpg?id=65341722&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">The Skynode X system, from Auterion, provides a degree of autonomy to a drone.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">AUTERION</small></p><p>For Ukraine, swarming can solve a major problem that puts the nation at a disadvantage against Russia—the lack of personnel. Autonomy is “the single most impactful defense technology of this century,” says Azhnyuk. “The moment this happens, you <span>shift from a manpower challenge to a production challenge, which is much more manageable,” he adds.</span></p><p>The autonomous warfare future envisioned by Azhnyuk and others is not yet a reality. But <a href="https://www.linkedin.com/in/marcclange/?skipRedirect=true" target="_blank">Marc Lange</a>, a German defense analyst and business strategist, believes that “an inflection point” is already in view. Beyond it, “things will be so dramatically different,” he says.</p><p>“Ukraine pretty rapidly realized that if the operator-to-drone ratio can be shifted from one-to-one to one-to-many, that creates great economies of scale and an amazing cost exchange ratio,” Lange adds. 
“The moment one operator can launch 100, 50, or even just 20 drones at once, this completely changes the economics of the war.”</p><h2>Drones With a View </h2><p>For a while, jammers that sever the radio links between drones and <span>operators or that spoof GPS receivers were able to provide fairly reliable defense against human-controlled first-person-view attack drones (FPVs). But as autonomous navigation progressed, those electronic shields have gradually become less effective. Defenders must now contend with unjammable drones—ones that are attached to hair-thin optical fibers or that are capable of </span><a href="https://spectrum.ieee.org/ukraine-killer-drones" target="_self">finding</a> <a href="https://spectrum.ieee.org/ukraine-killer-drones" target="_self">their way to their targets</a> without external guidance. In this emerging struggle, the defenders’ track records aren’t very encouraging: The typical countermeasure is to try to shoot down the attacking drone with a service weapon. It’s rarely successful.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Truck on rural road covered with camouflage netting, trees and fields in the background." class="rm-shortcode" data-rm-shortcode-id="7c7af1e395cf35752b367f8dd54130fc" data-rm-shortcode-name="rebelmouse-image" id="58155" loading="lazy" src="https://spectrum.ieee.org/media-library/truck-on-rural-road-covered-with-camouflage-netting-trees-and-fields-in-the-background.jpg?id=65341708&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">A truck outfitted with signal-jamming gear drives under antidrone nets near Oleksandriya, in eastern Ukraine, on 2 October 2025.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">ED JONES/AFP/GETTY IMAGES</small></p><p>“The attackers gain an immense advantage from unmanned systems,” says Lange. “You can have a drone pop up from anywhere and it can wreak havoc. 
But from autonomy, they gain even more.”</p><p>The self-navigating drones rely on image-recognition algorithms that have been around for over a decade, says Lange. And the mass deployments of drones on Ukrainian battlefields are enabling both Russian and Ukrainian technologists to create <a href="https://www.reuters.com/technology/ukraine-collects-vast-war-data-trove-train-ai-models-2024-12-20/" target="_blank">huge datasets</a> that improve the training and precision of those AI algorithms.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Six-wheeled robotic vehicle with mounted equipment in a grassy field." class="rm-shortcode" data-rm-shortcode-id="caa0a697b2d5752603687ac7f0278581" data-rm-shortcode-name="rebelmouse-image" id="1c591" loading="lazy" src="https://spectrum.ieee.org/media-library/six-wheeled-robotic-vehicle-with-mounted-equipment-in-a-grassy-field.jpg?id=65341706&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">A Ukrainian land robot, the Ravlyk, can be outfitted with a machine gun.</small></p><p>While uncrewed aerial vehicles (UAVs) have received the most attention, the Ukrainian military is also deploying dozens of different kinds of drones on land and sea. Ukraine, struggling with the shortage of infantry personnel, began working on replacing a portion of human soldiers with wheeled ground robots in 2024. As of early 2026, thousands of ground robots are crawling across the gray zone along the front line in Eastern Ukraine. Most are used to deliver supplies to the front line or to help evacuate the wounded, but some “killer” ground robots fitted with turrets and remotely controlled machine guns have also been tested.</p><p>In mid-February, Ukrainian authorities released a video of a Ukrainian ground robot using its thermal camera to detect a Russian soldier in the dark of the night and then kill the invader with a round from a heavy machine gun. 
So far these robots are mostly controlled <span>by a human operator, but the makers of these uncrewed ground vehicles say their systems are capable of basic autonomous operations, such as returning to base when radio connection is lost. The goal is to enable them to swarm so that one operator controls not one, but a whole herd of mesh-connected killer robots.</span></p><p>But <a href="https://www.hudson.org/experts/1303-bryan-clark" target="_blank">Bryan <span>Clark</span></a>, senior fellow and <span>director of the Center for Defense Concepts and Technology at the </span><a href="https://www.hudson.org/" target="_blank">Hudson Institute</a>, questions how quickly ground robots’ abilities can progress. “Ground environments are very difficult to navigate in because of the terrain you have to address,” he says. “The line of sight for the sensors on the ground vehicles is really constrained because of terrain, whereas an air vehicle can see everything around it.”</p><p>To achieve autonomy, <a href="https://spectrum.ieee.org/sea-drone" target="_self">maritime drones</a>, too, will require <span>naviga</span><span>tional approaches beyond AI-based image recognition, possibly based on star positions or electronic signals from radios and cell towers that are within reach, says Clark. Such technologies are still being developed or are in a relatively early operational stage.</span></p><h2>How the Shaheds Got Better</h2><p>Russia is not lagging behind. In fact, some analysts believe its autonomous systems may be slightly ahead of Ukraine’s. For a good example of the Russian military’s rapid <span>evolu</span><span>tion, they say, consider the long-range Iranian-designed Shahed drones. Since 2022, Russia has been using them to attack Ukrainian cities and other targets hundreds of kilometers from the front line. 
“At the beginning, Shaheds just had a frame, a </span><span>motor, and an inertial navigation system,” </span><a href="https://www.linkedin.com/in/oleksii-solntsev-aa0b72189?originalSubdomain=ua" target="_blank">Oleksii</a><span> </span><a href="https://www.linkedin.com/in/oleksii-solntsev-aa0b72189?originalSubdomain=ua" target="_blank">Solntsev</a><span>, CEO of Ukrainian defense tech startup MaXon Systems, tells me. “They used to be imprecise and pretty stupid. But they are becoming more and more autonomous.” Solntsev founded MaXon </span><span>Systems in late 2024 to help protect Ukrainian civil</span><span>ians from the growing threat of Shahed </span><span>raids.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Silhouette of a triangular drone flying in the sky." class="rm-shortcode" data-rm-shortcode-id="a9c89e21028ccf85e20a49ecead8309f" data-rm-shortcode-name="rebelmouse-image" id="72159" loading="lazy" src="https://spectrum.ieee.org/media-library/silhouette-of-a-triangular-drone-flying-in-the-sky.jpg?id=65341701&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">A Russian Geran-2 drone, based on the Iranian Shahed-136, flies over Kyiv during an attack on 27 December 2025.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">SERGEI SUPINSKY/AFP/GETTY IMAGES</small></p><p>First produced <a href="https://www.adaptinstitute.org/from-tehran-to-alabuga-the-evolution-of-shahed-drones-into-russias-strategic-asset/26/09/2025/" target="_blank">in Iran in the 2010s</a>, Shaheds can <span>carry 90-kilogram warheads </span><a href="https://isis-online.org/isis-reports/alabugas-shahed-136-geran-2-warheads-a-dangerous-escalation" target="_blank">up to 650 km</a> (50-kg warheads can go twice as far). 
<a href="https://www.csis.org/analysis/calculating-cost-effectiveness-russias-drone-strikes" target="_blank">They cost around $35,000 per unit</a><span>, compared to a couple of million dollars, at least, for a ballistic missile. The low cost </span><span>allows Russia to manufacture Shaheds in high quantities, unleashing entire fleets onto </span><a href="https://isis-online.org/isis-reports/a-comprehensive-analytical-review-of-russian-shahed-type-uavs-deployment-against-ukraine-in-2025" target="_blank">Ukrainian cities</a><span> </span><a href="https://isis-online.org/isis-reports/a-comprehensive-analytical-review-of-russian-shahed-type-uavs-deployment-against-ukraine-in-2025" target="_blank">and infrastructure almost every night</a><span>.</span></p><p>The early Shaheds were able to reach a preprogrammed location based on satellite-navigation coordinates. Even one of these early models could frequently overcome the jamming of satellite-navigation signals with the help of an onboard inertial navigation unit. This was essentially a dead-reckoning system of accelerometers and gyroscopes that estimates the drone’s position from continual measurements of its motions.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Silhouette of person with large equipment under a starry night sky." 
class="rm-shortcode" data-rm-shortcode-id="37186ec06b71203ba4f30db497507797" data-rm-shortcode-name="rebelmouse-image" id="1aca7" loading="lazy" src="https://spectrum.ieee.org/media-library/silhouette-of-person-with-large-equipment-under-a-starry-night-sky.jpg?id=65341699&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">In the Donetsk Region, on 15 August 2025, a Ukrainian soldier hunts for Shaheds and other drones with a thermal-imaging system attached to a ZU-23 23-millimeter antiaircraft gun.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">KOSTYANTYN LIBEROV/LIBKOS/GETTY IMAGES</small></p><p>Ukrainian defense forces learned to down Shaheds with heavy machine guns, but as Russia continued to innovate, the daily onslaughts started to become <a href="https://euromaidanpress.com/2025/06/29/why-cant-ukraine-stop-russias-shahed-drones-anymore/" target="_blank">increasingly effective</a>.</p><p>Today’s Shaheds fly faster and higher, and therefore are more difficult to detect and take down. Between January 2024 and August 2025, the number of Shaheds and Shahed-type attack drones launched by Russia into Ukraine per month <a href="https://united24media.com/war-in-ukraine/why-russias-shahed-drones-are-now-deadlier-and-harder-than-ever-to-stop-11693" target="_blank">increased more than tenfold</a>, from 334 to more than 4,000. 
In 2025, Ukraine found <a href="https://www.unmannedairspace.info/counter-uas-systems-and-policies/recently-downed-russian-shahed-demonstrates-new-levels-of-autonomous-capability/" target="_blank">AI-enabling Nvidia chipsets in the wreckage of Shaheds</a>, as well as thermal-vision modules capable of locking onto targets at night.</p><p>“Now, they are interconnected, which allows them to exchange information with each other,” Solntsev says. “They also have cameras that allow them to autonomously navigate to objects. Soon they will be able to tell each other to avoid a <span>jammed</span> <span>region or an area where one of them got </span><span>intercepted.”</span></p><p>These Russian-manufactured Shaheds, which Russian forces call Geran-2s, are thought to be more capable than the garden-variety Shahed-136s that Iran has lately been launching against targets throughout the Middle East. Even the relatively primitive Shahed-136s have done considerable damage, according to <a href="https://www.theguardian.com/world/2026/mar/02/iran-unleashes-hundreds-of-drones-aimed-at-targets-across-middle-east" target="_blank">press accounts</a>.</p><p>Those Shahed successes may accrue, at least in part, from the fact that the United States and Israel <span>lack Ukraine’s long experience with fending them off. In just two days in early March, upward of a thousand drones, mostly Shaheds, were launched against U.S. 
and Israeli targets, with </span><a href="https://www.theguardian.com/world/2026/mar/02/iran-unleashes-hundreds-of-drones-aimed-at-targets-across-middle-east" target="_blank">hundreds of them reportedly finding their marks</a>.</p><p>One attack, caught on video, shows a Shahed destroying a radar dome at the U.S. Navy base in <span>Manama, Bahrain. U.S. forces were understood to be </span><a href="https://carnegieendowment.org/emissary/2026/03/iran-drones-shahed-us-lessons" target="_blank">attempting to fend off the drones</a> by striking launch platforms, dispatching fighter aircraft to shoot them down, and using some extremely costly air-defense interceptors, including ones meant to down ballistic missiles. On 4 March, <a href="https://www.cnn.com/2026/03/04/politics/us-air-defenses-iran-attack-drones-challenge" target="_blank">CNN reported</a> that in a congressional briefing the day before, top U.S. defense officials, including Secretary of Defense Pete Hegseth, acknowledged that U.S. air defenses weren’t keeping up with the onslaught of Shahed drones.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Broken drone on soil, cylindrical container nearby." 
class="rm-shortcode" data-rm-shortcode-id="769830682ff53a401780108ca11db2b6" data-rm-shortcode-name="rebelmouse-image" id="c9d58" loading="lazy" src="https://spectrum.ieee.org/media-library/broken-drone-on-soil-cylindrical-container-nearby.jpg?id=65341692&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Russian V2U attack drones are outfitted with Nvidia processors and run computer-vision software and AI algorithms to enable the drones to navigate autonomously.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">GUR OF THE MINISTRY OF DEFENSE OF UKRAINE</small></p><p>Russia is also starting to field a newer generation of attack drones. One of these, the V2U, has been used to strike targets in the Sumy region of northeastern Ukraine. <a href="https://euromaidanpress.com/2025/06/09/russias-v2u-drone-uses-ai-for-autonomous-strikes-in-ukraines-sumy-oblast/" target="_blank"><span>The V2U drones</span></a> are outfitted with Nvidia Jetson Orin processors and run <span>computer-vision software and AI algorithms that allow the drones to navigate even where satellite navigation is jammed.</span></p><p>The sale of Nvidia chips to Russia is banned under U.S. sanctions against the country. However, press reports suggest that the chips are getting to Russia <a href="https://www.pravda.com.ua/eng/news/2024/10/28/7481703/" target="_blank">via intermediaries in India</a>.</p><h2>Antidrone Systems Step Up</h2><p>MaXon Systems is one of several companies working to fend off the nightly drone onslaught. Within one year, the company developed and battle-tested a Shahed interception system that hints at the sci-fi future envisioned by Azhnyuk. 
For a system to be capable of reliably defending against autonomous weaponry, it, too, needs to be autonomous.</p><p><span>MaXon’s solution consists of ground turrets scanning the sky with infrared sensors, with additional input from a network of radars that </span><span>detects approaching Shahed drones at distances of, typically, </span><a href="https://en.defence-ua.com/weapon_and_tech/2025_systems_to_shield_kyiv_from_shaheds_new_air_defense_details_from_maxon_where_balloons_carry_interceptor_drones-15499.html" target="_blank">12 to 16</a><span> km. The turrets fire autonomous fixed-wing interceptor drones, fitted with explosive warheads, toward the approaching Shaheds at speeds of nearly 300 km/h. To boost the chances of successful interception, MaXon </span><a href="https://en.defence-ua.com/weapon_and_tech/2025_systems_to_shield_kyiv_from_shaheds_new_air_defense_details_from_maxon_where_balloons_carry_interceptor_drones-15499.html" target="_blank">is also fielding</a><span> an airborne anti-Shahed fortification system consisting of helium-filled </span><a href="https://spectrum.ieee.org/airships-drones-ukraine" target="_self">aerostats</a><span> hovering above the city that dispatch the interceptors from a higher altitude.</span></p><p>“We are trying to increase the level of automation of the system compared to existing solutions,” says Solntsev. “We need automatic <span>detection, automatic takeoff, and automatic mid-track guidance so that we can guide the interceptor before it can itself lock onto the target.”</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Gray drone on display stand, surrounded by military personnel in camouflage uniforms." 
class="rm-shortcode" data-rm-shortcode-id="592b19dbfc4fe9a54033067c6169aeec" data-rm-shortcode-name="rebelmouse-image" id="ab79b" loading="lazy" src="https://spectrum.ieee.org/media-library/gray-drone-on-display-stand-surrounded-by-military-personnel-in-camouflage-uniforms.jpg?id=65341687&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">An interceptor drone, part of the U.S. MEROPS defensive system, is tested in Poland on 18 November 2025.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">WOJTEK RADWANSKI/AFP/GETTY IMAGES</small></p><p>In November 2025, the Ukrainian military announced it had been conducting successful trials of the <a href="https://www.forcesnews.com/nato/bang-your-buck-200m-worth-russian-drones-taken-out-15m-merops-uavs" target="_blank">Merops Shahed drone interceptor</a> system developed by the U.S. startup <a href="https://themerge.co/p/project-eagle" target="_blank">Project Eagle</a>, another of former <span>Google CEO Eric Schmidt’s Ukraine defense ventures. Like the MaXon gear, the system can operate largely autonomously and has so far downed over 1,000 Shaheds.</span></p><h2>What Works in the Lab Doesn’t Necessarily Fly on the Battlefield</h2><p>Despite the progress on both sides, analysts say that <span>the kind of robotic warfare imagined by Azhnyuk won’t be a reality for years.</span></p><p>“The software for drone collaboration is there,” says <a href="https://www.csis.org/people/kateryna-bondar" target="_blank">Kate Bondar</a>, a former policy advisor for the Ukrainian <span>government and currently a research fellow at the U.S. </span><a href="https://www.csis.org/" target="_blank">Center for Strategic and International Studies</a><span>. 
“Drones can fly in labs, but in real life, [the forces] are afraid to deploy them because the risk of a mistake is too high,” she adds.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Two people launching a drone in an open field using a catapult system." class="rm-shortcode" data-rm-shortcode-id="894baf9e936bef6f8c45a0363afac141" data-rm-shortcode-name="rebelmouse-image" id="7c4e9" loading="lazy" src="https://spectrum.ieee.org/media-library/two-people-launching-a-drone-in-an-open-field-using-a-catapult-system.jpg?id=65341682&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Ukrainian soldiers watch a GOR reconnaissance drone take to the sky near Pokrovsk in the Donetsk region, on 10 March 2025.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">ANDRIY DUBCHAK/FRONTLINER/GETTY IMAGES</small></p><p>In Bondar’s view, powerful AI-equipped drones won’t be deployed in large numbers given the current prices for high-end processors and <span>other advanced components. And, she adds, the more autonomous the system needs to be, the more expensive the processors and sensors it must have. “For these cheap attack drones that fly only once, you don’t install a high-resolution camera that [has] the resolution for AI to see properly,” she says. “[You install] the cheapest camera. You don’t </span><span>want expensive chips that can run AI algorithms either. Until we can achieve this balance of technological sophistication, when a system can conduct a mission but at the lowest price possible, it won’t be deployed en masse.”</span></p><p>While existing AI systems are doing a good job recognizing and following large objects like Shaheds or tanks, experts question their ability to reliably distinguish and pursue smaller and more nimble or inconspicuous targets. 
“When we’re getting into more specific questions, like can it distinguish a Russian soldier from a Ukrainian soldier or at least a soldier from a civilian? The answer is no,” says Bondar. “Also, it’s one thing to track a tank, and it’s another to track infantrymen riding buggies and motorcycles that are moving very fast. That’s really challenging for AI to track and strike precisely.”</p><p>Clark, at the Hudson Institute, says that although the AI algorithms used to guide the Russian and <span>Ukrainian drones are “pretty good,” they rely on information provided by sensors that “aren’t good enough.” “You need multiphenomenology sensors that are able to look at infrared and visual and, in some cases, different parts of the infrared spectrum to be able to figure out if something is a decoy or real target,” </span><span>he </span><span>says.</span></p><p><span>German defense analyst Lange agrees that right now, battlefield AI image-recognition systems are too easily fooled. “If you compress reality into a </span><span>2D</span><span> image, a lot of things can be easily camouflaged—like what Russia did recently, when they started drawing birds on the back of their drones,” he <span>says.</span></span></p><h2>Autonomy Remains Elusive on the Ground and at Sea, Too</h2><p>To make Ukraine’s <span>emerging uncrewed ground vehicles (UGVs) equally self-sufficient will be an even greater task, in Clark’s view. Still, </span><span>Bondar expects major advances to materialize within the next several years, even if humans are still going to be part of the decision-making loop.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Military radar equipment in a grassy field." 
class="rm-shortcode" data-rm-shortcode-id="0b36a03b7582535b3d3319d7d9b74c33" data-rm-shortcode-name="rebelmouse-image" id="d65ea" loading="lazy" src="https://spectrum.ieee.org/media-library/military-radar-equipment-in-a-grassy-field.jpg?id=65341671&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">A mobile electronic-warfare system built by PiranhaTech is demonstrated near Kyiv on 21 October 2025.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">DANYLO ANTONIUK/ANADOLU/GETTY IMAGES</small></p><p>“I think in two or three years, we will have pretty good full autonomy, at least in good weather conditions,” she says, referring to aerial drones in partic<span>ular. “Humans will still be in the loop for some years, simply because there are so many unpredictable situations when you need an intervention. We won’t be able to fully rely on the machine for at least another 10 or 15 years.”</span></p><p>Ukrainian defenders are apprehensive about that autonomous future. The boom of drone inno<span>vation has come hand in hand with the development of sophisticated jamming and radio-frequency detection systems. But a lot of that innovation will become obsolete once the pendulum swings away from human control. Ukrainians got their first taste of dealing with unjammable drones in mid-2024, when Russia began rolling out fiber-optic tethered drones. Now they have to brace for a threat on a much larger scale.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Quadcopter drone flying with a fire extinguisher attached in a cloudy sky." 
class="rm-shortcode" data-rm-shortcode-id="70f326221988cb6004338272d1d8dd4d" data-rm-shortcode-name="rebelmouse-image" id="aa25d" loading="lazy" src="https://spectrum.ieee.org/media-library/quadcopter-drone-flying-with-a-fire-extinguisher-attached-in-a-cloudy-sky.jpg?id=65341673&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">An experimental drone is demonstrated at the Brave1 defense-tech incubator in Kyiv.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">DANYLO DUBCHAK/FRONTLINER/GETTY IMAGES</small></p><p>“Today, we have a situation where we have lots of signals on the battlefield, but in the near future, <span>in maybe two to five years, UAVs are not going to be sending any signals,” says Oleksandr Barabash, CTO of </span><a href="https://www.falcons.com.ua/en" target="_blank">Falcons</a>, a Ukrainian startup that has developed a smart radio-frequency detection system capable <span>of revealing precise locations of enemy radio sources such as drones, control stations, and jammers.</span></p><p>Last September, Falcons secured funding from the U.S.-based dual-use tech fund <a href="https://www.greenflag.vc/" target="_blank">Green Flag Ven</a><a href="https://www.greenflag.vc/" target="_blank">tures</a> to scale production of its technology and work toward NATO certification. But Barabash admits that its system, like all technologies fielded in <span>Ukrainian war zones, has an expiration date. Instead of radio-frequency detectors, Barabash thinks, the next R&D push needs to focus on passive radar systems capable of identifying small and fast-moving targets based on the signal from sources like TV towers or radio transmitters that propagate through the environment and are reflected by those moving targets. Passive radars have a significant advantage in the war zone, according to Barabash. 
Since they don’t emit their own signal, they can’t be so easily discovered by the enemy.</span></p><p>“Active radar is emitting signals, so if you are using active radars, you are target No. 1 on the front line,” Barabash says.</p><p><span>Bondar, on the other hand, thinks that the increased onboard compute power needed for AI-controlled drones will, by itself, generate enough electromagnetic radiation to prevent autonomous drones from ever operating completely undetectably.</span></p><p><span>“You can have full autonomy, but you will still have systems onboard that emit electromagnetic radiation or heat that can be detected,” says Bondar. “Batteries emit electromagnetic radiation, motors emit heat, and [that heat can be] visible in infrared from far away. You just need to have the right sensors to be able to identify it in advance.” The takeaway, she adds, is “how capable contemporary detection systems have become and how technically challenging it is to design drones that can reliably operate in the Ukrainian battlefield environment.”</span></p><h2>There Will Be Nowhere to Hide from Autonomous Drones</h2><p>When autonomous drones become a standard weapon <span>of war, their threat will extend far beyond the battlefields of Ukraine. Autonomous turrets and drone-interceptor fortifications might soon dot the perimeter of European cities, particularly in the eastern part of the continent.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Person holding gray drone against a blue sky, preparing to launch it." 
class="rm-shortcode" data-rm-shortcode-id="c480e8fb2bdf2e560c142729e35c7320" data-rm-shortcode-name="rebelmouse-image" id="f9032" loading="lazy" src="https://spectrum.ieee.org/media-library/person-holding-gray-drone-against-a-blue-sky-preparing-to-launch-it.jpg?id=65327903&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">A fixed-wing drone is tested in Ukraine in April 2025.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">ANDREW KRAVCHENKO/BLOOMBERG/GETTY IMAGES</small></p><p>Nefarious actors from all over the world have closely watched Ukraine and taken notes, warns Lange. Today, FPV drones are being used by <a href="https://gnet-research.org/2025/07/30/weaponised-skies-the-expansion-of-terrorist-drone-use-across-africa/" target="_blank">Islamic terrorists in Africa</a> and <a href="https://www.atlanticcouncil.org/blogs/new-atlanticist/drug-cartels-are-adopting-cutting-edge-drone-technology-heres-how-the-us-must-adapt/#%3A~%3Atext%3DIf%20confirmed%2C%20this%20would%20suggest%2CUS%20homeland%20security%E2%80%94are%20profound" target="_blank">Mexican drug cartels</a> to fight against local authorities.</p><p>When autonomous killing machines become widely available, it’s likely that no city will be safe. “We might see nets above city centers, protecting civilian streets,” Lange says. “In every case, the West needs to start performing similar kinetic-defense development that we see in Ukraine. Very rapid iteration and testing cycles to find solutions.”</p><p>Azhnyuk is concerned that the historic defenders of Europe—the <span>United States and the European countries themselves—are falling behind. “We are in danger,” he says. 
While Russia and Ukraine made major strides in their drones and countermeasures over the past year, “Europe and the United States have progressed, in the best-case scenario, from the winter-of-2022 technology to the summer-of-2022 technology.</span></p><p>“The gap is getting wider,” he warns. “I think the next few years are very dangerous for the security of Europe.” <span class="ieee-end-mark"></span></p><p><em>This article appears in the April 2026 print issue as “Rise of the <span>AUTONOMOUS </span>Attack Drones.”</em></p>]]></description><pubDate>Tue, 24 Mar 2026 13:00:05 +0000</pubDate><guid>https://spectrum.ieee.org/autonomous-drone-warfare</guid><category>Military-robots</category><category>Military-drones</category><category>Drone-war</category><category>Shahed-drones</category><category>Ai-agents</category><dc:creator>Tereza Pultarova</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/person-holding-a-large-drone-outdoors-under-a-sunny-partly-cloudy-sky.jpg?id=65327386&amp;width=980"></media:content></item><item><title>Remembering IEEE Power &amp; Energy Society Leader Mel Olken</title><link>https://spectrum.ieee.org/in-memoriam-march-2026-remembering</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-adult-white-woman-with-short-brown-hair-smiling-and-shaking-hands-with-an-older-white-man-while-receiving-an-award.jpg?id=65341623&width=1200&height=600&coordinates=0%2C250%2C0%2C250"/><br/><br/><h2>Mel Olken</h2><p>Former executive director of the IEEE Power & Energy Society</p><p>Fellow, 92; died 9 January</p><p>Olken became the first executive director of the <a href="https://ieee-pes.org/" rel="noopener noreferrer" target="_blank">IEEE Power & Energy Society</a> (PES) in 1995. In 2002 he left the position to serve as founding editor in chief of the society’s <a href="https://ieee-pes.org/publication-item/power-energy-magazine/" rel="noopener noreferrer" target="_blank"><em>Power & Energy Magazine</em></a>. Olken led the publication until 2016, when he retired.</p><p>After receiving a bachelor’s degree in engineering from the <a href="https://www.ccny.cuny.edu/?srsltid=AfmBOopCvH6eSvBfUYKD5FUuofKgmij7k0i5ekpVX8CdRpBYYFMlhLWM" rel="noopener noreferrer" target="_blank">City College of New York</a>, Olken was hired as an electrical engineer by <a href="https://www.aep.com/" rel="noopener noreferrer" target="_blank">American Electric Power</a>, a utility based in Columbus, Ohio. He helped design coal, hydroelectric, and <a data-linked-post="2674407523" href="https://spectrum.ieee.org/80-billion-us-nuclear-power" target="_blank">nuclear power plants</a>. While at AEP, he was promoted to manager of the electrical generation department.</p><p>He joined IEEE in 1958 and became a <a href="https://spectrum.ieee.org/ieee-student-scholarship-boost" target="_blank">PES</a> member in 1973. 
An active volunteer, he chaired the society’s <a href="https://ieee-pes.org/technical-activities/committees/energy-development-power-generation-committee-edpg/" rel="noopener noreferrer" target="_blank">energy development and power generation committee</a> and its <a href="https://ieee-pes.org/technical-activities/technical-council/" rel="noopener noreferrer" target="_blank">technical council</a>.</p><p>Olken was elected an IEEE Fellow in 1988 for “contributions to innovative design of reliable generating stations.”</p><p>He became an IEEE staff member in 1984 as society services director for <a href="https://ta.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Technical Activities</a>. From 1990 to 1995 he served as managing director of Regional Activities group (now <a href="https://www.ieee.org/communities/geographic-activities" rel="noopener noreferrer" target="_blank">IEEE Member and Geographic Activities</a>), before becoming PES executive director.</p><p>He received a PES <a href="https://ieee-pes.org/about-pes/awards-scholarships/ieee-power-energy-society-lifetime-achievement-award/" rel="noopener noreferrer" target="_blank">Lifetime Achievement Award</a> in 2012 for his “broad and sustained technical contributions to the development of power engineering and the power engineering profession.”</p><h2>Stephanie A. Huguenin</h2><p>Research scientist</p><p>IEEE member, 48; died 1 October</p><p>Huguenin was an administrative assistant in the <a href="https://www.augusta.edu/scimath/physics/" rel="noopener noreferrer" target="_blank">physics and biophysics department</a> at <a href="https://www.augusta.edu/" rel="noopener noreferrer" target="_blank">Augusta University</a>, in Georgia. 
According to her Augusta <a href="https://www.legacy.com/us/obituaries/theaugustapress/name/stephanie-huguenin-obituary" rel="noopener noreferrer" target="_blank">obituary</a>, she died of an illness acquired during her volunteer work in India.</p><p>She received a bachelor’s degree in engineering in 1999 from the <a href="https://www.usnews.com/best-colleges/college-of-charleston-3428" rel="noopener noreferrer" target="_blank">College of Charleston</a>, in South Carolina. During her senior year, she worked as a mathematics and science tutor at the Jenkins Orphanage (now the <a href="https://en.wikipedia.org/wiki/Jenkins_Orphanage" rel="noopener noreferrer" target="_blank">Jenkins Institute for Children</a>), in North Charleston. After graduating, Huguenin traveled to India to volunteer at an orphanage run by the <a href="https://motherteresafoundation.org/" rel="noopener noreferrer" target="_blank">Mother Teresa Foundation</a>.</p><p>Upon returning to the United States in 2001, Huguenin worked as a freelance research consultant. Three years later she was hired as a systems administrator and archivist by photographer <a href="https://ebetroberts.com/" rel="noopener noreferrer" target="_blank">Ebet Roberts</a> in New York City. In 2010 she left to work as an operations strategist and technical consultant.</p><p>She earned a master’s degree in communication and research science in 2016 from <a href="https://www.nyu.edu/" rel="noopener noreferrer" target="_blank">New York University</a>. While at NYU, she conducted experimental and theoretical research in Internet Protocol design and implementation as well as network security and management.</p><p>From 2020 to 2024 she was a research scientist at businesses owned by her family. 
She joined Augusta University in 2023.</p><p>She was a member of the <a href="https://www.ieee.org/membership-catalog/productdetail/showProductDetailPage.html?product=MEMGRS029" rel="noopener noreferrer" target="_blank">IEEE Geoscience and Remote Sensing Society</a> and the <a href="https://ieeesystemscouncil.org/ieee-systems-council-welcome" rel="noopener noreferrer" target="_blank">IEEE Systems Council</a>.</p><p>Huguenin volunteered for the <a href="https://www.ietf.org/" rel="noopener noreferrer" target="_blank">Internet Engineering Task Force</a>, a standards development organization, and the <a href="https://www.arin.net/" rel="noopener noreferrer" target="_blank">American Registry for Internet Numbers</a>. ARIN manages and distributes internet number resources such as IP addresses and autonomous system numbers.</p><p>The nonprofits she supported included the <a href="https://coastalconservationleague.org/" rel="noopener noreferrer" target="_blank">Coastal Conservation League</a>, the <a href="https://longleafalliance.org/" rel="noopener noreferrer" target="_blank">Longleaf Alliance</a>, the <a href="https://lowcountrylandtrust.org/" rel="noopener noreferrer" target="_blank">Lowcountry Land Trust</a>, the <a href="https://www.nature.org/en-us/" rel="noopener noreferrer" target="_blank">Nature Conservancy</a>, and <a href="https://www.womenindefense.net/" rel="noopener noreferrer" target="_blank">Women in Defense</a>.</p>]]></description><pubDate>Mon, 23 Mar 2026 18:00:05 +0000</pubDate><guid>https://spectrum.ieee.org/in-memoriam-march-2026-remembering</guid><category>Ieee-member-news</category><category>In-memoriam</category><category>Obituary</category><category>Ieee-power-energy-society</category><category>Type-ti</category><dc:creator>Amanda Davis</dc:creator><media:content medium="image" type="image/jpeg" 
url="https://spectrum.ieee.org/media-library/an-adult-white-woman-with-short-brown-hair-smiling-and-shaking-hands-with-an-older-white-man-while-receiving-an-award.jpg?id=65341623&amp;width=980"></media:content></item><item><title>Transforming Data Science With NVIDIA RTX PRO 6000 Blackwell Workstation Edition</title><link>https://spectrum.ieee.org/nvidia-rtx-pro-6000-pny</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/computer-setup-with-a-monitor-displaying-forest-graphics-keyboard-mouse-and-a-sleek-cpu-design.png?id=65315285&width=1200&height=600&coordinates=0%2C154%2C0%2C154"/><br/><br/><p><em>This is a sponsored article brought to you by <a href="https://www.pny.com/" target="_blank">PNY Technologies</a>.</em></p>In today’s data-driven world, data scientists face mounting challenges in preparing, scaling, and processing massive datasets. Traditional CPU-based systems are no longer sufficient to meet the demands of modern AI and analytics workflows. <a href="https://www.pny.com/nvidia-rtx-pro-6000-blackwell-ws?iscommercial=true&utm_source=IEEE+Spectrum+Blog&utm_medium=RTX+PRO+6000+body&utm_campaign=Blackwell+Workstation&utm_id=RTX+PRO+6000" rel="noopener noreferrer" target="_blank">NVIDIA RTX PRO<sup>TM</sup> 6000 Blackwell Workstation Edition</a> offers a transformative solution, delivering accelerated computing performance and seamless integration into enterprise environments.<h2>Key Challenges for Data Science</h2><ul><li><strong>Data Preparation: </strong>Data preparation is a complex, time-consuming process that takes most of a data scientist’s time.</li><li><strong>Scaling: </strong>Volume of data is growing at a rapid pace. Data scientists may resort to downsampling datasets to make large datasets more manageable, leading to suboptimal results.</li><li><strong>Hardware: </strong>Demand for accelerated AI hardware for data centers and cloud service providers (CSPs) is exceeding supply. Current desktop computing resources may not be suitable for data science workflows.</li></ul><h2>Benefits of RTX PRO-Powered AI Workstations</h2><p>NVIDIA RTX PRO 6000 Blackwell Workstation Edition delivers ultimate acceleration for data science and AI workflows. These powerful and robust workstations enable real-time rendering, rapid prototyping, and seamless collaboration. 
With support for up to four <a href="https://www.pny.com/nvidia-rtx-pro-6000-blackwell-max-q?iscommercial=true&utm_source=IEEE+Spectrum+Blog&utm_medium=RTX+PRO+6000+Blackwell+Max-Q+body&utm_campaign=Blackwell+Workstation&utm_id=RTX+PRO+6000" rel="noopener noreferrer" target="_blank">NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition</a> GPUs, users can achieve data center-level performance right at their desk, making even the most demanding tasks manageable.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="61bf7564ac8304e10487689487367c94" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/jwxxgHsU1jA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">PNY is redefining professional computing with the ‪@NVIDIA‬ RTX PRO 6000 Blackwell Workstation Edition, the most powerful desktop GPU ever built. Engineered for unmatched compute power, massive memory capacity, and breakthrough performance, this cutting-edge solution delivers a quantum leap forward in workflow efficiency, enabling professionals to tackle the most demanding applications with ease.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">PNY</small></p><p>NVIDIA RTX PRO 6000 Blackwell Workstation Edition empowers data scientists to handle massive datasets, perform advanced visualizations, and support multi-user environments without compromise. It’s ideal for organizations scaling up their analytics or running complex models. NVIDIA RTX PRO 6000 Blackwell Workstation Edition is optimized for AI workflows, leveraging the NVIDIA AI software stack, including CUDA-X, and NVIDIA Enterprise software. 
These platforms enable zero-code-change acceleration for Python-based workflows and support over 100 AI-powered applications, streamlining everything from data preparation to model deployment.</p><p>Finally, NVIDIA RTX PRO 6000 Blackwell Workstation Edition offers significant advantages in security and cost control. By offloading compute from the data center and reducing reliance on cloud resources, organizations can lower expenses and keep sensitive data on-premises for enhanced protection.</p><h2>Accelerate Every Step of Your Workflow</h2><p>NVIDIA RTX PRO 6000 Blackwell Workstation Edition is designed to transform the entire data science pipeline, delivering end-to-end acceleration from data preparation to model deployment. With the open-source cuDF library from the NVIDIA CUDA-X data science stack and other GPU-accelerated libraries, data scientists can process massive datasets at lightning speed, often achieving up to 50X faster performance compared to traditional CPU-based tools. This means tasks like cleaning data, managing missing values, and engineering features can be completed in seconds, not hours, allowing teams to focus on extracting insights and building better models.</p><p class="pull-quote">NVIDIA RTX PRO 6000 Blackwell Workstation Edition is designed to transform the entire data science pipeline, delivering end-to-end acceleration from data preparation to model deployment</p><p>Exploratory data analysis is elevated with advanced analytics and interactive visualizations, powered by NVIDIA CUDA-X and PyData libraries. These tools enable users to create expansive, responsive visualizations that enhance understanding and support critical decision-making. When it comes to model training, GPU-accelerated XGBoost slashes training times from weeks to minutes, enabling rapid iteration and faster time to market for AI solutions.</p><p>NVIDIA RTX PRO 6000 Blackwell Workstation Edition streamlines collaboration and scalability. 
With NVIDIA AI Workbench, teams can set up projects, develop, and collaborate seamlessly across desktops, cloud platforms, and data centers. The unified software stack ensures compatibility and robustness, while enterprise-grade hardware maximizes uptime and reliability for demanding workflows.</p><p>By integrating these advanced capabilities, NVIDIA RTX PRO 6000 Blackwell Workstation Edition empowers data scientists to overcome bottlenecks, boost productivity, and drive innovation, making it an essential foundation for modern, enterprise-ready AI development.</p><h2>Performance Benchmarks</h2><p>NVIDIA’s cuDF library offers zero-code-change acceleration for pandas, delivering up to 50X performance gains. For example, a join operation that takes nearly 5 minutes on CPU completes in just 14 seconds on GPU. Advanced group-by operations drop from almost 4 minutes to just 4 seconds.</p><h2>Enterprise-Ready Solutions from PNY</h2><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" rel="float: left;" style="float: left;"> <img alt="Black PNY logo with stylized uppercase letters on a transparent background." class="rm-shortcode" data-rm-shortcode-id="247ffcd9e141f1fc61c5172c5440d97e" data-rm-shortcode-name="rebelmouse-image" id="170af" loading="lazy" src="https://spectrum.ieee.org/media-library/black-pny-logo-with-stylized-uppercase-letters-on-a-transparent-background.png?id=65315393&width=980"/></p><p>Available from leading OEM manufacturers, NVIDIA RTX PRO 6000 Blackwell Workstation Edition Series GPUs are specifically engineered to meet the rigorous demands of enterprise environments. 
These systems incorporate NVIDIA ConnectX networking, now available at PNY, and a comprehensive suite of deployment and support tools, ensuring seamless integration with existing IT infrastructure.</p><p>Designed for scalability, the latest generation of workstations can tackle complex AI development workflows at scale for training, development, or inferencing. Enterprise-grade hardware maximizes uptime and reliability.</p><p><strong>To learn more about NVIDIA RTX PRO™ Blackwell solutions, visit:</strong> <a href="https://www.pny.com/professional/software-solutions/blackwell-architecture?utm_source=IEEE+Spectrum+Blog&utm_medium=Blackwell+Desktop+GPUs+learn+more&utm_campaign=Blackwell+Workstation&utm_id=RTX+PRO+6000" target="_blank">NVIDIA RTX PRO Blackwell | PNY Pro | pny.com</a> or email <a href="mailto:gopny@pny.com" target="_blank">GOPNY@PNY.COM</a></p>]]></description><pubDate>Mon, 23 Mar 2026 13:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/nvidia-rtx-pro-6000-pny</guid><category>Artificial-intelligence</category><category>Computing</category><category>Data-science</category><category>Gpu-acceleration</category><category>Ai-workstations</category><category>Nvidia</category><dc:creator>PNY Technologies</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/computer-setup-with-a-monitor-displaying-forest-graphics-keyboard-mouse-and-a-sleek-cpu-design.png?id=65315285&amp;width=980"></media:content></item><item><title>Why Thermal Metrology Must Evolve for Next-Generation Semiconductors</title><link>https://content.knowledgehub.wiley.com/heat-beneath-the-surface-thermal-metrology-for-advanced-semiconductor-materials-and-architectures/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/laser-thermal-logo-with-stylized-red-l-and-t-on-a-white-background.png?id=65320713&width=980"/><br/><br/><p>An in-depth examination of how rising power density, 3D integration, and novel materials are outpacing legacy thermal measurement — and what advanced metrology must deliver.</p><p><strong>What Attendees Will Learn</strong></p><ol><li><span>Why heat is now the dominant constraint on semiconductor scaling — Explore how heterogeneous integration, 3D stacking, and AI-driven power density have shifted the primary bottleneck from lithography to thermal management, with heat flux projections exceeding 1,000 W/cm² for next-generation accelerators.<br/></span></li><li><span>How extreme material properties are redefining thermal design requirements — Understand the measurement challenges posed by nanoscale thin films where bulk assumptions fail, engineered ultra-high-conductivity materials (diamond, BAs, BNNTs), and devices operating above 200 °C in wide-bandgap systems.</span></li><li><span>Why interfaces and buried layers now govern reliability — Examine how thermal boundary resistance at bonded interfaces, TIM layers, and dielectric stacks has become a first-order reliability accelerator.</span></li><li><span>What a thermal-first design workflow looks like in practice — Learn how measured, scale-appropriate thermal properties can be integrated early in the design cycle to calibrate models, reduce uncertainty, and prevent costly late-stage failures across advanced packaging and 3D architectures.</span></li></ol><div><span><a href="https://content.knowledgehub.wiley.com/heat-beneath-the-surface-thermal-metrology-for-advanced-semiconductor-materials-and-architectures/" target="_blank">Download this free whitepaper now!</a></span></div>]]></description><pubDate>Mon, 23 Mar 2026 10:00:04 
+0000</pubDate><guid>https://content.knowledgehub.wiley.com/heat-beneath-the-surface-thermal-metrology-for-advanced-semiconductor-materials-and-architectures/</guid><category>Semiconductors</category><category>Thermal-management</category><category>Scaling</category><category>Type-whitepaper</category><dc:creator>Laser Thermal</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/65320713/origin.png"></media:content></item><item><title>What Happens If AI Makes Things Too Easy for Us?</title><link>https://spectrum.ieee.org/frictionless-ai-psychology</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/portrait-of-a-young-white-brunette-woman-behind-her-is-a-collage-of-crumpled-paper-balls-and-ai-sparkle-icons.jpg?id=65324044&width=1200&height=600&coordinates=0%2C625%2C0%2C625"/><br/><br/><p>Most people who regularly use AI tools would say they’re making their lives easier. The technology promises to streamline and take over tasks both professionally and personally—whether that’s summarizing documents, drafting deliverables, generating code, or even offering emotional support. But researchers are concerned AI is making some tasks <em>too</em> easy, and that this will come with unexpected costs.</p><p>In a commentary titled <a href="https://www.nature.com/articles/s44271-026-00402-1" rel="noopener noreferrer" target="_blank"><em>Against Frictionless AI</em></a>, published in <em>Communications Psychology</em><span> on 24 February,</span> psychologists from the University of Toronto discuss what might be lost when AI removes too much effort from human activities. Their argument centers on the idea that friction—difficulty, struggle, and even discomfort—plays an important role in learning, motivation, and meaning. Psychological research has long shown that <a href="https://www.media.mit.edu/publications/your-brain-on-chatgpt/" target="_blank">effortful engagement</a> can deepen understanding and strengthen memory, sometimes described as “desirable difficulties.” <strong></strong></p><p>The authors worry that AI systems capable of instantly producing polished answers or highly responsive conversation may bypass these processes of learning and motivation. By prioritizing outcomes over effort, AI could weaken the experiences that help people develop skills, build relationships, and find meaning in their work.</p><p><em>IEEE Spectrum</em> spoke with the paper’s lead author, <a href="https://www.linkedin.com/in/emily-zohar/?originalSubdomain=ca" target="_blank">Emily Zohar</a>, an experimental psychology Ph.D. 
student, about why she and her coauthors (psychologists <a href="https://www.psych.utoronto.ca/people/directories/all-faculty/paul-bloom" target="_blank">Paul Bloom</a> and <a href="https://www.utsc.utoronto.ca/psych/person/michael-inzlicht" target="_blank">Michael Inzlicht</a>) argue that friction matters—and what a more human-centered approach to AI design could look like.</p><p><strong>When you say “friction,” what do you mean, from both a cognitive and an interpersonal standpoint?</strong><br/><br/><strong>Zohar:</strong> We define friction as any difficulty encountered during goal pursuit. In the context of work, it involves mental effort—rumination and persistence, staying on a problem for some time, and this helps solidify the idea and the creative process.</p><p>In relationships, friction involves disagreement, compromise, misunderstanding, a back and forth that is natural where you don’t always see eye to eye, and it helps you broaden your horizons. Even the feeling of loneliness is important. It motivates you to find social interactions. So having these negative feelings and difficulty is important in the social context.</p><p><strong>Given that definition, what do you mean by “frictionless” AI?</strong></p><p><strong>Zohar:</strong> Frictionless AI refers to the excessive removal of effort from cognitive and social tasks. With AI, as we typically use it, it’s really easy to go from ideation right to the end product. You ask AI to solve something with one prompt, and it completes the whole thing. This is a problem because it takes away the intermediate steps that really drive motivation and learning, and it prioritizes outcome over process. Rather than working through the steps, AI does that meaningful work for you.<br/><br/>There’s a lot of research showing <a href="https://arxiv.org/abs/2409.14511" target="_blank">work products</a> are better with AI. 
That makes sense, it has all this knowledge, but it does worry us as it may be eroding something essential that will have long-term consequences. If you’re faced with the same problem and AI is removed, you don’t have the required knowledge to know how to face the problem next time.</p><p><strong>You argue that removing friction can harm learning and relationships. What role do effort and struggle play in human development?</strong></p><p><strong>Zohar: </strong>In learning, the term is “desirable difficulties.” It’s the idea of effort and work, not just any effort but <em>manageable</em> effort. Facing problems that you can overcome, but you have to work at them a bit, that’s the key idea of friction. We don’t want you to face insurmountable problems. We want you to work hard, but still be able to overcome it. This helps you really digest information and learn from it.</p><p>In interpersonal relationships, you have to face some difficulties to see other perspectives and learn from them, and learn to be accepting of others. If you’re used to an AI reinforcing all your ideas and being sycophantic, you’ll come into the real world and you won’t be used to seeing other ideas. You won’t know how to interact socially because you’ll expect people to always be on your side and agree with you. You won’t learn that life doesn’t always go exactly how you expect it to, and conversations don’t always go the way you want them to.</p><h2>AI’s Impact on Creative Processes</h2><p><strong>A lot of technologies have historically aimed to reduce effort: calculators, washing machines, spell-check. What’s different about AI?</strong></p><p><strong>Zohar:</strong> Past technologies have mostly focused on reducing physical effort. We don’t have to go down to the lake to wash our laundry anymore. 
[Past technologies] took away the mundane tasks that weren’t driving our learning and growth, they were just adding unneeded obstacles and taking away time from more important tasks.</p><p>But AI is taking away effort from creative and cognitive processes that drive meaning, motivation, and learning. That’s a key difference, because it’s not taking away friction from tasks that don’t serve us. It’s taking away friction from experiences that are really important and integral to our development.</p><p><strong>Are there contexts where AI is already removing beneficial friction? How might the impacts of reduced friction show up over time?</strong></p><p><strong>Zohar:</strong> One clear example is writing. People increasingly rely on AI to draft everything from emails to essays, removing many instances of beneficial friction. Research shows that people trust responses less when they learn they were written by AI, judge AI-generated products as less creative and less valuable, and have greater difficulty remembering their own work products when they were produced with AI assistance. Outsourcing writing to AI strips away both social and cognitive friction.</p><p><a data-linked-post="2671645555" href="https://spectrum.ieee.org/vibe-coding" target="_blank">Vibe coding</a> is another good example. If you’re a programmer, coding is integral to what drives your meaning. People get meaning out of their work, and if you’re substituting that with AI, it could be detrimental. The negative impact of frictionless AI is that it takes away friction from things that are really important to who you are as a person, and your skills.</p><p>One area I worry about a lot is <a data-linked-post="2656019975" href="https://spectrum.ieee.org/kids-ai" target="_blank">adolescents using AI in general</a>. It’s a really important developmental period to learn and grow and find the path you’ll follow. 
So if you don’t have these effortful interactions with work and relationships that teach you how to think, this will have long-term detrimental impacts. They might not be able to think critically in the same way, because they never had to before. If they’re turning to AI for social relationships at such a young age, that could really erode important skills they should be learning at that age.</p><p><strong>What is productive friction?</strong></p><p><strong>Zohar:</strong> Friction goes along a continuum. With too little friction, you’re not getting learning and motivation. Too much friction and the task becomes overwhelming. Productive friction falls right in the middle, where struggle leads to achievement. It’s effortful but possible, and it requires you to think critically and work on a problem for some time or face some difficulty in the process.</p><p>An example we used in the paper is the difference between taking a chairlift and hiking up a mountain. They both get to the top, but with the chairlift, you don’t get any growth benefits, while the hiker’s climb involves difficulties and a sense of achievement. It becomes much more of an experience and a learning opportunity versus the person who just went up the chairlift effortlessly.</p><p><strong>Do you envision AI that sometimes deliberately slows people down or asks them to do part of the work themselves?</strong></p><p><strong>Zohar:</strong> It’s important in behavioral science to think about the default option, because people don’t usually change their default. So right now, the default in AI is to give you your answer and probe you to keep going down the rabbit hole. But I think we could think about AI in a different way. Maybe we can make the default more constructive. 
Instead of just jumping to the answer, it’s more of a process model where it helps you think about the problem and teaches you along the way, so it’s more collaborative rather than a one-stop shop for the answer.</p><p><strong>How might users of these systems and the companies developing them feel about such a design shift?</strong></p><p><strong>Zohar: </strong>For the makers of these systems, the biggest concern is the pushback. People are used to going in and just getting the answer, and they might be really resistant to a design that makes them work more for it. But it might feed more engagement, because you have to go back and forth and find the answer together.</p><p>Ultimately I think it has to come from the companies making these models, if they think [a more friction-full design] would help people. Friction-full AI is more of a long-term product. It’s hard to say if that would motivate companies to change their models to include moderate friction. But in the long term, I think this would be beneficial.</p>]]></description><pubDate>Sun, 22 Mar 2026 13:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/frictionless-ai-psychology</guid><category>Psychology</category><category>Artificial-intelligence</category><category>Cognitive-science</category><dc:creator>Vanessa Bates Ramirez</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/portrait-of-a-young-white-brunette-woman-behind-her-is-a-collage-of-crumpled-paper-balls-and-ai-sparkle-icons.jpg?id=65324044&amp;width=980"></media:content></item><item><title>Video Friday: Humanoid Learns Tennis Skills Playing Humans</title><link>https://spectrum.ieee.org/tennis-playing-robot</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/robot-playing-tennis-holding-racket-on-green-court-inset-shows-human-opponent-hitting-ball.png?id=65325604&width=1200&height=600&coordinates=0%2C51%2C0%2C52"/><br/><br/><p>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at <em>IEEE Spectrum</em> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please <a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a> for inclusion.</p><h5><a href="https://2026.ieee-icra.org/">ICRA 2026</a>: 1–5 June 2026, VIENNA</h5><h5><a href="https://mrs.fel.cvut.cz/summer-school-2026/">Summer School on Multi-Robot Systems</a>: 29 July–4 August 2026, PRAGUE</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="23zsarayx6o"><em>Human athletes demonstrate versatile and highly dynamic tennis skills to successfully conduct competitive rallies with a high-speed tennis ball. However, reproducing such behaviors on humanoid robots is difficult, partially due to the lack of perfect humanoid action data or human kinematic motion data in tennis scenarios as reference. 
In this work, we propose LATENT, a system that Learns Athletic humanoid TEnnis skills from imperfect human motioN daTa.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="b359b1966adb83fc68515b1a4514b8ca" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/23ZsaraYX6o?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://zzk273.github.io/LATENT/">LATENT</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="cwithpe4hna">A beautifully designed robot inspired by Strandbeests.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="1c60a43596b696ace279c9366e02ecd4" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/CwItHPe4HnA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.cranfield.ac.uk/press/news-2026/wind-powered-robot-could-enable-long-term-exploration-of-hostile-environments">Cranfield University</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="uvqdqf8ppuw"><em>We believe we’re the first robotics company to demonstrate a robot peeling an apple with dual dexterous humanlike hands. 
This breakthrough closes a key gap in robotics, achieving bimanual, contact-rich manipulation and moving far beyond the limits of simple grippers.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="2c50d7039587c10b8f33da57970bff7f" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/UVQdqf8ppuw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><blockquote><em>Today’s AI models (VLMs) are excellent at perception but struggle with action. Controlling high-degree-of-freedom hands for tasks like this is incredibly complex, and precise finger-level teleoperation is nearly impossible for humans.  Our first step was a shared-autonomy system: rather than controlling every finger, the operator triggers prelearned skills like a “rotate apple or tennis ball” primitive via a keyboard press or pedal. This makes scalable data collection and RL training possible.</em><br/><em>How does the AI manage this? We created “<a data-linked-post="2674040994" href="https://spectrum.ieee.org/video-friday-google-gemini-robotics" target="_blank">MoDE-VLA</a>” (Mixture of Dexterous Experts). It fuses vision, language, force, and touch data by using a team of specialist “experts,” making control in high-dimensional spaces stable and effective.  The combination of these two innovations allows for seamless, contact-rich manipulation. 
The human provides high-level guidance, and the robot executes the complex in-hand coordination required.</em></blockquote><p>[ <a href="https://www.sharpa.com/">Sharpa</a> ]</p><p>Thanks, Alex!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="pczsnnwxvia"><em>It was great to see our name amongst the other “AI Native” companies during the <a data-linked-post="2676218078" href="https://spectrum.ieee.org/nvidia-groq-3" target="_blank">NVIDIA GTC</a> keynote. NVIDIA Isaac Lab helps us train reinforcement learning policies that enable the UMV to drive, jump, flip, and hop like a pro.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="7b935f0fe975b31f175c1f1fb07566e0" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/pcZSNNWXviA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://rai-inst.com/">Robotics and AI Institute</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="iojvnq-zhww"><em>This Finger-Tip Changer technology was jointly researched and developed through a collaboration between Tesollo and RoCogMan LaB at Hanyang University ERICA. The project integrates Tesollo’s practical robotic hand development experience with the lab’s expertise in robotic manipulation and gripper design.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="02d553395b82e93112b8f1739a601bd4" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/iojvNQ-Zhww?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>I don’t know why more robots don’t do this. 
Also, those pointy fingertips are terrifying.</p><p>[ <a href="http://bmr.hanyang.ac.kr/">RoCogMan LaB</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="z55m_um_7fq">Here’s an upcoming ICRA paper from the Fluent Robotics Lab at the University of Michigan featuring an operational <a data-linked-post="2650254910" href="https://spectrum.ieee.org/this-is-what-pr2s-do-for-fun" target="_blank">PR2</a>! With functional batteries!!!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="df662d906aa6b4c85644b271ad7a281c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/z55M_um_7fQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://fluentrobotics.com/">Fluent Robotics Lab</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="9qzctmarvpk"><em>This video showcases the field tests and interaction capabilities of KAIST Humanoid v0.7, developed at the DRCD Lab featuring in-house actuators. 
The control policy was trained through deep reinforcement learning leveraging human demonstrations.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="6868cb35447265d5d8ab10642b15acd5" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/9qZcTMARvpk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://dynamicrobot.kaist.ac.kr/">KAIST DRCD Lab</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="_wnckaf2gb8">This needs to come in adult size.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="489e194d2beb7942474b8da6039ec082" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/_WNckAf2GB8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.deeprobotics.cn/en">Deep Robotics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="k5wgpmenpcq">I did not know this, but apparently shoeboxes are really annoying to manipulate because if you grab them by the lid, they just open, so specialized hardware is required.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="b2f884b6d81248335c4efbff6414e328" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/k5WGpMENPCQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://nomagic.ai/news/zalando-to-install-up-to-50-ai-powered-nomagic-robots/">Nomagic</a> ]</p><p>Thanks, 
Gilmarie!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="clfpxcpza14"><em>This paper presents a method to recover quadrotor Unmanned Air Vehicles (UAVs) from a throw, when no control parameters are known before the throw.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="da02ec67edcf7a40100d406b105b468a" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/CLFPXcpzA14?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://ieeexplore.ieee.org/document/10801514">MAVLab</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="pmetcxgumhm">Uh-oh, robots can see glass doors now. We’re in trouble.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="31ecd9975c0baef1553d7e3372c79b98" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/pMeTCxGumhM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.limxdynamics.com/en/products/oli">LimX Dynamics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="pshyocgoc5u">This drone hugs trees <3</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="12d5d406c1777d91e696c722d9f0fba1" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/pSHYocGOC5U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://slap-perching.github.io/">Stanford BDML</a> ]</p><div 
class="horizontal-rule"></div><blockquote class="rm-anchors" id="afviggntkm8"><em>Electronic waste is one of the fastest-growing environmental problems in the world. As robotics and electronic systems become more widespread, their environmental footprint continues to increase. In this research, scientists developed a fully biodegradable soft robotic system that integrates electronic devices, sensors, and actuators yet completely decomposes after use.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="75a180ef9157983647255f5588abe215" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/AFVIGgntKm8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.nature.com/articles/s41893-026-01780-4">Nature</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="yhyvrk9wce8"><em>We developed a distributed algorithm that enables multiple aerial robots to flock together safely in complex environments, without explicit communication or prior knowledge of the surroundings, using only onboard sensors and computation. Our approach ensures collision avoidance, maintains proximity between robots, and handles uncertainties (tracking errors and sensor noise). 
Tested in simulations and real-world experiments with up to four drones in a dense forest, it proved robust and reliable.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="0ff57e0c9dc071bc6306ca0c3798c944" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/yHyvrk9WCE8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://mrs.fel.cvut.cz/rbl">RBL</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="b3v-ylwcaee"><em>The University of Pennsylvania’s 2025 President’s Sustainability Prize winner Piotr Lazarek has developed a system that uses satellite data to pinpoint inefficiencies in farmers’ fields, conducts real-time soil analysis with autonomous drones to understand why they occur, and generates precise fertilizer application maps. His startup Nirby aims to increase productivity in farm areas that are underperforming and reduce fertilizer in high-performing ones.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="796256fd6880d5e76310d5685661fa67" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/b3v-yLwcAEE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://penntoday.upenn.edu/news/2025-penn-presidents-sustainability-prize-recipient-nirby">University of Pennsylvania</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="wl0-pu_8f0u"><em>The production version of Atlas is a departure from the typical humanoid form factor, favoring industrial utility over human likeness. 
Intended for purposeful work in an industrial setting, Atlas has a form factor that signals its role as a machine rather than a companion or friendly assistant. Join two lead hardware engineers and our head of industrial design for a technical discussion of how key product requirements, ranging from passive thermal management to a modular architecture, dictated a bold new vision for a humanoid.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="cce82a9b133af0d383e29e75c54cb937" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/wL0-Pu_8F0U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://bostondynamics.com/blog/atlas-evolution-from-research-robot-to-industrial-humanoid/">Boston Dynamics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="cmbbkd46z48"><em>Dr. 
Christian Hubicki gives a talk exploring the common themes of modern robotics research and his time on the reality competition show, “Survivor.”</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="04cf1a709c7b176620b8d56b2629431a" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/CmBbkd46Z48?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.optimalroboticslab.com/">Optimal Robotics Lab</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Sat, 21 Mar 2026 16:30:04 +0000</pubDate><guid>https://spectrum.ieee.org/tennis-playing-robot</guid><category>Humanoid-robots</category><category>Video-friday</category><category>Robot-locomotion</category><category>Nvidia</category><category>Robot-manipulation</category><category>Quadruped-robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/robot-playing-tennis-holding-racket-on-green-court-inset-shows-human-opponent-hitting-ball.png?id=65325604&amp;width=980"></media:content></item><item><title>Andrew Ng: Unbiggen AI</title><link>https://spectrum.ieee.org/andrew-ng-data-centric-ai</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/andrew-ng-listens-during-the-power-of-data-sooner-than-you-think-global-technology-conference-in-brooklyn-new-york-on-wednes.jpg?id=29206806&width=1200&height=600&coordinates=0%2C0%2C0%2C632"/><br/><br/><p><strong><a href="https://en.wikipedia.org/wiki/Andrew_Ng" rel="noopener noreferrer" target="_blank">Andrew Ng</a> has serious street cred</strong> in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at <a href="https://stanfordmlgroup.github.io/" rel="noopener noreferrer" target="_blank">Stanford University</a>, cofounded <a href="https://research.google/teams/brain/" rel="noopener noreferrer" target="_blank">Google Brain</a> in 2011, and then served for three years as chief scientist for <a href="https://ir.baidu.com/" rel="noopener noreferrer" target="_blank">Baidu</a>, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told <em>IEEE Spectrum</em> in an exclusive Q&A.</p><hr/><p>
	Ng’s current efforts are focused on his company 
	<a href="https://landing.ai/about/" rel="noopener noreferrer" target="_blank">Landing AI</a>, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the <a href="https://www.youtube.com/watch?v=06-AZXmwHjo" target="_blank">data-centric AI movement</a>, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.
</p><p>
	Andrew Ng on...
</p><ul>
<li><a href="#big">What’s next for really big models</a></li>
<li><a href="#career">The career advice he didn’t listen to</a></li>
<li><a href="#defining">Defining the data-centric AI movement</a></li>
<li><a href="#synthetic">Synthetic data</a></li>
<li><a href="#work">Why Landing AI asks its customers to do the work</a></li>
</ul><p>
<strong>The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an <a href="https://spectrum.ieee.org/deep-learning-computational-cost" target="_self">unsustainable trajectory</a>. Do you agree that it can’t go on that way?</strong>
</p><p>
<strong>Andrew Ng: </strong>This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.
</p><p>
<strong>When you say you want a foundation model for computer vision, what do you mean by that?</strong>
</p><p>
<strong>Ng:</strong> This is a term coined by <a href="https://cs.stanford.edu/~pliang/" rel="noopener noreferrer" target="_blank">Percy Liang</a> and <a href="https://crfm.stanford.edu/" rel="noopener noreferrer" target="_blank">some of my friends at Stanford</a> to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, <a href="https://spectrum.ieee.org/open-ais-powerful-text-generating-tool-is-ready-for-business" target="_self">GPT-3</a> is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.
</p><p>
<strong>What needs to happen for someone to build a foundation model for video?</strong>
</p><p>
<strong>Ng:</strong> I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.
</p><p>
	Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.
</p><p>
<strong>It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.</strong>
</p><p>
<strong>Ng: </strong>Over a decade ago, when I proposed starting the <a href="https://research.google/teams/brain/" rel="noopener noreferrer" target="_blank">Google Brain</a> project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.
</p><p class="pull-quote">
	“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”<br/>
	—Andrew Ng, CEO & Founder, Landing AI
</p><p>
	I remember when my students and I published the first 
	<a href="https://nips.cc/" rel="noopener noreferrer" target="_blank">NeurIPS</a> workshop paper advocating using <a href="https://developer.nvidia.com/cuda-zone" rel="noopener noreferrer" target="_blank">CUDA</a>, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.
</p><p>
<strong>I expect they’re both convinced now.</strong>
</p><p>
<strong>Ng:</strong> I think so, yes.
</p><p>
	Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”
</p><p>
<a href="#top">Back to top</a>
</p><p>
<strong>How do you define data-centric AI, and why do you consider it a movement?</strong>
</p><p>
<strong>Ng:</strong> Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.
</p><p>
	When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and turn them into a systematic engineering discipline.
</p><p>
	The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a 
	<a href="https://neurips.cc/virtual/2021/workshop/21860" rel="noopener noreferrer" target="_blank">data-centric AI workshop at NeurIPS</a>, and I was really delighted at the number of authors and presenters that showed up.
</p><p>
<strong>You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?</strong>
</p><p>
<strong>Ng: </strong>You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.
</p><p>
<strong>When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?</strong>
</p><p>
<strong>Ng: </strong>Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of <a href="https://developers.arcgis.com/python/guide/how-retinanet-works/" rel="noopener noreferrer" target="_blank">RetinaNet</a>. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.
</p><p class="pull-quote">
	“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”<br/>
	—Andrew Ng
</p><p>
	For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
</p><p>
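The relabeling workflow Ng describes can be illustrated with a short sketch: gather every annotator’s label for each image and flag the images where annotators disagree, so those get reviewed first. (The function and data below are illustrative, not Landing AI’s actual tooling.)
</p>

```python
# Flag examples whose annotators disagree, so they can be relabeled first.
# Illustrative sketch only -- not Landing AI's actual tooling.
from collections import Counter

def flag_inconsistent(labels_by_image):
    """Return image IDs whose annotators did not agree on a single label."""
    flagged = []
    for image_id, labels in labels_by_image.items():
        counts = Counter(labels)
        # Unanimous agreement means exactly one distinct label.
        if len(counts) > 1:
            flagged.append(image_id)
    return flagged

annotations = {
    "img_001": ["scratch", "scratch", "scratch"],  # consistent
    "img_002": ["scratch", "dent", "scratch"],     # inconsistent -> review
    "img_003": ["pit_mark", "pit_mark"],           # consistent
}
print(flag_inconsistent(annotations))  # ['img_002']
```

<p>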
<strong>Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?</strong>
</p><p>
<strong>Ng:</strong> Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, <a href="https://www.cs.princeton.edu/~olgarus/" rel="noopener noreferrer" target="_blank">Olga Russakovsky</a> gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed <a href="https://neurips.cc/virtual/2021/invited-talk/22281" rel="noopener noreferrer" target="_blank">Mary Gray’s presentation,</a> which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like <a href="https://www.microsoft.com/en-us/research/project/datasheets-for-datasets/" rel="noopener noreferrer" target="_blank">Datasheets for Datasets</a> also seem like an important piece of the puzzle.
</p><p>
	One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.
</p><p>
<strong>When you talk about engineering the data, what do you mean exactly?</strong>
</p><p>
<strong>Ng: </strong>In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a <a href="https://jupyter.org/" rel="noopener noreferrer" target="_blank">Jupyter notebook</a> and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.
</p><p>
	For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
</p><p>
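The car-noise anecdote is an instance of error analysis by data slice: measure the error rate per condition, then collect data only for the worst-performing slice. A minimal sketch, with made-up conditions and counts:
</p>

```python
# Error analysis by data slice: find the condition with the worst error rate
# so data collection can be targeted there. Numbers are made up for illustration.

def error_rate_by_slice(records):
    """records: list of (condition, correct: bool) pairs."""
    totals, errors = {}, {}
    for condition, correct in records:
        totals[condition] = totals.get(condition, 0) + 1
        if not correct:
            errors[condition] = errors.get(condition, 0) + 1
    return {c: errors.get(c, 0) / totals[c] for c in totals}

results = [("quiet", True)] * 95 + [("quiet", False)] * 5 \
        + [("car_noise", True)] * 70 + [("car_noise", False)] * 30
rates = error_rate_by_slice(results)
worst = max(rates, key=rates.get)
print(worst, rates[worst])  # car_noise 0.3
```

<p>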
<a href="#top">Back to top</a>
</p><p>
<strong>What about using synthetic data, is that often a good solution?</strong>
</p><p>
<strong>Ng: </strong>I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, <a href="https://tensorlab.cms.caltech.edu/users/anima/" rel="noopener noreferrer" target="_blank">Anima Anandkumar</a> gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.
</p><p>
<strong>Do you mean that synthetic data would allow you to try the model on more data sets?</strong>
</p><p>
<strong>Ng: </strong>Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
</p><p class="pull-quote">
	“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”<br/>
	—Andrew Ng
</p><p>
	Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
</p><p>
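As a rough sketch of the “simpler tools first” point, a rare class can often be expanded with basic augmentation before reaching for full synthetic generation. The example below uses plain NumPy for illustration; a real pipeline would use an imaging library.
</p>

```python
# Minimal augmentation sketch: expand an underrepresented class with
# random flips and brightness jitter. Illustrative only.
import numpy as np

def augment(image, rng):
    """Return a randomly flipped, brightness-jittered copy of an image array."""
    out = image.copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)                             # horizontal flip
    out = np.clip(out * rng.uniform(0.8, 1.2), 0, 255)   # brightness jitter
    return out

rng = np.random.default_rng(0)
rare_class = [np.full((32, 32), 128.0)]  # stand-in for the rare-defect images
expanded = [augment(img, rng) for img in rare_class for _ in range(10)]
print(len(expanded))  # 10
```

<p>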
<a href="#top">Back to top</a>
</p><p>
<strong>To make these issues more concrete, can you walk me through an example? When a company approaches <a href="https://landing.ai/" rel="noopener noreferrer" target="_blank">Landing AI</a> and says it has a problem with visual inspection, how do you onboard them and work toward deployment?</strong>
</p><p>
<strong>Ng: </strong>When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the <a href="https://landing.ai/platform/" rel="noopener noreferrer" target="_blank">LandingLens</a> platform. We often advise them on the methodology of data-centric AI and help them label the data.
</p><p>
	One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.
</p><p>
<strong>How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?</strong>
</p><p>
<strong>Ng:</strong> It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
</p><p>
	In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?
</p><p>
<strong>So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.</strong>
</p><p>
<strong>Ng: </strong>Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.
</p><p>
<strong>Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?</strong>
</p><p>
<strong>Ng: </strong>In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.
</p><p>
<a href="#top">Back to top</a>
</p><p><em>This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist</em><em>.”</em></p>]]></description><pubDate>Wed, 09 Feb 2022 15:31:12 +0000</pubDate><guid>https://spectrum.ieee.org/andrew-ng-data-centric-ai</guid><category>Deep-learning</category><category>Artificial-intelligence</category><category>Andrew-ng</category><category>Type-cover</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/andrew-ng-listens-during-the-power-of-data-sooner-than-you-think-global-technology-conference-in-brooklyn-new-york-on-wednes.jpg?id=29206806&amp;width=980"></media:content></item><item><title>How AI Will Change Chip Design</title><link>https://spectrum.ieee.org/ai-chip-design-matlab</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/layered-rendering-of-colorful-semiconductor-wafers-with-a-bright-white-light-sitting-on-one.jpg?id=29285079&width=1200&height=600&coordinates=0%2C250%2C0%2C250"/><br/><br/><p>The end of <a href="https://spectrum.ieee.org/on-beyond-moores-law-4-new-laws-of-computing" target="_self">Moore’s Law</a> is looming. Engineers and designers can do only so much to <a href="https://spectrum.ieee.org/ibm-introduces-the-worlds-first-2nm-node-chip" target="_self">miniaturize transistors</a> and <a href="https://spectrum.ieee.org/cerebras-giant-ai-chip-now-has-a-trillions-more-transistors" target="_self">pack as many of them as possible into chips</a>. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.</p><p>Samsung, for instance, is <a href="https://spectrum.ieee.org/processing-in-dram-accelerates-ai" target="_self">adding AI to its memory chips</a> to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has <a href="https://spectrum.ieee.org/heres-how-googles-tpu-v4-ai-chip-stacked-up-in-training-tests" target="_self">doubled its processing power</a> compared with that of  its previous version.</p><p>But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with <a href="https://www.linkedin.com/in/heather-gorr-phd" rel="noopener noreferrer" target="_blank">Heather Gorr</a>, senior product manager for <a href="https://www.mathworks.com/" rel="noopener noreferrer" target="_blank">MathWorks</a>’ MATLAB platform.</p><p><strong>How is AI currently being used to design the next generation of chips?</strong></p><p><strong>Heather Gorr:</strong> AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. 
There are a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-resized-container rm-resized-container-25 rm-float-left" data-rm-resized-container="25%" style="float: left;">
<img alt="Portrait of a woman with blonde-red hair smiling at the camera" class="rm-shortcode rm-resized-image" data-rm-shortcode-id="1f18a02ccaf51f5c766af2ebc4af18e1" data-rm-shortcode-name="rebelmouse-image" id="2dc00" loading="lazy" src="https://spectrum.ieee.org/media-library/portrait-of-a-woman-with-blonde-red-hair-smiling-at-the-camera.jpg?id=29288554&width=980" style="max-width: 100%"/>
<small class="image-media media-caption" placeholder="Add Photo Caption..." style="max-width: 100%;">Heather Gorr</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..." style="max-width: 100%;">MathWorks</small></p><p>Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see  something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.</p><p><strong>What are the benefits of using AI for chip design?</strong></p><p><strong>Gorr:</strong> Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a <a href="https://en.wikipedia.org/wiki/Model_order_reduction" rel="noopener noreferrer" target="_blank">reduced order model</a>, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your <a href="https://www.ibm.com/cloud/learn/monte-carlo-simulation" rel="noopener noreferrer" target="_blank">Monte Carlo simulations</a> using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. 
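</p><p>
The surrogate workflow Gorr describes can be sketched in a few lines: fit a cheap model to a handful of runs of the expensive one, then do the Monte Carlo sweep on the surrogate. (The “physics” function and numbers below are stand-ins, not a real device model.)
</p>

```python
# Surrogate-model sketch: fit a cheap polynomial to a few runs of an
# "expensive" model, then run the Monte Carlo sweep on the surrogate.
# The physics function here is a stand-in, not a real device model.
import numpy as np

def expensive_model(x):
    return np.sin(3 * x) + 0.5 * x**2  # pretend each call is costly

# A handful of expensive runs...
x_train = np.linspace(-1, 1, 15)
y_train = expensive_model(x_train)

# ...fit a low-degree polynomial surrogate...
surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=6)

# ...then do a large Monte Carlo sweep on the cheap surrogate.
rng = np.random.default_rng(1)
samples = rng.uniform(-1, 1, 100_000)
estimate = surrogate(samples).mean()   # many cheap evaluations
exact = expensive_model(samples).mean()
print(f"surrogate MC mean: {estimate:.4f}, expensive-model mean: {exact:.4f}")
```

<p>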
So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.</p><p><strong>So it’s like having a digital twin in a sense?</strong></p><p><strong>Gorr:</strong> Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.</p><p><strong>So, it’s going to be more efficient and, as you said, cheaper?</strong></p><p><strong>Gorr:</strong> Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.</p><p><strong>We’ve talked about the benefits. How about the drawbacks?</strong></p><p><strong>Gorr: </strong>The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.</p><p>Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.</p><p>One of the other things to think about too is that you need the data to build the models. 
You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.</p><p><strong>How can engineers use AI to better prepare and extract insights from hardware or sensor data?</strong></p><p><strong>Gorr: </strong>We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.</p><p>One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on <a href="https://github.com/" rel="noopener noreferrer" target="_blank">GitHub</a> or <a href="https://www.mathworks.com/matlabcentral/" rel="noopener noreferrer" target="_blank">MATLAB Central</a>, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.</p><p><strong>What should engineers and designers consider when using AI for chip design?</strong></p><p><strong>Gorr:</strong> Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. 
Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.</p><p><strong>How do you think AI will affect chip designers’ jobs?</strong></p><p><strong>Gorr:</strong> It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.</p><p><strong>How do you envision the future of AI and chip design?</strong></p><p><strong>Gorr</strong><strong>:</strong> It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. 
We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.</p>]]></description><pubDate>Tue, 08 Feb 2022 14:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/ai-chip-design-matlab</guid><category>Chip-fabrication</category><category>Matlab</category><category>Moores-law</category><category>Chip-design</category><category>Ai</category><category>Digital-twins</category><dc:creator>Rina Diane Caballar</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/layered-rendering-of-colorful-semiconductor-wafers-with-a-bright-white-light-sitting-on-one.jpg?id=29285079&amp;width=980"></media:content></item><item><title>Atomically Thin Materials Significantly Shrink Qubits</title><link>https://spectrum.ieee.org/2d-hbn-qubit</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-golden-square-package-holds-a-small-processor-sitting-on-top-is-a-metal-square-with-mit-etched-into-it.jpg?id=29281587&width=1200&height=600&coordinates=0%2C250%2C0%2C250"/><br/><br/><p>Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality.</p><p>IBM has adopted the superconducting qubit road map of <a href="https://spectrum.ieee.org/ibms-envisons-the-road-to-quantum-computing-like-an-apollo-mission" target="_self">reaching a 1,121-qubit processor by 2023</a>, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.</p><p>Now researchers at <a href="https://www.nature.com/articles/s41563-021-01187-w" rel="noopener noreferrer" target="_blank">MIT have been able to both reduce the size of the qubits</a> and do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.</p><p>“We are addressing both qubit miniaturization and quality,” said <a href="https://equs.mit.edu/william-d-oliver/" rel="noopener noreferrer" target="_blank">William Oliver</a>, the director for the <a href="https://cqe.mit.edu/" target="_blank">Center for Quantum Engineering</a> at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient; they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. 
They must go hand in hand.”</p><p>The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.</p><p>Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-resized-container rm-resized-container-25 rm-float-left" data-rm-resized-container="25%" style="float: left;">
<img alt="Golden dilution refrigerator hanging vertically" class="rm-shortcode rm-resized-image" data-rm-shortcode-id="694399af8a1c345e51a695ff73909eda" data-rm-shortcode-name="rebelmouse-image" id="6c615" loading="lazy" src="https://spectrum.ieee.org/media-library/golden-dilution-refrigerator-hanging-vertically.jpg?id=29281593&width=980" style="max-width: 100%"/>
<small class="image-media media-caption" placeholder="Add Photo Caption..." style="max-width: 100%;">Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..." style="max-width: 100%;">Nathan Fiske/MIT</small></p><p>In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.</p><p>As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.</p><p>In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.</p><p>“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author <a href="https://equs.mit.edu/joel-wang/" rel="noopener noreferrer" target="_blank">Joel Wang</a>, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics. </p><p>On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. 
One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.</p><p>While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.</p><p>“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”</p><p>This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.</p><p>“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.</p><p>Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. 
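</p><p>
A back-of-the-envelope parallel-plate estimate illustrates the footprint argument. The capacitance, permittivity, and thickness values below are representative assumptions for illustration, not figures from the MIT paper:
</p>

```python
# Rough parallel-plate estimate of the plate area needed with a thin hBN
# dielectric, versus the ~100 x 100 um coplanar plate quoted in the article.
# All numeric values are illustrative assumptions.
EPS0 = 8.854e-12     # vacuum permittivity, F/m
EPS_R_HBN = 3.4      # approximate relative permittivity of hBN (assumed)
C_TARGET = 100e-15   # ~100 fF, a typical superconducting-qubit shunt capacitance
THICKNESS = 30e-9    # a few tens of nm of stacked hBN monolayers (assumed)

# Parallel-plate capacitor: C = eps0 * eps_r * A / d  =>  A = C * d / (eps0 * eps_r)
area_m2 = C_TARGET * THICKNESS / (EPS0 * EPS_R_HBN)
area_um2 = area_m2 * 1e12
coplanar_um2 = 100 * 100  # lateral plate size quoted for the open-face design

print(round(area_um2))                 # ~100 um^2, i.e. roughly 10 um x 10 um
print(round(coplanar_um2 / area_um2))  # ~100x smaller footprint
```

<p>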
He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.</p>]]></description><pubDate>Mon, 07 Feb 2022 16:12:05 +0000</pubDate><guid>https://spectrum.ieee.org/2d-hbn-qubit</guid><category>Quantum-computing</category><category>2d-materials</category><category>Ibm</category><category>Qubits</category><category>Hexagonal-boron-nitride</category><category>Superconducting-qubits</category><category>Mit</category><dc:creator>Dexter Johnson</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-golden-square-package-holds-a-small-processor-sitting-on-top-is-a-metal-square-with-mit-etched-into-it.jpg?id=29281587&amp;width=980"></media:content></item></channel></rss>