<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>IEEE Spectrum</title><link>https://spectrum.ieee.org/</link><description>IEEE Spectrum</description><atom:link href="https://spectrum.ieee.org/feeds/feed.rss" rel="self"></atom:link><language>en-us</language><lastBuildDate>Fri, 24 Apr 2026 18:01:02 -0000</lastBuildDate><image><url>https://spectrum.ieee.org/media-library/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNjg4NDUyMC9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTgyNjE0MzQzOX0.N7fHdky-KEYicEarB5Y-YGrry7baoW61oxUszI23GV4/image.png?width=210</url><link>https://spectrum.ieee.org/</link><title>IEEE Spectrum</title></image><item><title>Yong Wang Turns Information Into Insights</title><link>https://spectrum.ieee.org/yong-wang-data-visualization</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-chinese-man-speaking-into-a-podium-microphone-while-on-stage.png?id=65835863&width=1245&height=700&coordinates=0%2C187%2C0%2C188"/><br/><br/><p>When <a href="https://yong-wang.org/" rel="noopener noreferrer" target="_blank">Yong Wang</a> recently received one of the highest honors for early-career data visualization researchers, it marked a milestone in an extraordinary journey that began far from the world’s technology hubs.</p><p>Wang was born in a small farming village in southwestern China to parents with little formal education and few electronic devices. Today the IEEE member and associate editor of <a href="https://www.computer.org/csdl/journal/tg" rel="noopener noreferrer" target="_blank"><em><em>IEEE Transactions on Visualization and Computer Graphics</em></em></a><em> </em>is an assistant professor of computing and data science at <a href="https://www.ntu.edu.sg/" rel="noopener noreferrer" target="_blank">Nanyang Technological University</a>, in Singapore. He studies how people can employ <a href="https://spectrum.ieee.org/tag/visualization" target="_self">data visualization</a> techniques to get more out of <a href="https://spectrum.ieee.org/tag/artificial-intelligence" target="_self">artificial intelligence</a> tools.</p><h3>YONG WANG</h3><br/><p><strong>EMPLOYER </strong></p><p><strong></strong>Nanyang Technological University, in Singapore</p><p><strong>POSITION </strong></p><p><strong></strong>Assistant professor of computing and data science</p><p><strong>IEEE MEMBER GRADE </strong></p><p><strong></strong>Member</p><p><strong>ALMA MATERS</strong> </p><p>Harbin Institute of Technology in China; Huazhong University of Science and Technology in Wuhan, China; Hong Kong University of Science and Technology</p><p>“Visualization helps people understand complex ideas,” Wang says. “If we design these tools well, they can make advanced technologies accessible to everyone.”</p><p>For his work in the field, the <a href="https://www.computer.org/" rel="noopener noreferrer" target="_blank">IEEE Computer Society</a> visualization and graphics technical committee presented him with its 2025 <a href="https://www.ntu.edu.sg/computing/news-events/news/detail/only-two-in-the-world--ccds-s-wang-yong--first-asian-honoured-by-ieee-for-advancing-visualisation" rel="noopener noreferrer" target="_blank">Significant New Researcher Award</a><a href="https://www.computer.org/" rel="noopener noreferrer" target="_blank">. </a>The recognition highlights his growing influence in fields including <a href="https://spectrum.ieee.org/tag/human-computer-interaction" target="_self">human-computer interaction</a> and <a href="https://spectrum.ieee.org/isaac-asimov-robotics#:~:text=We%20need%20clear%20boundaries.%20While,wasted%20time%2C%20emotional%20distress%2C%20and" target="_self">human-AI collaboration</a>—areas becoming more important as the world generates more data than humans can easily interpret.</p><h2>Growing up in rural Hunan</h2><p>Wang was born in southwestern <a href="https://www.chinatoday.com/city/hunan.htm" rel="noopener noreferrer" target="_blank">Hunan Province</a>. China’s economy was still developing, and life in his village was modest. Most families in Hunan grew rice, vegetables, and fruit to support themselves.</p><p>Wang’s parents worked in agriculture too, and his father often traveled to cities to earn money working in a factory or on construction jobs. 
The extra income helped support the family and made it possible for Wang to attend college.</p><p>“I’m very grateful to my parents,” Wang says. “They never attended university, but they strongly supported my education.”</p><p class="pull-quote">“If we build tools that help people understand information, then more people can participate in science and innovation. That’s the real power of visualization.”</p><p>Technology was scarce in the village, he says. Computers were almost nonexistent, and televisions were considered precious, expensive household possessions.</p><p>One childhood memory still makes him laugh: During a summer vacation, he and his brother spent so many hours playing video games on a simple console connected to the family’s television that the TV screen eventually burned out.</p><p>“My mother was very angry,” he recalls. “At that time, a TV was a very valuable thing.”</p><p>He says that despite never having used a laptop or experimented with electronic equipment, he was fascinated by the technologies he saw on TV shows.</p><h2>Discovering robotics and engineering</h2><p>His parents encouraged a practical career such as medicine or civil engineering, but he felt drawn to robotics and computing, he says.</p><p>“I didn’t really understand what computer science involved,” he says. “But from what I saw on TV, it looked exciting and advanced.”</p><p>He enrolled at <a href="https://en.hit.edu.cn/" rel="noopener noreferrer" target="_blank">Harbin Institute of Technology</a>, in northeastern China. The esteemed university is known for its engineering programs. His major—automation—combined elements of electrical engineering, robotics, and control systems.</p><p>One of the defining experiences of his undergraduate years, he says, was a university robotics competition. Wang and his teammates designed a robot capable of autonomously navigating around obstacles.</p><p>The design was simple compared with professional systems, he acknowledges. But, he says, the experience was exhilarating. His team placed second, and Wang began to see engineering as both creative and collaborative.</p><p>He graduated with a bachelor’s degree in 2011 and briefly worked as an assistant at the university’s <a href="https://ensa.hit.edu.cn/20668/list.htm" rel="noopener noreferrer" target="_blank">Research Institute of Intelligent Control and Systems</a>.</p><p>In 2014 he took a position as a research intern at <a href="https://www.dji.com/" rel="noopener noreferrer" target="_blank">Da Jiang Innovation</a> in Shenzhen, China.</p><p>That experience helped him clarify his future, he says: “I realized I didn’t enjoy doing repetitive work or simply following instructions. I wanted to explore ideas that interested me, and I wanted to conduct research.” The realization pushed him toward graduate school, he says.</p><h2>Building tools that help humans work with AI</h2><p>Wang received a master’s degree in pattern recognition and image processing from the <a href="https://english.hust.edu.cn/" rel="noopener noreferrer" target="_blank">Huazhong University of Science and Technology</a>, in Wuhan, China, in 2016.</p><p>He then enrolled in the computer science Ph.D. program at the <a href="https://hkust.edu.hk/" rel="noopener noreferrer" target="_blank">Hong Kong University of Science and Technology</a> and earned the degree in 2018. 
He remained there as a postdoctoral researcher until 2020, when he moved to Singapore to join <a href="https://www.smu.edu.sg/" rel="noopener noreferrer" target="_blank">Singapore Management University</a> as an assistant professor of computing and information systems. He moved to Nanyang Technological University as an assistant professor in 2024.</p><p>His research focuses on a challenge facing nearly every business: how to make sense of the enormous amounts of data being generated.</p><p>“We live in an era of information explosions,” Wang says. “Huge amounts of data are generated, and it’s difficult for people to interpret all of it to make better business decisions.”</p><p>Data visualization offers a solution by turning complex information into images, patterns, and diagrams that people can more readily understand.</p><p>But many visualizations still must be designed manually by experts, Wang notes. It’s a time-consuming process that creates a bottleneck, he says.</p><p>His solution is to use large language models and multimodal systems, which can work with text, images, video, and sensor data simultaneously, to automate parts of the process.</p><p>One system developed by his research group lets users design complex infographics through natural-language instructions combined with simple interactions such as drawing on a touchscreen with a finger. It lets nontechnical people generate visualizations without hiring professional designers.</p><p>Another focus of Wang’s research is <a href="https://spectrum.ieee.org/ai-proof-verification" target="_self">human-AI collaboration</a>. AI systems can analyze data at enormous scale, but people still need to be the final decision-makers, he says.</p><p>Visualization helps bridge the gap between human intention and AI’s complex calculations by making the process an AI system uses to reach a result more transparent and understandable.</p><p>“If people understand how the AI system works,” Wang says, “they can collaborate with it more effectively.”</p><p>He recently explored how visualization techniques could help researchers understand <a href="https://spectrum.ieee.org/quantum-computers" target="_self">quantum computing</a>, a field where core concepts—such as superposition, where a bit can be in more than one state at a time—are abstract. In classical computing, the bit state is binary: It’s either 1 or 0. A quantum bit, or qubit, can be 1, 0, or a combination of both at once. The differences get more dizzying from there.</p><p>Visualization tools could help scientists monitor quantum systems and interpret quantum machine-learning models, he says.</p><h2>The importance of IEEE communities</h2><p>Teaching and <a href="https://spectrum.ieee.org/ieee-collabratec-mentoring-program" target="_self">mentoring</a> students remain among the most meaningful parts of Wang’s career, he says.</p><p>Professional communities such as the IEEE Computer Society, he says, play a major role in helping him transform early-stage graduate students, who are often unsure which lines of inquiry to pursue, into independent researchers with a solid technical focus. 
Through conferences, publications, and technical committees, IEEE connects Wang with other researchers working in visualization, AI, and human-computer interaction, he says.</p><p>Those connections have helped him share ideas, collaborate, and stay up to date on innovations in the research community.</p><p>Receiving the Significant New Researcher Award motivates him to continue pushing the field forward, he says.</p><p>Looking back, he says, the distance between his rural village in Hunan and an international research career still feels remarkable. But, he says, the journey reflects something larger about his chosen field: “If we build tools that help people understand information, then more people can participate in science and innovation.</p><p>“That’s the real power of visualization.”</p>]]></description><pubDate>Fri, 24 Apr 2026 18:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/yong-wang-data-visualization</guid><category>Ieee-member-news</category><category>Data-visualization</category><category>Artificial-intelligence</category><category>Human-computer-collaboration</category><category>Quantum-computing</category><category>Ieee-computer-society</category><category>Type-ti</category><dc:creator>Willie D. Jones</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/a-chinese-man-speaking-into-a-podium-microphone-while-on-stage.png?id=65835863&amp;width=980"></media:content></item><item><title>What Anthropic’s Mythos Means for the Future of Cybersecurity</title><link>https://spectrum.ieee.org/ai-cybersecurity-mythos</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-cgi-image-of-a-translucent-padlock-filled-with-0s-and-1s-one-spot-is-broken-and-the-numbers-are-spraying-out-of-that-spot.jpg?id=65714765&width=1245&height=700&coordinates=0%2C156%2C0%2C157"/><br/><br/><p>Two weeks ago, Anthropic <a href="https://red.anthropic.com/2026/mythos-preview/" rel="noopener noreferrer" target="_blank">announced</a> that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, compromising the devices and services we use every day. As a result, <a href="https://spectrum.ieee.org/tag/anthropic" target="_blank">Anthropic</a> is not releasing the model to the general public, but instead to a <a href="https://www.anthropic.com/glasswing" rel="noopener noreferrer" target="_blank">limited number</a> of companies.</p><div class="rm-embed embed-media"><iframe height="110px" id="noa-web-audio-player" src="https://embed-player.newsoveraudio.com/v4?key=q5m19e&id=https://spectrum.ieee.org/ai-cybersecurity-mythos&bgColor=F5F5F5&color=1b1b1c&playColor=1b1b1c&progressBgColor=F5F5F5&progressBorderColor=bdbbbb&titleColor=1b1b1c&timeColor=1b1b1c&speedColor=1b1b1c&noaLinkColor=556B7D&noaLinkHighlightColor=FF4B00&feedbackButton=true" style="border: none" width="100%"></iframe></div><p><span>The news rocked the internet security community. There were few details in Anthropic’s announcement, </span><a href="https://srinstitute.utoronto.ca/news/the-mythos-question-who-decides-when-ai-is-too-dangerous" target="_blank">angering</a><span> many observers. Some speculate that Anthropic </span><a href="https://kingy.ai/ai/too-dangerous-to-release-or-just-too-expensive-the-real-reason-anthropic-is-hiding-its-most-powerful-ai/" target="_blank">doesn’t have</a><span> the GPUs to run the thing, and that cybersecurity was the excuse to limit its release. Others argue Anthropic is holding to their AI safety mission. </span><a href="https://www.nytimes.com/2026/04/07/opinion/anthropic-ai-claude-mythos.html" target="_blank">There’s</a><span> </span><a href="https://www.axios.com/2026/04/08/anthropic-mythos-model-ai-cyberattack-warning" target="_blank">hype</a><span> and </span><a href="https://www.artificialintelligencemadesimple.com/p/anthropics-claude-mythos-launch-is" target="_blank">counter</a><span>-</span><a href="https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier" target="_blank">hype</a><span>, </span><a href="https://www.aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities" target="_blank">reality</a><span> and marketing. It’s a lot to sort out, even if you’re an expert.</span></p><p>We see Mythos as a real but incremental step, one in a long line of incremental steps. But even incremental steps can be important when we look at the big picture.</p><h2>How AI Is Changing Cybersecurity</h2><p>We’ve <a href="https://spectrum.ieee.org/online-privacy" target="_self">written about</a> Shifting Baseline Syndrome, a phenomenon that leads people—the public and experts alike—to discount massive long-term changes that are hidden in incremental steps. It has happened with online privacy, and it’s happening with AI. 
Even if the vulnerabilities found by Mythos could have been found using AI models from last month or last year, they couldn’t have been found by AI models from five years ago.</p><p>The Mythos announcement reminds us that AI has come a long way in just a few years: The baseline really has shifted. Finding vulnerabilities in source code is the type of task that today’s large language models excel at. Regardless of whether it happened last year or will happen next year, it’s been clear for a <a href="https://sockpuppet.org/blog/2026/03/30/vulnerability-research-is-cooked/" target="_blank">while</a> that this kind of capability was coming soon. The question is how we <a href="https://labs.cloudsecurityalliance.org/mythos-ciso/" rel="noopener noreferrer" target="_blank">adapt to it</a>.</p><p>We don’t believe that an AI that can hack autonomously will create permanent asymmetry between offense and defense; it’s likely to be more <a href="https://danielmiessler.com/blog/will-ai-help-moreattackers-defenders" rel="noopener noreferrer" target="_blank">nuanced</a> than that. Some vulnerabilities can be found, verified, and patched automatically. Some vulnerabilities will be hard to find, but easy to verify and patch—consider generic cloud-hosted web applications built on standard software stacks, where updates can be deployed quickly. Still others will be easy to find (even without powerful AI) and relatively easy to verify, but harder or impossible to patch, such as IoT appliances and industrial equipment that are rarely updated or can’t be easily modified.</p><p>Then there are systems whose vulnerabilities will be easy to find in code but difficult to verify in practice. For example, complex distributed systems and cloud platforms can be composed of thousands of interacting services running in parallel, making it difficult to distinguish real vulnerabilities from false positives and to reliably reproduce them.</p><p>So we must separate the patchable from the unpatchable, and the easy to verify from the hard to verify. This taxonomy also gives us guidance on how to protect such systems in an era of powerful AI vulnerability-finding tools.</p><p>Unpatchable or hard-to-verify systems should be protected by wrapping them in more restrictive, tightly controlled layers. You want your fridge or thermostat or industrial control system behind a restrictive and constantly updated firewall, not freely talking to the internet.</p><p>Distributed systems that are fundamentally interconnected should be traceable and should follow the principle of least privilege, where each component has only the access it needs. These are bog-standard security ideas that we might have been tempted to throw out in the era of AI, but they’re still as relevant as ever.</p><h2>Rethinking Software Security Practices</h2><p>This also raises the salience of best practices in software engineering. Automated, thorough, and continuous testing was always important. Now we can take this practice a step further and use defensive AI agents to <a href="https://www.secwest.net/ai-triage" rel="noopener noreferrer" target="_blank">test exploits</a> against a real stack, over and over, until the false positives have been weeded out and the real vulnerabilities and fixes are confirmed, along the lines of the loop sketched below.</p>
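<p>Here is a minimal sketch, in Python, of what such a verification loop might look like. It is illustrative only: Every name in it is hypothetical rather than part of any real tool, and a production pipeline would add sandboxing, logging, and retesting after each fix.</p><pre><code># Hypothetical "VulnOps" triage loop: replay each AI-generated candidate
# exploit against a freshly reset staging stack, and keep only findings
# that reproduce every time. All names here are illustrative.
import subprocess
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str            # human-readable label for the finding
    reset_cmd: list      # command that rebuilds the disposable staging stack
    exploit_cmd: list    # command that attempts the exploit; exits 0 if it lands

def reproduces(candidate, trials=5):
    """A finding counts as real only if it lands on every fresh replay."""
    for _ in range(trials):
        subprocess.run(candidate.reset_cmd, check=True)  # start from a clean slate
        result = subprocess.run(candidate.exploit_cmd)
        if result.returncode != 0:   # the exploit failed this time around
            return False             # flaky, or a false positive
    return True

def triage(candidates):
    """Split candidates into confirmed vulnerabilities and false positives."""
    confirmed = [c for c in candidates if reproduces(c)]
    rejected = [c for c in candidates if c not in confirmed]
    return confirmed, rejected</code></pre><p>The repeated resets are the point: In the hard-to-verify systems described above, a single successful replay proves little, so verification has to be cheap enough to run many times.</p>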
<p>This kind of <a href="https://www.csoonline.com/article/4069075/autonomous-ai-hacking-and-the-future-of-cybersecurity.html" rel="noopener noreferrer" target="_blank">VulnOps</a> is likely to become a standard part of the development process.</p><p>Documentation becomes more valuable, as it can guide an AI agent on a bug-finding mission just as it guides developers. And following standard practices and using standard tools and libraries allows AI and engineers alike to recognize patterns more effectively, even in a world of individual and ephemeral <a href="https://www.csoonline.com/article/4152133/cybersecurity-in-the-age-of-instant-software.html" rel="noopener noreferrer" target="_blank">instant software</a>—code that can be generated and deployed on demand.</p><p>Will this favor <a href="https://www.schneier.com/essays/archives/2018/03/artificial_intellige.html" rel="noopener noreferrer" target="_blank">offense or defense</a>? Probably the defense, eventually, especially in systems that are easy to patch and verify. Fortunately, that includes our phones, web browsers, and major internet services. But today’s cars, electrical transformers, fridges, and lampposts are connected to the internet. Legacy banking and airline systems are networked.</p><p>Not all of those are going to get patched as fast as needed, and we may see a few years of constant hacks until we arrive at a new normal, in which verification is paramount and software is patched continuously.</p>]]></description><pubDate>Thu, 23 Apr 2026 14:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/ai-cybersecurity-mythos</guid><category>Cybersecurity</category><category>Anthropic</category><category>Agentic-ai</category><category>Hacking</category><dc:creator>Bruce Schneier</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-cgi-image-of-a-translucent-padlock-filled-with-0s-and-1s-one-spot-is-broken-and-the-numbers-are-spraying-out-of-that-spot.jpg?id=65714765&amp;width=980"></media:content></item><item><title>This Roboticist-Turned-Teacher Built a Life-Size Replica of ENIAC</title><link>https://spectrum.ieee.org/roboticist-turned-teacher-eniac-replica</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/man-crouches-behind-three-robots.png?id=65575461&width=1245&height=700&coordinates=0%2C219%2C0%2C219"/><br/><br/><p><a href="https://linkedin.com/in/thomas-burick" rel="noopener noreferrer" target="_blank">Tom Burick</a> has always considered himself a builder. Over the years he’s designed robots, constructed a <a href="https://www.youtube.com/watch?v=po58YSF8UKs&t=596s" rel="noopener noreferrer" target="_blank">vintage teardrop trailer</a>, and most recently, led a group of students in building a full-scale replica of a pivotal 1940s computer. </p><p>Burick is a technology instructor at PS Academy in Gilbert, Ariz., a middle and high school for students with <a href="https://spectrum.ieee.org/tag/autism-spectrum-disorder" target="_blank">autism</a> and other specialized learning needs. At the start of the 2025–26 school year, he began a project with his students to build a full-scale replica of the Electronic Numerical Integrator and Computer, or ENIAC, for the <a href="https://spectrum.ieee.org/eniac-80-ieee-milestone" target="_self">80th anniversary of the historic computer’s construction</a>. ENIAC was one of the world’s first programmable electronic computers. When it was built, it was about one thousand times as fast as other machines.</p><p>Before becoming a teacher, Burick owned a robotics company for a decade in the 2000s. But when a financial downturn forced him to close the business, he turned to teaching. “I had so many amazing people help me when I was young [who] really gave me their time and resources, and really changed the trajectory of my life,” Burick says. “I thought I need to pay that forward.”</p><h2>Becoming a Roboticist</h2><p>As a young child in Latrobe, Pa., Burick watched the television show <em><em>Lost in Space</em></em>, which includes a robot character who protects the family. “He was the young boy’s best friend, and I was so captivated by that. I remember thinking to myself, I want that in my life. And that started that lifelong love affair with robotics and technology.”</p><p>He started building toy robots out of anything he could find, and in junior high school, he began adding electronics. “By early high school, I was building full-fledged autonomous, microprocessor-controlled machines,” he says. At age 15, he built a 150-pound steel firefighting robot, for which he won awards from IEEE and other organizations. </p><p>Burick kept building robots and reached out for help from local colleges and universities. He first got in touch with a student at <a href="https://www.cmu.edu/" rel="noopener noreferrer" target="_blank">Carnegie Mellon University</a>, who invited him to visit campus. “My parents drove me down the next weekend, and he gave me a tour of the robotics lab. I was mesmerized. He sent me home with college textbooks and piles of metal and gears and wires,” Burick says. He would read the textbook a page at a time, reading it again and again until he felt he had an understanding of it. Then, to help fill gaps in his understanding, he got in touch with a robotics instructor at <a href="https://www.stvincent.edu/index.html" rel="noopener noreferrer" target="_blank">Saint Vincent College</a>, in his hometown of Latrobe, who let him sit in on classes. Each of these adults, he says, “helped change the trajectory of my life.” </p><p>Toward the end of high school, Burick realized that college wouldn’t be the right environment for him. 
“I was drawn to real-world problem-solving rather than structured coursework and I chose to continue along that path,” he says. Additionally, Burick has <a href="https://my.clevelandclinic.org/health/diseases/23949-dyscalculia" rel="noopener noreferrer" target="_blank">dyscalculia</a>, which makes traditional mathematics more challenging for him. “It pushed me to develop alternative methods of engineering.”</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="recreation of a large machine arranged in a U shape. A podium in the middle reads “ENIAC 80”" class="rm-shortcode" data-rm-shortcode-id="11b834e11cfecce37836f1a912816b02" data-rm-shortcode-name="rebelmouse-image" id="2528e" loading="lazy" src="https://spectrum.ieee.org/media-library/recreation-of-a-large-machine-arranged-in-a-u-shape-a-podium-in-the-middle-reads-u201ceniac-80-u201d.png?id=65575467&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The ENIAC replica Burick’s students built precisely matches what the original computer would have looked like before it was disassembled in the 1950s. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Gamboa</small></p><p>When he graduated, he worked in several tech jobs before starting his own company. In 2000, he opened a computer retail store and adjacent robotics business, White Box Robotics. The idea for the company came when Burick was building a “white box” PC from standard, off-the-shelf components, and realized there was no comparable product for robotics. </p><p>So, he started developing a modular, general-purpose platform that applied white box PC standards to mobile robots. “The robot’s chassis was like a box of Legos,” he says. You could click together two torsos to double its payload, switch out the drive system, or swap its head for a different set of sensors. He filed utility and design <a href="https://patents.justia.com/inventor/thomas-j-burick" target="_blank">patents</a> for the platform, called the 914 PC-Bot, and after merging with a Canadian defense robotics company called Frontline Robotics, started production. They sold about 200 robots in 17 countries, Burick says. </p><p>Then the 2008 financial crisis hit. White Box Robotics held on for a couple of years, shuttering in late 2010. “I got to live my life’s dream for 10 years,” he says. After closing White Box, “there was some soul searching” about what to do next. He recalled the impact his own mentors had, and decided to pay it forward by teaching. </p><h2>Neurodiversity as a Superpower</h2><p> In 2013, Burick started working in a vocational training program for young adults living with autism. The program didn’t have a technical arm, so he started one and ran it until 2019, when he was hired to be a technology instructor at PS Academy Arizona. </p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Student using power drill on wood under instructor’s guidance in workshop." 
class="rm-shortcode" data-rm-shortcode-id="f2ffb116874f4573ed0d154a8392678a" data-rm-shortcode-name="rebelmouse-image" id="bd65a" loading="lazy" src="https://spectrum.ieee.org/media-library/student-using-power-drill-on-wood-under-instructor-u2019s-guidance-in-workshop.png?id=65575500&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Burick and one of his students assemble the base for one of ENIAC’s three portable function tables, which contained banks of switches that stored numerical constants. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Bri Mason</small></p><p> Burick feels he can connect with his students, because he is also neurodivergent. Throughout his childhood, he was told what he wasn’t able to do because of his dyscalculia diagnosis. “People tell you what it takes, but they never tell you what it gives,” Burick says. </p><p>In adulthood, he realized that some of his strengths are linked to dyscalculia, too, like strong 3D spatial reasoning. “I have this CAD program that runs in my head 24 hours a day,” he says. “I think the reason I was successful in robotics, truly, was because of the dyscalculia…. To me, [it] has always been a superpower.” </p><p>Whenever his students say something disparaging about living with autism, he shares his own experience. “You need to have maybe just a bit more tenacity than others, because there are parts of it you do have to fight through, but you come through with gifts and strengths,” he tells them. </p><p>And Burick’s classes aim to play to those strengths. “I didn’t want my technology program to feel like craft hour,” he says. Instead, through projects like the ENIAC replica, students can leverage traits many of them share, like the abilities to hyperfocus and to precisely repeat tasks. </p><h2>Recreating ENIAC</h2><p> Burick has taught his students about ENIAC for several years. While reading about it, he learned that the massive, 27-tonne computer was dismantled and partially destroyed after being decommissioned in 1955. Although a few of ENIAC’s 40 original panels are on display at museums, “there was no hope of ever seeing it together again. We wanted to give the world that experience,” Burick says. </p><p> He and his students started by learning about ENIAC, and even Burick was surprised by how complex the 80-year-old computer was. They built a one-twelfth scale model to help the students better understand what it looked like. Seeing the students light up, Burick became confident in their ability to move onto the full-scale model, and he started ordering supplies. </p><p> ENIAC was composed of 40 large metal panels arranged in a U-shape that housed its many vacuum tubes, resistors, capacitors, and switches. Twenty of the panels were accumulators with the same design, so the students started with these, then worked through smaller groupings of panels. The repeating panels brought symmetry to ENIAC, Burick says, but it was also one of the main challenges of recreating it. If one part was slightly out of place, the next one would be too and the mistake would compound. </p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Group of students in a gym holding large silver patterned boards facing the camera." 
class="rm-shortcode" data-rm-shortcode-id="ec54f1caeb938893258637e62d3d7e21" data-rm-shortcode-name="rebelmouse-image" id="1cc34" loading="lazy" src="https://spectrum.ieee.org/media-library/group-of-students-in-a-gym-holding-large-silver-patterned-boards-facing-the-camera.png?id=65575510&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The students installed 500 simulated vacuum tubes in each of the panels here, for a total of 18,000 vacuum tubes.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Gamboa</small></p><p> Once they constructed the panels, they added ENIAC’s three function tables, which stored numerical constants in banks of switches, then two punch-card machines. Finally, they installed 18,000 simulated vacuum tubes. In total, the project used nearly 300 square meters of thick-ream cardboard, 1,600 hot-glue-gun sticks, and 7 gallons of black paint. </p><p> The scale of the machine—and his students’ work—left Burick in awe. “By the time we were done, I felt like I was in a room full of scientists,” he says.</p><p> Previously, Burick’s students built an 8-foot-long drivable Tesla Cybertruck (“complete with a 400-watt stereo system and a subwoofer”) and he plans to keep the momentum with another recreation—maybe from the Apollo moon missions. </p><p>“I go to work every day, and I feel passionate about robotics [and] technology. I get to share that passion with the students,” Burick says. “I get to feel what it’s like to be in the position of the people that helped me. It closes that loop, and I find that really rewarding.”</p>]]></description><pubDate>Thu, 23 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/roboticist-turned-teacher-eniac-replica</guid><category>Robotics</category><category>Eniac</category><category>Teaching</category><category>Neurodivergent</category><category>Computer-history</category><dc:creator>Gwendolyn Rak</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/man-crouches-behind-three-robots.png?id=65575461&amp;width=980"></media:content></item><item><title>Reviving Teletext for Ham Radio</title><link>https://spectrum.ieee.org/reviving-teletext-for-ham-radio</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-personal-computer-displays-a-blocky-computer-graphic-depicting-a-city-skyline-with-the-words-cq-cq-cq-de-kb1wnr-in-front-of.png?id=65575350&width=1245&height=700&coordinates=0%2C372%2C0%2C372"/><br/><br/><p>Once upon a time in Europe, television remote controls had a magic <a href="https://en.wikipedia.org/wiki/Teletext" rel="noopener noreferrer" target="_blank">teletext</a> button. Years before the internet stole into homes, pressing that button brought up teletext digital information services with hundreds of constantly updated pages. Living in Ireland in the 1980s and ’90s, my family accessed the national teletext service—<a href="https://en.wikipedia.org/wiki/RT%C3%89_Aertel" rel="noopener noreferrer" target="_blank">Aertel</a>—multiple times a day for weather and news bulletins, as well as things like TV program guides and updates on airport flight arrivals.</p><p>It was an elegant system: fast, low bandwidth, unaffected by user load, and delivering readable text even on analog television screens. So when I recently saw it was the <a href="https://bsky.app/profile/40yearsago.bsky.social/post/3mcfgzqm2ns2w" rel="noopener noreferrer" target="_blank">40th anniversary of Aertel</a>’s test transmissions, it reactivated a thought that had been rolling around in my head for years. Could I make a ham-radio version of teletext?</p><h2>What is Teletext?</h2><p>First developed in the United Kingdom and rolled out to the public by the <a href="https://www.bbc.com/articles/cvg360rr91zo" rel="noopener noreferrer" target="_blank">BBC</a> under the name <a href="http://news.bbc.co.uk/2/hi/entertainment/3681174.stm" rel="noopener noreferrer" target="_blank">Ceefax</a>, teletext exploited a quirk of analog television signals. These signals transmitted video frames as <a href="https://spectrum.ieee.org/build-this-8bit-home-computer-with-just-5-chips" target="_self">lines of luminosity and color</a>, plus some additional blank lines that weren’t displayed. Teletext piggybacked a digital signal onto these spares, transmitting a carousel of pages over time. Using their remotes, viewers typed in the three-digit code of the page they wanted. Generally within a few seconds, the carousel would cycle around and display the desired page.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A diagram depicting the enlargement and interpolation process of teletext characters." class="rm-shortcode" data-rm-shortcode-id="aae9b892c22251e316e1444080ad0757" data-rm-shortcode-name="rebelmouse-image" id="2d6e6" loading="lazy" src="https://spectrum.ieee.org/media-library/a-diagram-depicting-the-enlargement-and-interpolation-process-of-teletext-characters.png?id=65575388&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Teletext created unusually legible text in the 8-bit era by enlarging alphanumeric characters and interpolating new pixels by looking for existing pixels touching diagonally, and adding whitespace between characters. Graphic characters were not interpolated, and featured blocky chunks known as sixels for their 2-by-3 arrangement. My modern recreation uses the open-source font Bedstead, which replicates the look of teletext, including the graphics characters. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">James Provost</small></p><p>Teletext is composed of characters that can be one of eight colors. 
Control codes in the character stream select colors and can also produce effects like flashing text and double-height characters. The text’s legibility was better than most computers could manage at the time, thanks to the <a href="https://www.cpcwiki.eu/imgs/9/9e/Mullard_SAA5050_datasheet.pdf" target="_blank">SAA5050</a> character-generator chip at the heart of teletext. Although characters are internally stored on this chip in 6-by-10-pixel cells—fewer pixels than the <a href="https://home-2002.code-cop.org/c64/" rel="noopener noreferrer" target="_blank">typical 8-by-8-pixel cell</a> used in 1980s home computers—the SAA5050 interpolates additional pixels for alphanumeric characters on the fly, making the effective resolution <a href="https://en.wikipedia.org/wiki/Mullard_SAA5050" rel="noopener noreferrer" target="_blank">10 by 18 pixels</a>. The trade-off is very low-resolution graphics, comprising characters that use a 2-by-3 set of blocky pixels.</p><p>Teletext screens use a 40-by-24-character grid. This means that a kilobyte of memory can store a full page of multicolor text (40 by 24 is 960 characters, at one byte each), half <a href="https://www.c64-wiki.com/wiki/Screen_RAM" rel="noopener noreferrer" target="_blank">the memory required</a> for a similar amount of text on, for example, the Commodore 64. The <a href="https://www.computinghistory.org.uk/det/182/acorn-bbc-micro-model-b/" rel="noopener noreferrer" target="_blank">BBC Microcomputer</a> took advantage of this by putting <a href="https://www.bbcbasic.co.uk/bbcwin/manual/bbcwinh.html" rel="noopener noreferrer" target="_blank">an SAA5050</a> on its motherboard, which could be accessed in one of the computer’s graphics modes. Despite the crude graphics, some educational games used this mode, most notably <a href="https://www.4mation.co.uk/retro/retrogranny.html" rel="noopener noreferrer" target="_blank"><em>Granny’s Garden</em></a>, which filled the same cultural niche among British schoolchildren that <a href="https://en.wikipedia.org/wiki/The_Oregon_Trail_(1985_video_game)" rel="noopener noreferrer" target="_blank"><em>The Oregon Trail</em></a> did for their U.S. counterparts.</p><p>By the 2010s, most teletext services had ceased broadcasting. But teletext is still <a href="https://www.bbc.com/audio/play/m00268v4" rel="noopener noreferrer" target="_blank">remembered fondly by many</a>, and enthusiasts are keeping it alive, <a href="https://teletextarchaeologist.org/" rel="noopener noreferrer" target="_blank">recovering and archiving old content</a>, running <a href="https://nmsceefax.co.uk/" rel="noopener noreferrer" target="_blank">internet-based services with current newsfeeds</a>, and developing systems that make it possible to <a href="https://www.raspberrypi.com/news/create-your-own-teletext-service/" rel="noopener noreferrer" target="_blank">create and display teletext</a> with modern TVs.</p><h2>Putting Teletext Back on the Air</h2><p>I wanted to do something a little different. Inspired by how the BBC Micro co-opted teletext for its own purposes, I thought it might make a great radio protocol. In particular I thought it could be a digital counterpart to <a href="https://en.wikipedia.org/wiki/Slow-scan_television" rel="noopener noreferrer" target="_blank">slow-scan television</a> (SSTV).</p><p>SSTV is an analog method of transmitting pictures, typically including banners with ham-radio call signs and other messages. 
SSTV is fun, but, true to its name, it’s slow—the most popular protocols take <a href="https://sevierraces.org/all-about-slow-scan-tv" rel="noopener noreferrer" target="_blank">a little under 2 minutes to send an image</a>—and it can be tricky to get a complete picture with legible text. For that reason, SSTV images are often broadcast multiple times.</p><p class="pull-quote"><span>Teletext is still remembered fondly by many.</span></p><p>I decided to send the teletext using the <a href="https://en.wikipedia.org/wiki/AX.25" target="_blank">AX.25</a> protocol, which encodes ones and zeros as audible tones. For <a href="https://www.arrl.org/frequency-bands" target="_blank">VHF and UHF transmissions</a> at a rate of 1,200 baud, it would take 11 seconds to send one teletext screen (a 960-byte page is 7,680 bits of payload, plus AX.25 framing overhead). Over <a href="https://en.wikipedia.org/wiki/High_frequency" rel="noopener noreferrer" target="_blank">HF bands</a>, AX.25 data is normally sent at 300 baud, which would result in a still-acceptable 44 seconds per screen. When a teletext page is sent repeatedly, any missed or corrupted rows are filled in with new ones. So in a little over 2 minutes, I could send a screen three times over HF, and the receiver would automatically combine the data. I also wanted to build the system in Python for portability, with an editor for creating pages, an AX.25 encoder and decoder, and a monitor for displaying received images; a sketch below shows the kind of row-by-row merging the receiving side has to do.</p>
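<p>Here’s a minimal sketch of that merging logic. It isn’t Spectel’s actual code: It simply assumes that each decoded AX.25 frame delivers one checksum-verified teletext row, prefixed by its row number, and it keeps the latest good copy of each of the page’s 24 rows.</p><pre><code># Minimal sketch of receive-side page merging (not Spectel's actual code).
# Assumes the AX.25 decoder has already discarded frames with bad
# checksums, and that each surviving frame carries one teletext row:
# a row-number byte followed by 40 character bytes.

ROWS, COLS = 24, 40

class PageAssembler:
    def __init__(self):
        self.rows = [None] * ROWS          # latest good copy of each row

    def add_frame(self, payload):
        """Accept one verified frame and slot its row into the page."""
        if len(payload) != COLS + 1:
            return                         # malformed frame; ignore it
        row = payload[0]
        if row in range(ROWS):
            self.rows[row] = payload[1:]   # later repeats overwrite earlier ones

    def complete(self):
        """True once every row of the page has been received."""
        return all(r is not None for r in self.rows)

    def screen(self):
        """Return 24 rows, substituting blanks for anything still missing."""
        blank = b"\x20" * COLS             # 0x20 is the space character
        return [r if r is not None else blank for r in self.rows]</code></pre><p>Because each frame is independent and the merge is idempotent, it doesn’t matter which repeat of the page a row arrives in: Three transmissions over HF give every row three chances to get through.</p>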
<p>The reason I hadn’t done this before is that it requires digesting the details of the <a href="https://www.ax25.net/AX25.2.2-Jul%2098-2.pdf" rel="noopener noreferrer" target="_blank">AX.25 standard</a> and <a href="https://www.etsi.org/deliver/etsi_i_ets/300700_300799/300706/01_60/ets_300706e01p.pdf" rel="noopener noreferrer" target="_blank">teletext’s official spec</a>, and then translating them into a suite of software, which I never seemed to have the time to do. So I tried an experiment within an experiment, and turned to vibe coding.</p><p>Despite the popularity of vibe coding with developers, I have reservations. Even if concerns about <a href="https://spectrum.ieee.org/responsible-ai" target="_self">AI slop</a>, <a href="https://spectrum.ieee.org/ai-water-usage" target="_self">the environment</a>, and <a href="https://spectrum.ieee.org/high-bandwidth-memory-shortage" target="_self">memory hoarding</a> were not on the table, I would still worry about the <a href="https://spectrum.ieee.org/top-programming-languages-2025" target="_self">reliance on centralized systems</a> that vibe coding brings. The whole point of a DIY project is to, well, do it yourself. A DIY project lets you craft things for your own purposes, not just operate within someone else’s profit margins and policies.</p><p>Still, criticizing a technology from afar isn’t ideal, so I directed <a href="https://chat.chatbotapp.ai/" rel="noopener noreferrer" target="_blank">Anthropic’s Claude</a> toward the AX.25 and teletext specs and told it what I wanted. After about 250,000 to 300,000 tokens and several nights of back and forth about bugs and features, I had the complete system running without writing a single line of code. Being honest with myself, I doubt this system—which I’m calling Spectel—would ever have come about without vibe coding.</p><p>But I didn’t learn anything new about how teletext works, and only a little bit more about AX.25. Updates are contingent on my paying Anthropic’s fees. So I remain deeply ambivalent about vibe coding. And one final test remains in any case: trying Spectel out on HF bands. Of course, that means I’ll need willing partners out in the ether. So if you’re a ham who’d like to help out, let me know in the comments below!</p>]]></description><pubDate>Wed, 22 Apr 2026 16:19:08 +0000</pubDate><guid>https://spectrum.ieee.org/reviving-teletext-for-ham-radio</guid><category>Amateur-radio</category><category>Ham-radio</category><category>Llms</category><category>Vibe-coding</category><category>Teletext</category><category>Ax25</category><dc:creator>Stephen Cass</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/a-personal-computer-displays-a-blocky-computer-graphic-depicting-a-city-skyline-with-the-words-cq-cq-cq-de-kb1wnr-in-front-of.png?id=65575350&amp;width=980"></media:content></item><item><title>Building an Interregional Transmission Overlay for a Resilient U.S. Grid</title><link>https://content.knowledgehub.wiley.com/energy-in-motion-unlocking-the-interconnected-grid-of-tomorrow/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/stylized-red-wsp-logo-on-a-dark-teal-background.png?id=65565498&width=980"/><br/><br/><p>Examining how a U.S. Interregional Transmission Overlay could address aging grid infrastructure, surging demand, and renewable integration challenges.</p><p><strong>What Attendees will Learn</strong></p><ol><li>Why the current regional grid structure is approaching its limits — Explore how coal-fired generation retirements, renewable integration, aging infrastructure past its 50-year lifespan, and exponential large-load growth from data centers and manufacturing reshoring are creating unprecedented pressure on the U.S. transmission system.</li><li>How an Interregional Transmission Overlay (ITO) would work — Understand the architecture of a high-capacity overlay using HVDC and 765 kV EHVAC technologies, how it would bridge the East/West/ERCOT seams, integrate renewable generation from resource-rich regions to demand centers, and potentially reduce electric system costs by hundreds of billions of dollars through 2050.</li><li>The five major challenges facing interregional transmission — Examine the obstacles of cross-state planning coordination, investment barriers including permitting and cost allocation, energy market harmonization across regions, supply chain limitations for specialized equipment, and political and regulatory uncertainties that must be navigated.</li><li>Actionable steps to begin building the ITO roadmap — Learn how utilities and developers can identify strategic corridors, form multi-stakeholder oversight entities, coordinate regional studies, secure state and federal support through FERC Order 1920 and DOE programs, and develop equitable cost allocation frameworks to move from vision to implementation.</li></ol><div><span><a href="https://content.knowledgehub.wiley.com/energy-in-motion-unlocking-the-interconnected-grid-of-tomorrow/" target="_blank">Download this free whitepaper now!</a></span></div>]]></description><pubDate>Wed, 22 Apr 2026 10:00:02 +0000</pubDate><guid>https://content.knowledgehub.wiley.com/energy-in-motion-unlocking-the-interconnected-grid-of-tomorrow/</guid><category>Type-whitepaper</category><category>Grid-resiliency</category><category>Transmission</category><category>Infrastructure</category><dc:creator>WSP</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/65565498/origin.png"></media:content></item><item><title>What to Consider Before You Accept a Management Role</title><link>https://spectrum.ieee.org/ic-or-manager</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-illustration-of-stylized-people-wearing-business-casual-clothing.webp?id=65257424&width=1245&height=700&coordinates=0%2C112%2C0%2C113"/><br/><br/><p><em>This article is crossposted from </em>IEEE Spectrum<em>’s careers newsletter. <a href="https://engage.ieee.org/Career-Alert-Sign-Up.html" rel="noopener noreferrer" target="_blank"><em>Sign up now</em></a><em> to get insider tips, expert advice, and practical strategies, <em><em>written i<em>n partnership with tech career development company <a href="https://www.parsity.io/" rel="noopener noreferrer" target="_blank">Parsity</a> and </em></em></em>delivered to your inbox for free!</em></em></p><h2>The Individual Contributor–Manager Fork: It’s Not a Promotion. It’s a Profession Change.</h2><p>When I was promoted to engineering manager of a mid-sized team at Clorox, I thought I had made it.</p><p>More money. More stock. More visibility. More proximity to senior leadership. From the outside, and on paper, it was clearly a promotion.</p><p>I had often heard the phrase, “Management isn’t a promotion. It’s a job switch.” I brushed it off as cliché advice engineers tell each other to sound wise.</p><p>It turns out both things were true. It was a promotion. It was also an entirely different job.</p><p>And I was nowhere near ready for what that meant.</p><h3>A Shift in Priorities</h3><p>There’s surprisingly little training for new managers. As engineers, we’re highly technical and used to mastering complex systems. Many of us assume managing people will be easier than distributed systems. Or we assume it’s just “more meetings.”</p><p>Both assumptions are wrong.</p><p>Yes, I had more meetings. But what changed most wasn’t my calendar, it was how my impact was measured. As an individual contributor, my output was visible. Code shipped. Features delivered. Bugs fixed.</p><p>As a manager, my impact became indirect. It flowed through other people.</p><p>That shift was disorienting.</p><p>So I fell back into my comfort zone. I started writing more code. I tried to be the strongest engineer on the team. It felt productive and measurable.</p><p>It was also a mistake.</p><p>By trying to be the number one engineer, I was neglecting my actual job. I wasn’t supporting senior engineers. I wasn’t unblocking systemic problems. I wasn’t building career paths. I was competing with the very people I was supposed to enable.</p><p>Management is about amplification.</p><h3>Learning to Redefine Impact</h3><p>The turning point came when I began each week with a simple question:</p><p><strong>What is the single most impactful thing I can do right now?</strong></p><p>Often, it wasn’t code. It was writing a document that clarified direction. It was fixing a broken process with a single point of failure. It was redistributing ownership so that knowledge wasn’t concentrated in one person.</p><p>I started deliberately removing myself from implementation work. I committed to writing almost no code. That forced trust. It also revealed gaps in the system that I could address at the right level: through coaching, documentation, hiring, or process changes.</p><p>Another major shift was taking one-on-one meetings seriously.</p><p>Many engineers dislike one-on-ones. They can feel awkward or devolve into status updates. I scheduled them every other week and approached them with a mix of tactical alignment and human check-in.</p><p>I rarely started with engineering questions. 
Instead:</p><ul><li>Are you happy with the work you’re doing?<br/></li><li>Do you feel stretched or stagnant?<br/></li><li>What’s frustrating you right now?</li></ul><p>Burnout doesn’t show up in Jira tickets. Neither does quiet disengagement.</p><p>Those conversations helped me anticipate turnover, redistribute workload, and build trust.</p><p>I also spent more time thinking about career ladders. Was I giving my team the kind of work that would help them grow? Was I hoarding high-visibility projects? Was I clear about what senior-level impact looked like?</p><p>That work felt less tangible than code, but it moved the needle far more.</p><h3>Why I Went Back to IC</h3><p>Ultimately, I returned to the individual contributor track.</p><p>Part of it was practical: I was laid off from my management role, and the market rewarded senior IC roles more strongly at the time. But if I’m honest, the deeper reason was simpler.</p><p>I love writing code.</p><p>I enjoy improving systems and helping people, but the part of my day that energized me most was still building. Management required relinquishing that. You can’t be absorbed in technical implementation and deeply people-focused at the same time. Something has to give.</p><p>Personally, I don’t need to climb the corporate ladder to feel successful. And you might not have to. Many organizations offer technical leadership tracks that are truly at parity with management when it comes to salary bands. Staff and principal engineers steer strategy without managing people.</p><p>If you want to remain deeply technical, you should think very carefully before moving into people management. It requires surrendering control over implementation and focusing on alignment, growth, and long-range planning. If you don’t genuinely care about those things, you won’t just be unhappy; you’ll make your team unhappy.</p><h3>A Simple Test Before You Choose</h3><p>Before taking a management role, ask yourself:</p><ul><li>Do I get energy from solving people-problems every day?<br/></li><li>Am I comfortable measuring impact indirectly?<br/></li><li>Would I be satisfied if I rarely wrote production code again?<br/></li><li>Do I want leverage or craft?</li></ul><p>There’s no right answer.</p><p>The IC/manager fork isn’t about prestige. It’s about what kind of work you want your days to consist of.</p><p>Choose based on energy, not ego.</p><p>—Brian</p><h2><a href="https://spectrum.ieee.org/state-of-ai-index-2026" target="_self">12 Graphs That Explain the State of AI in 2026</a></h2><p>Stanford University’s AI Index is out for 2026, tracking trends and notable developments in artificial intelligence. This year, China has taken a clear lead in AI model releases and industrial robotics. AI models are rapidly surpassing benchmarks and reaching ever-higher levels of compute, but public trust in AI and confidence in government regulation of AI are mixed. </p><p><a href="https://spectrum.ieee.org/state-of-ai-index-2026" target="_blank">Read more here.</a></p><h2><a href="https://spectrum.ieee.org/large-physics-models-design-engineering" target="_self">AI Models Trained on Physics Are Changing Engineering</a></h2><p>Much like large language models have learned from existing texts, new AI physics models are being trained on simulation results. This results in “large physics models” that can simulate situations in transportation, aerospace, or semiconductor engineering much faster than traditional physics simulations. 
Using new AI physics models “can be anywhere between 10,000 to close to a million times faster,” says Jacomo Corbo, CEO and co-founder of PhysicsX.</p><p><a href="https://spectrum.ieee.org/large-physics-models-design-engineering" target="_blank">Read more here.</a></p><h2><a href="https://spectrum.ieee.org/temple-university-student-membership-perks" target="_self">Temple University Student Highlights IEEE Membership Perks</a></h2><p>Kyle McGinley is an IEEE Student Member pursuing a bachelor’s degree in electrical and computer engineering at Temple University. Joining IEEE helped him to develop the skills necessary for real-world teams. “In school, they don’t teach you how to communicate with people. They only teach you how to remember stuff,” he says.</p><p><a href="https://spectrum.ieee.org/temple-university-student-membership-perks" target="_blank">Read more here.</a></p>]]></description><pubDate>Tue, 21 Apr 2026 16:43:49 +0000</pubDate><guid>https://spectrum.ieee.org/ic-or-manager</guid><category>Tech-careers</category><category>Career-development</category><category>Careers-newsletter</category><dc:creator>Brian Jenney</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-illustration-of-stylized-people-wearing-business-casual-clothing.webp?id=65257424&amp;width=980"></media:content></item><item><title>The Forgotten History of Hershey’s Electric Railway in Cuba</title><link>https://spectrum.ieee.org/hershey-electric-railway-cuba</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/black-and-white-photo-of-a-train-station-platform-with-people-on-it.jpg?id=65558846&width=1245&height=700&coordinates=0%2C178%2C0%2C179"/><br/><br/><p>Why does a chocolatier build a railroad? For Milton S. Hershey, it was a logical response to a sugar shortage brought on by World War I. The Hershey Chocolate Co. was by then a chocolate-making powerhouse, having refined the automation and mass production of its products, including the eponymous Hershey’s Milk Chocolate Bar and the bite-size Hershey’s Kiss. To satisfy its many customers, the company needed a steady supply of sugar. Plus, it wanted a way to circumvent the American Sugar Refining Co., also known as the Sugar Trust, which had a virtual monopoly on sugar processing in the United States.</p><div class="rm-embed embed-media"><iframe height="110px" id="noa-web-audio-player" src="https://embed-player.newsoveraudio.com/v4?key=q5m19e&id=https://spectrum.ieee.org/hershey-electric-railway-cuba&bgColor=F5F5F5&color=1b1b1c&playColor=1b1b1c&progressBgColor=F5F5F5&progressBorderColor=bdbbbb&titleColor=1b1b1c&timeColor=1b1b1c&speedColor=1b1b1c&noaLinkColor=556B7D&noaLinkHighlightColor=FF4B00&feedbackButton=true" style="border: none" width="100%"></iframe></div><h2>Why Did Hershey Build an Electric Railroad in Cuba?</h2><p>Beginning in 1916, Hershey looked to Cuba to secure his sugar supply. According to historian Thomas R. Winpenny, the chocolate magnate had a “personal infatuation” with the lush, beautiful island. What’s more, U.S. business interests there were protected by a treaty known as the <a href="https://en.wikipedia.org/wiki/Platt_Amendment" rel="noopener noreferrer" target="_blank">Platt Amendment</a>, which made Cuba a satellite state of the United States.</p><p>Like many industrialists of the day, Hershey believed in vertical integration, and the company’s Cuban operation eventually expanded to include five sugar plantations, five modern sugar mills, a refinery, several company towns, and an oil-fired power plant with three substations to run it all.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A 1943 rail pass for the Hershey Cuban Railway" class="rm-shortcode" data-rm-shortcode-id="a11e30af3d20dc2089d7dad3fb37fcd7" data-rm-shortcode-name="rebelmouse-image" id="9f555" loading="lazy" src="https://spectrum.ieee.org/media-library/a-1943-rail-pass-for-the-hershey-cuban-railway.jpg?id=65558881&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">A 1943 rail pass entitled the holder to travel on all ordinary passenger trains of the Hershey Electric Railway. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Hershey Community Archives</small></p><p>The company also built a railroad. To maximize the sugar yield, the cane needed to be ground promptly after being cut, and the rail system offered an efficient means of transporting the cane to the mills, and ensured that the mills operated around the clock during the harvest. By 1920, one of Hershey’s three main sites was processing 135,000 tonnes of cane, yielding 14.4 million kilograms of sugar.</p><p>Initially, the Hershey Cuban Railway consisted of a single 56-kilometer-long standard gauge track on which ran seven steam locomotives that burned coal or oil. But due to the high cost of the imported fuel and the inefficiency of the locomotives, Hershey began electrifying the line in 1920. 
It was the first electrified rail line in Cuba, though rail lines in Europe and the United States were already being electrified.</p><p>In addition to powering the various Hershey entities, the generating station supplied Matanzas and the smaller towns with electricity. F.W. Peters of General Electric’s Railway and Traction Engineering Department published a <a href="https://babel.hathitrust.org/cgi/pt?id=nyp.33433062631860&seq=317" target="_blank">detailed account of the system</a> in the April 1920 <em><em>General Electric Review</em></em>.</p><h2>Hershey’s Company Towns</h2><p>The company town of Central Hershey became the headquarters for Hershey’s Cuba operations. (“Central” is the Cuban term for a mill and the surrounding settlement.) It sat on a plateau overlooking the port of Santa Cruz del Norte, about halfway between Havana and Matanzas in the heart of Cuba’s sugarcane region.</p><p>Hershey imported the industrial utopian model he had established in Hershey, Penn., which was itself inspired by Richard and George Cadbury’s Bournville Village outside Birmingham, England.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Elderly man in a suit sits at a polished desk with papers in a dim office." class="rm-shortcode" data-rm-shortcode-id="8cd8c5885fb34f31d89a424b72aa30f0" data-rm-shortcode-name="rebelmouse-image" id="f1a35" loading="lazy" src="https://spectrum.ieee.org/media-library/elderly-man-in-a-suit-sits-at-a-polished-desk-with-papers-in-a-dim-office.jpg?id=65558890&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The chocolate magnate Milton S. Hershey had a “personal infatuation” with Cuba.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Underwood Archives/Getty Images</small></p><p>In Cuba as in Pennsylvania, Hershey’s factory complex was complemented by comfortable homes for his workers and their families, as well as swimming pools, baseball fields, and affordable medical clinics staffed with doctors, nurses, and dentists. Managers had access to a golf course and country club in Central Hershey. Schools provided free education for workers’ children.</p><p>Milton Hershey himself had very little formal education, and so in 1909 he and his wife, Catherine, established the <a href="https://www.mhskids.org/about/history/" target="_blank">Hershey Industrial School</a> in Hershey, Penn. There, white, male orphans received an education until they were 18 years old. Now known as the Milton Hershey School, it has broadened its admission criteria considerably over the years.</p><p>Hershey duplicated this concept in the Cuban company town of Central Rosario, founding the <a href="https://www.mhskids.org/blog/built-sugar-hershey-cuba/" rel="noopener noreferrer" target="_blank">Hershey Agricultural School</a>. The first students were children whose parents had died in a horrific 1923 train accident on the Hershey Electric Railway. The high-speed, head-on collision between two trains killed 25 people and injured 50 more.</p><p>Milton Hershey was a generous philanthropist who by most accounts truly cared for his employees and their welfare, yet his early 20th-century paternalism was not without fault. He was a fierce opponent of union activity, and any hard-won pay increases for workers often came at the expense of profit-sharing benefits. Like other U.S. 
businessmen in Cuba, Hershey employed migrant seasonal labor from neighboring Caribbean islands, undercutting the wages of local workers. Historians are still wrangling with how to capture the long-lasting effects of U.S. economic imperialism on Cuba.</p><h2>Can the Hershey Electric Railway Be Revived?</h2><p>Hershey continued to acquire new sugar plantations in Cuba throughout the 1920s, eventually owning about 24,300 hectares and leasing another 12,000 hectares. In 1946, a year after Milton Hershey’s death and amid growing political uncertainty on the island, the company sold its Cuban interests to the Cuban Atlantic Sugar Co. In addition to Hershey’s sugar operations, the sale included a peanut oil plant, four electric plants, and 404 km of railroad track plus locomotives and train cars.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="An old red electric passenger train car sitting on the tracks." class="rm-shortcode" data-rm-shortcode-id="c41680a8f71b3de96c77e4160eb744d1" data-rm-shortcode-name="rebelmouse-image" id="795e3" loading="lazy" src="https://spectrum.ieee.org/media-library/an-old-red-electric-passenger-train-car-sitting-on-the-tracks.jpg?id=65558895&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Service on the Hershey Electric Railway in Cuba continued into at least the 2010s but became increasingly sporadic, with aging equipment like this car at the Central Hershey station. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Hershey Community Archives</small></p><p>The Central Hershey sugar refinery continued to operate even after the Cuban Revolution but eventually closed in 2002. Passenger service, meanwhile, continued on the Hershey Electric Railway, albeit sporadically: By 2012, there were only two trips a day between Havana and Matanzas. This video, from 2013, gives a good sense of the route:</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="0500e67f75054ea735b8136f0ec25663" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/nn7jEDz9Bew?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> </p><p><span>A colleague of mine who studies Cuban history told me that in his travels to the country over almost 30 years, he has never been able to ride the Hershey electric train. It was always out of service or had restricted service due to the island’s </span><a href="https://spectrum.ieee.org/cuba-energy-crisis" target="_self">chronic electricity shortages</a><span>, which have only gotten worse in recent years. I’ve been trying to find out if any part of the line is still operating. If you happen to know, please add a comment below.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Photo of a stopped train, with passengers standing in the doorways looking down the track." 
class="rm-shortcode" data-rm-shortcode-id="51154594edade1fbef0e2f88cd626088" data-rm-shortcode-name="rebelmouse-image" id="6f7d4" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-a-stopped-train-with-passengers-standing-in-the-doorways-looking-down-the-track.jpg?id=65558907&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Cuba’s frequent power outages make it difficult to operate the Hershey Electric Railway. In this 2009 photo, passengers await the restoration of electricity so they can continue their journey.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Adalberto Roque/AFP/Getty Images</small></p><p>A <a href="https://tots.upol.cz/pdfs/tot/2024/03/07.pdf" target="_blank">2024 analysis</a> of the economic potential and challenges of reactivating Cuba’s Hershey Electric Railway noted that an electric railway could be a hedge against climate change and geopolitical factors. But it also acknowledged that frequent power outages and damaged infrastructure argue against reactivating the electrified railway, and it favored the diesel engines used on most of Cuba’s rail network.</p><p>Cuba has been mostly off-limits to U.S. tourists for my entire life, but it was one of my grandmother’s favorite vacation spots. I would love to imagine a future where political ties are restored, the power grid is stabilized, and the Hershey Electric Railway is reopened to the Cuban public and to curious visitors like me.</p><p><em><em>Part of a </em></em><a href="https://spectrum.ieee.org/collections/past-forward/" target="_self"><em><em>continuing series</em></em></a><em> </em><em><em>looking at historical artifacts that embrace the boundless potential of technology.</em></em></p><p><em><em>An abridged version of this article appears in the May 2026 print issue as “This Chocolate Empire Ran on Electric Rails.”</em></em></p><h3>References</h3><br/><p><strong></strong>In April 1920, F.W. Peters of General Electric’s Railway and Traction Engineering Department wrote a detailed account called “<a href="https://babel.hathitrust.org/cgi/pt?id=nyp.33433062631860&seq=317" target="_blank">Electrification of the Hershey Cuban Railway</a>” in the <em>General Electric Review, </em>which was later abstracted in <a href="https://archive.org/details/scientificameric1161newy/page/540/mode/1up" target="_blank"><em>Scientific American Monthly</em></a><em> </em>to reach a broader audience<em>.</em></p><p>Thomas R. Winpenny’s article “<a href="https://share.google/DpnuhNK3R6govGIio" target="_blank">Milton S. 
Hershey Ventures into Cuban Sugar</a>” in <em>Pennsylvania History: A Journal of Mid-Atlantic Studies, </em>Fall 1995, provided background on the business side of Hershey’s Cuba enterprise.</p><p>Florian Wondratschek’s 2024 article “<a href="https://tots.upol.cz/pdfs/tot/2024/03/07.pdf" rel="noopener noreferrer" target="_blank">Between Investment Risk and Economic Benefit: Potential Analysis for the Reactivation of the Hershey Railway in Cuba</a>” in <em>Transactions on Transport Sciences </em>brought the story up to the present.</p><p>And if you’re interested in a visual take on the Hershey operation in Cuba, check out the documentary <a href="https://www.youtube.com/watch?v=7QcrY0CwMu0" rel="noopener noreferrer" target="_blank"><em>Milton Hershey’s Cuba</em></a> by Ric Morris, a professor of Spanish and linguistics at Middle Tennessee State University.</p>]]></description><pubDate>Tue, 21 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/hershey-electric-railway-cuba</guid><category>Past-forward</category><category>Cuba</category><category>Electric-railroad</category><category>Trains</category><category>Sugarcane</category><category>Food-production</category><dc:creator>Allison Marsh</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/black-and-white-photo-of-a-train-station-platform-with-people-on-it.jpg?id=65558846&amp;width=980"></media:content></item><item><title>The USC Professor Who Pioneered Socially Assistive Robotics</title><link>https://spectrum.ieee.org/socially-assistive-robotics</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-smiling-blonde-woman-poses-with-a-humanoid-robotic-torso-wearing-a-usc-sweatshirt.jpg?id=65574156&width=1245&height=700&coordinates=0%2C187%2C0%2C188"/><br/><br/><p>When the robotics engineering field that <a href="https://www.linkedin.com/in/maja-mataric-5b670014/" rel="noopener noreferrer" target="_blank">Maja Matarić</a> wanted to work in didn’t exist, she helped create it. In 2005 she helped define the new area of socially assistive robotics.</p><p>As an associate professor of computer science, neuroscience, and pediatrics at the <a href="https://www.usc.edu/" rel="noopener noreferrer" target="_blank">University of Southern California</a>, in Los Angeles, she developed robots to provide personalized therapy and care through social interactions.</p><h3>Maja Matarić</h3><br/><p><strong>Employer </strong></p><p><strong></strong>University of Southern California, Los Angeles</p><p><strong>Job Title </strong></p><p><strong></strong>Professor of computer science, neuroscience, and pediatrics</p><p><strong>Member grade</strong></p><p>Fellow</p><p><strong>Alma maters </strong></p><p><strong></strong>University of Kansas and MIT</p><p>The robots could have conversations, play games, and respond to emotions.</p><p>Today the IEEE Fellow is a professor at USC. She studies how robots can help students with anxiety and depression undergo cognitive behavioral therapy. CBT focuses on changing a person’s negative thought patterns, behaviors, and emotional responses.</p><p>For her work, she received a 2025 Robotics Medal from <a href="https://www.massrobotics.org/" rel="noopener noreferrer" target="_blank">MassRobotics</a>, which recognizes female researchers advancing robotics. The Boston-based nonprofit provides robotics startups with a workspace, prototyping facilities, mentorship, and networking opportunities.</p><p>When receiving the award at the ceremony in Boston, Matarić was overcome with joy, she says.</p><p>“I’ve been very fortunate to be honored with several awards, which I am grateful for. But there was something very special about getting the MassRobotics medal, because I knew at least half the people in the room,” she says. “Everyone was just smiling, and there was a great sense of love.”</p><h2>Seeing herself as an engineer</h2><p>Matarić grew up in Belgrade, Serbia. Her father was an engineer, and her mother was a writer. After her father died when she was 16, Matarić and her mother moved to the United States.</p><p>She credits her father for igniting her interest in engineering, and her uncle who worked as an aerospace engineer for introducing her to computer science.</p><p>Matarić says she didn’t consider herself an engineer until she joined USC’s faculty, since she always had worked in computer science.</p><p>“In retrospect, I’ve always been an engineer,” Matarić says. 
“But I didn’t set out specifically thinking of myself as one—which is just one of the many things I like to convey to young people: You don’t always have to know exactly everything in advance.”</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="d2fd2dba0701e451f2378a616fd4821c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/NbTDF3_djI8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">Maja Matarić and her lab are exploring how socially assistive robots can help improve the communication skills of children with autism spectrum disorder.</small> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">National Science Foundation News</small> </p><p>While pursuing her bachelor’s degree in computer science at the <a href="https://www.ku.edu/" rel="noopener noreferrer" target="_blank">University of Kansas</a> in Lawrence, she was introduced to industrial robotics through a textbook. After earning her degree in 1987, she had an opportunity to continue her education as a graduate student at MIT’s AI Lab (now the <a href="https://www.csail.mit.edu/node/2873" rel="noopener noreferrer" target="_blank">Computer Science and Artificial Intelligence Lab</a>). During her first year, she explored the different research projects being conducted by faculty members, she said in a <a href="https://ethw.org/Oral-History:Maja_Mataric" rel="noopener noreferrer" target="_blank">2010 oral history</a> conducted by the <a href="https://www.ieee.org/content/dam/ieee-org/ieee/web/org/about/history-center/ieee-history-center-newsletter-114.pdf" rel="noopener noreferrer" target="_blank">IEEE History Center</a>. She met IEEE Life Fellow <a href="https://spectrum.ieee.org/rodney-brooks-three-laws-robotics" target="_self">Rodney Brooks</a>, who was working on novel reactive and behavior-based robotic systems. His work so excited her that she joined his lab and conducted her master’s thesis under his tutelage.</p><p>Inspired by the way animals use landmarks to navigate, Matarić developed <a href="https://dspace.mit.edu/bitstream/handle/1721.1/7027/AITR-1228.pdf?...#:~:text=Toto%20is%20an%20example%20of,learn%2D%20ing%20and%20path%20planning." rel="noopener noreferrer" target="_blank">Toto</a>, the first navigating behavior-based robot. Toto used distributed models to map the AI Lab building where Matarić worked and plan its path to different rooms. Toto used sonar to detect walls, doors, and furniture, according to Matarić’s paper, “<a href="https://pages.ucsd.edu/~ehutchins/cogs8/mataric-primer.pdf" rel="noopener noreferrer" target="_blank">The Robotics Primer</a>.”</p><p>After earning her master’s degree in AI and robotics in 1990, she continued to work under Brooks as a doctoral student, pioneering distributed algorithms that allowed a team of up to 20 robots to execute complex tasks in tandem, including searching for objects and exploring their environment.</p><p>Matarić earned her Ph.D. in AI and robotics in 1994 and joined <a href="https://www.brandeis.edu/" rel="noopener noreferrer" target="_blank">Brandeis University</a>, in Waltham, Mass., as an assistant professor of computer science. 
There she founded the Interaction Lab, where she developed autonomous robots that work together to accomplish tasks.</p><p>Three years later, she relocated to California and joined USC’s <a href="https://viterbischool.usc.edu/" rel="noopener noreferrer" target="_blank">Viterbi School of Engineering</a> as an assistant professor in computer science and neuroscience.</p><p>In 2002 she helped to found the Center for Robotics and Embedded Systems (now the <a href="https://rasc.usc.edu/" rel="noopener noreferrer" target="_blank">Robotics and Autonomous Systems Center</a>). The RASC focuses on research into human-centric and scalable robotic systems and promotes interdisciplinary partnerships across USC.</p><p>The shift in Matarić’s research came after she gave birth to her first child in 1998. When her daughter was a bit older and asked why her mother worked with robots, Matarić wanted to be able to “say something better than ‘I publish a lot of research papers,’ or ‘it’s well-recognized,’” she says.</p><p class="pull-quote">“In academia, you can be in a leadership role and still do research. It’s a wonderful and important opportunity that lets academics be on top of our field and also train the next generation of students and help the next generation of faculty colleagues.”</p><p>“Kids don’t consider those good answers, and they’re probably right,” she says. “This made me realize I was in a position to do something different. And I really wanted the answer to my daughter’s future question to be, ‘Mommy’s robots help people.’”</p><p>Matarić and her doctoral student <a href="https://www.unr.edu/cse/people/david-feil-seifer" rel="noopener noreferrer" target="_blank">David Feil-Seifer</a> presented a paper defining socially assistive robotics at the 2005 <a href="https://icorr-c.org/" rel="noopener noreferrer" target="_blank">International Conference on Rehabilitation Robotics</a>. It was the only paper at the conference that talked about helping people complete tasks and learn skills by speaking with them rather than by performing physical jobs, she says.</p><p>Feil-Seifer is now a professor of computer science and engineering at the <a href="https://www.unr.edu/" rel="noopener noreferrer" target="_blank">University of Nevada</a> in Reno.</p><p>At the same time, she founded the <a href="https://uscinteractionlab.web.app/" rel="noopener noreferrer" target="_blank">Interaction Lab at USC</a> and focused it on creating robots that provide social, rather than physical, support.</p><p>“At this point in my career journey, I’ve matured to a place where I don’t want to do just curiosity-driven research alone,” she says. “Plenty of what my team and I do today is still driven by curiosity, but it is answering the question: ‘How can we help someone live a better life?’”</p><p>In 2006 she was promoted to full professor and made the senior associate dean for research in USC’s Viterbi School of Engineering. In 2012 she became vice dean for research.</p><p>“In academia, you can be in a leadership role and still do research,” she says. 
“It’s a wonderful and important opportunity that lets academics be on top of our field and also train the next generation of students and help the next generation of faculty colleagues.”</p><h2>Research in socially assistive robotics</h2><p>One of the longest research projects Matarić has led at her Interaction Lab is exploring how socially assistive robots can help improve the communication skills of children with <a href="https://www.mayoclinic.org/diseases-conditions/autism-spectrum-disorder/symptoms-causes/syc-20352928" rel="noopener noreferrer" target="_blank">autism spectrum disorder</a>. ASD is a lifelong neurological condition that affects the way people interact with others, and the way they learn. Children with ASD often struggle with social behaviors such as reading nonverbal cues, playing with others, and making eye contact.</p><p>Matarić and her team developed a robot, <a href="https://spectrum.ieee.org/041910-bandit-little-dog-and-more-usc-shows-off-its-robots" target="_self">Bandit</a>, that can play games with a child and give the youngster words of affirmation. Bandit is 56 centimeters tall and has a humanlike head, torso, and arms. Its head can pan and tilt. The robot uses two <a href="https://www.edmundoptics.com/c/firewire-cameras/1014/?srsltid=AfmBOopjvhJQdzbmxyRP-Bgi50iYGeAIcQp3WkFHPM4R78EHqgr4buL0" rel="noopener noreferrer" target="_blank">FireWire</a> cameras as its eyes, and it has a movable mouth and eyebrows, allowing it to exhibit a variety of facial expressions, according to the <a href="https://spectrum.ieee.org/" target="_self"><em><em>IEEE Spectrum</em></em></a>’s <a href="https://robotsguide.com/robots/bandit" rel="noopener noreferrer" target="_blank">robots guide</a>. Its torso is attached to a wheeled base.</p><p>The study showed that when interacting with Bandit, children with ASD exhibited social behaviors that were out of the ordinary for them, such as initiating play and imitating the robot.</p><p>Matarić and her team also studied how the robot could serve as a social and cognitive aid for elderly people and stroke patients. Bandit was programmed to instruct and motivate users to perform daily movement exercises such as seated aerobics.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A smiling blonde woman gestures at a customizable tabletop robot that wears a knit outfit of a cute animal over its shell." class="rm-shortcode" data-rm-shortcode-id="d0240a8f48f895ca49e2fdac2114e5f9" data-rm-shortcode-name="rebelmouse-image" id="e361f" loading="lazy" src="https://spectrum.ieee.org/media-library/a-smiling-blonde-woman-gestures-at-a-customizable-tabletop-robot-that-wears-a-knit-outfit-of-a-cute-animal-over-its-shell.jpg?id=65574186&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Maja Matarić and doctoral student Amy O’Connell testing Blossom, which is being used to study how it can aid students with anxiety or depression.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">University of Southern California</small></p><p>Over the years, Matarić’s lab developed other robots including <a href="https://magazine.viterbi.usc.edu/spring-2020/features/say-hi-to-kiwi/" target="_blank">Kiwi</a> and <a href="https://dl.acm.org/doi/10.1145/3310356" rel="noopener noreferrer" target="_blank">Blossom</a>. 
Kiwi, which looked like an owl, helped children with ASD learn social and cognitive skills, helped motivate elderly people living alone to be more physically active, and mediated discussions among family members. Blossom, originally developed at <a href="https://www.cornell.edu/" rel="noopener noreferrer" target="_blank">Cornell</a>, was adapted by the Interaction Lab to make it less expensive and easier for individuals to personalize. The robot is being used to study how it can help students with anxiety or depression practice cognitive behavioral therapy.</p><p>This line of research began when Matarić learned that large language model (LLM) chatbots were being promoted to help people with mental health struggles, she said in an <a href="https://edhub.ama-assn.org/jn-learning/audio-player/18985349" rel="noopener noreferrer" target="_blank">episode of the AMA Medical News podcast</a>.</p><p>“It is generally not easy to get [an appointment with a] therapist, or there might not be insurance coverage,” she said. “These, combined with the rates of anxiety and depression, created a real need.”</p><p>That made the chatbot idea appealing, she says, but she was interested to see how effective the chatbots were compared with a friendly robot such as Blossom.</p><p>Matarić and her team used the same LLMs to power CBT practice with a chatbot and with Blossom. They ran a two-week study in the USC dorms, where students were randomly assigned to complete CBT exercises daily with either a chatbot or the robot. Participants filled out a clinical assessment to measure their psychiatric distress before and after each session.</p><p>The study showed that students who interacted with the robot experienced a significant decrease in their psychiatric distress, Matarić said in the podcast, and students who interacted with the chatbot did not.</p><p class="pull-quote">“Joining an [IEEE] society has an impact, and it can be personal. That’s why I recommend my students join the organization—because it’s important to get out there and get connected.”</p><p>She and her team also reviewed transcripts of conversations between the students and the robot to evaluate how well the LLM responded to the participants. They found the robot was more effective than the chatbot, even though both were using the same model.</p><p>Based on those findings, in 2024 Matarić received a <a href="https://reporter.nih.gov/search/l8sqmMXycEaOMmv3hQHU1A/project-details/11064932" rel="noopener noreferrer" target="_blank">grant</a> from the U.S. <a href="https://www.nimh.nih.gov/" rel="noopener noreferrer" target="_blank">National Institute of Mental Health</a> to conduct a six-week clinical trial to explore how effective a socially assistive robot could be at delivering CBT practice. The trial, currently underway, also is expected to study how Blossom can be personalized to adapt to each user’s preferences and progress, including the way the robot moves, which exercises it recommends, and what feedback it gives.</p><p>During the trial, the 120 students participating are wearing <a href="https://spectrum.ieee.org/fitbit" target="_self">Fitbits</a> to study their physiologic responses. 
The participants fill out a clinical assessment to measure their psychiatric distress before and after each session.</p><p>Data including the participants’ feelings of relating to the robot, intrinsic motivation, engagement, and adherence will be assessed by the research team, Matarić says.</p><p>She says she’s proud of the graduate students working on this project, and seeing them grow as engineers is one of the most rewarding parts of working in academia.</p><p>“Engineers generally don’t anticipate having to work with human study participants and needing to understand psychology in addition to the hardcore engineering,” she says. “So the students who choose to do this research are just wonderful, caring people.”</p><h2>Finding a community at IEEE</h2><p>Matarić joined IEEE as a graduate student in 1992, the year she published her first paper in <a href="https://ieeexplore.ieee.org/document/1303682" rel="noopener noreferrer" target="_blank">IEEE Transactions on Robotics and Automation</a>. The paper, “<a href="https://ieeexplore.ieee.org/document/143349/" rel="noopener noreferrer" target="_blank">Integration of Representation Into Goal-Driven Behavior-Based Robots</a>,” described her work on Toto.</p><p>As a member of the <a href="https://www.ieee-ras.org/" rel="noopener noreferrer" target="_blank">IEEE Robotics and Automation Society</a>, she says she has gained a community of like-minded people. She enjoys attending conferences including the <a href="https://2025.ieee-icra.org/" rel="noopener noreferrer" target="_blank">IEEE International Conference on Robotics and Automation</a>, the <a href="https://www.ieee-ras.org/conferences-workshops/financially-co-sponsored/iros/" rel="noopener noreferrer" target="_blank">IEEE/RSJ International Conference on Intelligent Robots and Systems</a>, and the <a href="https://humanrobotinteraction.org/2026/" rel="noopener noreferrer" target="_blank">ACM/IEEE International Conference on Human-Robot Interaction</a>, which is closest to her field of research.</p><p>Matarić credits IEEE Life Fellow <a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10896982" rel="noopener noreferrer" target="_blank">George Bekey</a>, the founding editor in chief of the <a href="https://dl.acm.org/journal/tor" rel="noopener noreferrer" target="_blank"><em><em>IEEE Transactions on Robotics</em></em></a>, for recruiting her for the USC engineering faculty position. He knew of her work through her graduate advisor Brooks, who published a paper in the journal that introduced reactive control and the subsumption architecture, which became the foundation of a new way to control robots. It is his <a href="https://ieeexplore.ieee.org/document/108703" rel="noopener noreferrer" target="_blank">most cited paper</a>. Bekey, who was editor in chief at the time, helped guide Brooks through the challenging review process. Matarić joined Brooks’s lab at MIT two years after its publication, and her work on Toto built on that foundation.</p><p>“Joining a society has an impact, and it can be personal,” she says. 
“That’s why I recommend my students join the organization—because it’s important to get out there and get connected.”</p>]]></description><pubDate>Mon, 20 Apr 2026 18:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/socially-assistive-robotics</guid><category>Ieee-member-news</category><category>Robots</category><category>Socially-assistive-robotics</category><category>Mental-health</category><category>Ieee-robotics-and-automation-soc</category><category>Type-ti</category><dc:creator>Joanna Goodrich</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-smiling-blonde-woman-poses-with-a-humanoid-robotic-torso-wearing-a-usc-sweatshirt.jpg?id=65574156&amp;width=980"></media:content></item><item><title>How Engineers Kick-Started the Scientific Method</title><link>https://spectrum.ieee.org/francis-bacon-scientific-method</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/illustration-of-cornelis-drebbel-francis-bacon-and-salomon-de-caus-with-images-of-a-ship-gears-a-model-of-the-universe-and.png?id=65539363&width=1245&height=700&coordinates=0%2C16%2C0%2C17"/><br/><br/><p><em></em>In 1627, a year after the death of the philosopher and statesman <a href="https://www.britannica.com/biography/Francis-Bacon-Viscount-Saint-Alban" rel="noopener noreferrer" target="_blank">Francis Bacon</a>, a short, evocative tale of his was published. <a href="https://www.gutenberg.org/files/2434/2434-h/2434-h.htm" rel="noopener noreferrer" target="_blank"><em><em>The New Atlantis</em></em></a> describes how a ship blown off course arrives at an unknown island called Bensalem. At its heart stands Salomon’s House, an institution devoted to “the knowledge of causes, and secret motions of things” and to “the effecting of all things possible.” The novel captured Bacon’s vision of a science built on skepticism and empiricism and his belief that understanding and creating were one and the same pursuit.</p><div class="rm-embed embed-media"><iframe height="110px" id="noa-web-audio-player" src="https://embed-player.newsoveraudio.com/v4?key=q5m19e&id=https://spectrum.ieee.org/francis-bacon-scientific-method&bgColor=F5F5F5&color=1b1b1c&playColor=1b1b1c&progressBgColor=F5F5F5&progressBorderColor=bdbbbb&titleColor=1b1b1c&timeColor=1b1b1c&speedColor=1b1b1c&noaLinkColor=556B7D&noaLinkHighlightColor=FF4B00&feedbackButton=true" style="border: none" width="100%"></iframe></div><p>No mere scholar’s study filled with curiosities, Salomon’s House had deep-sunk caves for refrigeration, towering structures for astronomy, sound-houses for acoustics, engine-houses, and optical perspective-houses. Its inhabitants bore titles that still sound futuristic: Merchants of Light, Pioneers, Compilers, and Interpreters of Nature.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Engraved title page of \u201cThe Advancement and Proficience of Learning\u201d with ship and globes" class="rm-shortcode" data-rm-shortcode-id="888d24b04de32d66d216409368256998" data-rm-shortcode-name="rebelmouse-image" id="9fb45" loading="lazy" src="https://spectrum.ieee.org/media-library/engraved-title-page-of-u201cthe-advancement-and-proficience-of-learning-u201d-with-ship-and-globes.png?id=65539387&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Francis Bacon wrote The Advancement and Proficience of Learning.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Public Domain</small></p><p>Bacon didn’t conjure his story from nothing. Engineers he likely had met or observed firsthand gave him reason to believe such an institution could actually exist. Two in particular stand out: the Dutch engineer <a href="https://www.britannica.com/biography/Cornelis-Jacobszoon-Drebbel" target="_blank">Cornelis Drebbel</a> and the French engineer <a href="https://en.wikipedia.org/wiki/Salomon_de_Caus" target="_blank">Salomon de Caus</a>. Their bold creations suggested that disciplined making and testing could transform what we know.</p><h2>Engineers show the way</h2><p>Drebbel came to England around 1604 at the invitation of <a href="https://en.wikipedia.org/wiki/James_VI_and_I" target="_blank">King James I</a>. His audacious inventions quickly drew notice. 
By the early 1620s, he unveiled a contraption that bordered on fantasy: a boat that could dive beneath the Thames and resurface hours later, ferrying passengers from Westminster to Greenwich. Contemporary descriptions mention tubes reaching the surface to supply air, while later accounts claim Drebbel had found chemical means to replenish it. He refined the underwater craft through iterative builds, each informed by test dives and adjustments. His other creations included a perpetual-motion device driven by heat and air-pressure changes, a mercury regulator for egg incubation, and advanced microscopes.</p><p>De Caus, who arrived in England around 1611, created ingenious fountains that transformed royal gardens into animated spectacles. Visitors marveled as statues moved and birds sang in water-driven automatons, while hidden pipes and pumps powered elaborate fountains and mythic scenes. In 1615, de Caus published <a href="https://archive.org/details/raisonsdesforce00Caus" target="_blank"><em><em>The Reasons for Moving Forces</em></em></a>, an illustrated manual on water- and air-driven devices like spouts, hydraulic organs, and mechanical figures. What set him apart was scale and spectacle: He pressed ancient physical principles into the service of courtly theater.</p><p>Drebbel’s airtight submersibles and methodical trials echo in the motion studies and environmental chambers of Salomon’s House. De Caus’s melodic fountains and hidden mechanisms parallel its acoustic trials and optical illusions. From such hands-on workshops, Bacon drew the lesson that trustworthy knowledge comes from working within material constraints, through gritty making and testing. On the island of Bensalem, he imagines an entire society organized around it.</p><p>Beyond inspiring Bacon’s fiction, figures like Drebbel and de Caus honed his emerging philosophy. In 1620, Bacon published <a href="https://www.gutenberg.org/ebooks/45988" target="_blank"><em><em>Novum Organum</em></em></a>, which critiqued traditional philosophical methods and advocated a fresh way to investigate nature. He pointed to printing, gunpowder, and the compass as practical inventions that had transformed the world far more than abstract debates ever could. Nature reveals its secrets, Bacon argued, when probed through ingenious tools and stringent tests. <em><em>Novum Organum</em></em> laid out the rationale, while <em><em>New Atlantis </em></em>gave it a vivid setting. </p><h2>A final legacy to science</h2><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Engraved title page of Bacon\u2019s *Novum Organum* with ships between two pillars" class="rm-shortcode" data-rm-shortcode-id="443f32b4eb542e7f2493dadbb1232ef8" data-rm-shortcode-name="rebelmouse-image" id="559cf" loading="lazy" src="https://spectrum.ieee.org/media-library/engraved-title-page-of-bacon-u2019s-novum-organum-with-ships-between-two-pillars.png?id=65539379&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Francis Bacon also wrote Novum Organum.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Public Domain</small></p><p>That devotion to inquiry followed Bacon to the roadside one day in March 1626. In a biting late-winter chill, he halted his carriage for an impromptu trial. 
He bought a hen and helped pack its gutted body with fresh snow to test whether freezing alone could prevent decay. Unfortunately, the cold seeped through Bacon’s own body, and within weeks pneumonia claimed him. Bacon’s life ended with an experiment—and set in motion a larger one. In 1660, a group of London thinkers <a href="https://sirbacon.org/royalsociety.htm" target="_blank">hailed Bacon as their inspiration</a> in <a href="https://royalsociety.org/about-us/who-we-are/history/" target="_blank">founding the Royal Society</a>. Their motto, <em><em>Nullius in verba</em></em> (“take no one’s word for it”), committed them to evidence over authority, and their ambition was nothing less than to create a Salomon’s House for England.</p><p>The Royal Society and its successors realized fragments of Bacon’s dream, institutionalizing experimental inquiry. Over the following centuries, though, a distorting story took root: Scientists discover nature’s truths, and the rest is just engineering. Nineteenth-century “men of science” <a href="https://www.gutenberg.org/files/1216/1216-h/1216-h.htm" target="_blank">pressed for greater recognition</a> and invented the title of “scientist,” creating a new professional hierarchy. Across the Atlantic, U.S. <a href="https://www.asme.org/topics-resources/content/robert-henry-thurston" target="_blank">engineers</a> adopted the rigorous science-based curricula of French and German technical schools and recast engineering as “applied science” to gain institutional legitimacy. </p><p>We still call engineering “applied science,” a label that retrofits and reverses history. Alongside it stands “technology,” a <a href="https://www.ft.com/content/a48ca1fb-83ba-4fb6-80f6-cd7115f8c452" target="_blank">catchall word</a> that obscures as much as it describes. And we speak of “development” as if ideas cascade neatly from theory to practice. But <a href="https://spectrum.ieee.org/engineering-and-humanities" target="_blank">creation and comprehension have been partners</a> from the start. Yes, theory does equip engineers with tools to push for further insights. But knowing often follows making, arising from things that someone made work.</p><p>Bacon’s imaginary academy offered only fleeting glimpses of its inventions and methods. Yet he had seen the real thing: engineers like Drebbel and de Caus who tested, erred, iterated, and pushed their contraptions past the edge of known theory. From his observations of those muddy, noisy endeavors, Bacon forged his blueprint for organized inquiry. 
Later generations of scientists would reduce Bacon’s ideas to the clean, orderly “scientific method.” But in the process, they lost sight of its <a href="https://spectrum.ieee.org/engineering-is-not-science" target="_blank">inventive roots</a>.</p>]]></description><pubDate>Sun, 19 Apr 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/francis-bacon-scientific-method</guid><category>History-of-technology</category><category>Science-and-technology</category><category>Charles-babbage</category><category>Francis-bacon</category><dc:creator>Guru Madhavan</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/illustration-of-cornelis-drebbel-francis-bacon-and-salomon-de-caus-with-images-of-a-ship-gears-a-model-of-the-universe-and.png?id=65539363&amp;width=980"></media:content></item><item><title>Designing Broadband LPDA-Fed Reflector Antennas With Full-Wave EM Simulation</title><link>https://content.knowledgehub.wiley.com/efficient-design-and-simulation-of-lpda-fed-parabolic-reflector-antennas/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/wipl-d-logo.png?id=26851496&width=980"/><br/><br/><p>A practical guide to designing log-periodic dipole array fed parabolic reflector antennas using advanced 3D MoM simulation — from parametric modeling to electrically large structures.</p><p><strong>What Attendees will Learn</strong></p><ol><li>How to set design requirements for LPDA-fed reflector antennas — Understand the key specifications including bandwidth ratio, gain targets, and VSWR matching constraints across the full operating range from 100 MHz to 1 GHz.</li><li>Why advanced 3D EM solvers enable simulation of electrically large multiscale structures — Learn how higher order basis functions, quadrilateral meshing, geometrical symmetry, and CPU/GPU parallelization extend MoM simulation capability by an order of magnitude.</li><li>How to apply a systematic three-step design strategy with proven workflow starting with first optimizing the stand-alone LPDA for VSWR and gain, then integrating the reflector, and finally tuning parameters to satisfy all performance requests including gain and impedance matching.</li><li>How parametric CAD modeling accelerates LPDA design — Discover how self-scaling geometry, automated wire-to-solid conversion, and multiple-copy-with-scaling features enable fully parametrized antenna models that streamline optimization across dozens of design variants.</li></ol><div><span><a href="https://content.knowledgehub.wiley.com/efficient-design-and-simulation-of-lpda-fed-parabolic-reflector-antennas/" target="_blank">Download this free whitepaper now!</a></span></div>]]></description><pubDate>Fri, 17 Apr 2026 14:00:50 +0000</pubDate><guid>https://content.knowledgehub.wiley.com/efficient-design-and-simulation-of-lpda-fed-parabolic-reflector-antennas/</guid><category>Type-whitepaper</category><category>Broadband</category><category>Antennas</category><category>Simulation</category><dc:creator>WIPL-D</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/26851496/origin.png"></media:content></item><item><title>IEEE Entrepreneurship Connects Hardware Startups With Investors</title><link>https://spectrum.ieee.org/ieee-entrepreneurship-hardware-startups-investors</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/groups-of-people-seated-together-at-several-tables-inside-of-a-large-meeting-hall.jpg?id=65559941&width=1245&height=700&coordinates=0%2C156%2C0%2C157"/><br/><br/><p>Roughly 90 percent of <a href="https://bowoftheseus.substack.com/p/what-is-hard-tech" rel="noopener noreferrer" target="_blank">hard tech</a> startups fail due to funding constraints, longer R&D timelines for developing hardware, and the complexity of manufacturing their products, according to a number of studies.</p><p>Generally, these startups require up to 50 percent more investor financing than software ones, according to <a href="https://ehandbook.com/why-is-hardtech-so-effing-hard-a652738c886a" rel="noopener noreferrer" target="_blank">a <em><em>Medium</em></em> article</a>. Typically, they need at least US $30 million, according to <a href="https://www.lucid.now/blog/cost-of-capital-saas-vs-hardware-startups/" rel="noopener noreferrer" target="_blank">a <em><em>Lucid</em></em> article</a>. That’s double the funding needed by software companies on average.</p><p>To help them connect with investors, <a href="https://entrepreneurship.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Entrepreneurship</a> in 2024 launched its <a href="https://entrepreneurship.ieee.org/venturesummits" rel="noopener noreferrer" target="_blank">Hard Tech Venture Summits</a>. The two-day events connect founders with potential investors and other <a href="https://spectrum.ieee.org/thinking-like-an-entrepreneur" target="_self">entrepreneurs</a>. Attendees include manufacturers, design engineers, and intellectual property lawyers.</p><p>“Even though there are a lot of startup investor conferences, it’s hard to find those focused on hard tech,” says <a href="https://ca.linkedin.com/in/joannewongreddscapital" rel="noopener noreferrer" target="_blank">Joanne Wong</a>, who helped initiate the program and is now the chair. She is a general partner at <a href="https://reddscapital.com/" rel="noopener noreferrer" target="_blank">Redds Capital</a>, a California-based venture capital firm that invests in global early-stage IT startups.</p><p>The IEEE member is also an entrepreneur. She founded <a href="https://spectrum.ieee.org/cloud-software-manages-biomedical-data" target="_self">SciosHub</a> in 2020. The company’s software-as-a-service and informatics platform automates the data-management process for biomedical research labs.</p><p>“Many investors are focused on AI software—which is good,” she says. “But for hard tech companies, it is still hard to find support.”</p><p>The summit also includes a workshop to help founders navigate manufacturing processes and regulatory compliance. The event is open to IEEE members and others.</p><p>IEEE is a natural fit for the program, Wong says, because hard tech is synonymous with electrical engineering.</p><p>“Some of the domains we’re covering are <a href="https://www.ieee-ras.org/" rel="noopener noreferrer" target="_blank">robotics</a>, <a href="https://eds.ieee.org/" rel="noopener noreferrer" target="_blank">semiconductors</a>, and <a href="https://ieee-aess.org/home" rel="noopener noreferrer" target="_blank">aerospace technology</a>. IEEE has societies for all these fields,” she says. “Because of that, there are many resources within the organizations for startups, whether it be mentors or guides on how to commercialize products.”</p><p>There are several venture summits planned for this year. 
Two are scheduled in collaboration with the <a href="https://ieeesystemscouncil.org/ieee-systems-council-welcome" rel="noopener noreferrer" target="_blank">IEEE Systems Council</a>: this month in <a href="https://entrepreneurship.ieee.org/venturesummitsiliconvalley" rel="noopener noreferrer" target="_blank">Menlo Park, Calif.</a>, and in October in <a href="https://entrepreneurship.ieee.org/venturesummittoronto" rel="noopener noreferrer" target="_blank">Toronto</a>.</p><p>On 10 and 11 June, a third <a href="https://entrepreneurship.ieee.org/venturesummitboston" rel="noopener noreferrer" target="_blank">summit</a> is scheduled to take place in Boston at the <a href="https://mtt.org/" rel="noopener noreferrer" target="_blank">IEEE Microwave Theory and Technology Society</a>’s <a href="https://ims-ieee.org/attend" rel="noopener noreferrer" target="_blank">International Microwave Symposium</a>.</p><p>More events are being planned for next year in Asia, Europe, Latin America, and North America.</p><h2>Networking and a pitch competition</h2><p>Each summit includes keynote speakers, followed by networking roundtables. Each table is composed of people from three to five startups, one or two investors, and a service provider.</p><p>That arrangement helps founders build relationships, which is the summit organizers’ priority, Wong says. Investors at past events have included <a href="https://i3.ventures/" rel="noopener noreferrer" target="_blank">i3 Ventures</a>, <a href="https://monozukuri.vc/" rel="noopener noreferrer" target="_blank">Monozukuri Ventures</a>, and <a href="https://www.tsvcap.com/" rel="noopener noreferrer" target="_blank">TSV Capital</a>.</p><p class="pull-quote">“The connection with the community was fantastic, especially investors and founders in robotics.” <strong>—Mark Boysen, founder of Naware</strong></p><p>Startups present their pitch, which a number of investors evaluate before ranking the business plan and product. The top 10 startups pitch their business to all the investors.</p><p>On the second day, the startup founders participate in a half-day engineering design–to–manufacturing workshop, at which manufacturing engineers teach them how to navigate the process and meet regulations.</p><p>In an exhibition area, participants can see demonstrations from the startups and connect with service providers.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A woman standing next to a presentation screen while speaking to small seated groups during a professional workshop." class="rm-shortcode" data-rm-shortcode-id="9df606a8e1cf9a9702d0c39942224f08" data-rm-shortcode-name="rebelmouse-image" id="5c118" loading="lazy" src="https://spectrum.ieee.org/media-library/a-woman-standing-next-to-a-presentation-screen-while-speaking-to-small-seated-groups-during-a-professional-workshop.jpg?id=65559964&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">The 2025 event’s half-day engineering design–to–manufacturing workshop was led by Liz Taylor, president of DOER Marine. 
The company manufactures marine equipment.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Larissa Abi Nakhle/IEEE</small></p><h2>Positive feedback from attendees</h2><p>In a survey of past summit attendees, startup founders said the event connected them not only with investors but also with other entrepreneurs having similar struggles.</p><p>“The connection with the community was fantastic, especially investors and founders in robotics,” said <a href="https://www.linkedin.com/in/boysen1/" target="_blank">Mark Boysen</a>, who founded <a href="https://www.linkedin.com/company/naware/about/" target="_blank">Naware</a>. The company, based in Edina, Minn., developed a robot that uses AI to detect and remove weeds from golf courses, parks, and lawns.</p><p>“I loved getting the investors’ perspectives and understanding what they’re looking for,” Boysen said.</p><p><a href="https://www.linkedin.com/in/jeffrey-cook-9501114b/" rel="noopener noreferrer" target="_blank">Jeffrey Cook</a>, who attended a summit in 2024, said he met “a lot of great contacts and saw what the hard tech venture climate is like.”</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="7f6223c19ea1d3522ce4f0fcb46846f1" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/74OJ6CTJ7xE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">Attendees of the Hard Tech Venture Summit spend the first day networking and presenting their pitch to investors.</small> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">IEEE Entrepreneurship</small> </p><p>“Those in the community would benefit from coming to the summit,” said Cook, who founded <a href="https://www.linkedin.com/company/gigantor-technologies-inc/" rel="noopener noreferrer" target="_blank">Gigantor Technologies</a> in Melbourne Beach, Fla. It develops hardware systems for AI-powered devices.</p><p>More than 90 percent of attendees at the 2025 event in San Francisco said they would highly recommend the summit to others, according to a survey.</p><p>Investors and service providers also have found the events successful.</p><p><a href="https://www.linkedin.com/in/ji-ke" rel="noopener noreferrer" target="_blank">Ji Ke</a>, a partner and the chief technology officer of deep tech VC firm <a href="https://sosv.com/" rel="noopener noreferrer" target="_blank">SOSV</a>, attended the 2025 summit.</p><p>“I met a lot of young entrepreneurs tackling some big challenges,” he said. “This is one of the best events to meet some very-early-stage companies.”</p><h2>Making important connections in hard tech</h2><p>Startup founders who want to attend a summit must apply. <a href="https://entrepreneurship.ieee.org/venturesummits" rel="noopener noreferrer" target="_blank">Applications for this year’s events are open</a>. Participants must be founders of preseed, seed, or Series A startups.</p><p>Preseed founders are seeking small investments to get their businesses off the ground. Those in the seed stage have already secured funding from their first investor. Series A startups have obtained funding and are developing their product.</p><p>Applicants are reviewed by a committee of investors to ensure the startups would be a good fit. 
Those who are approved are matched with investors and service providers based on their specialty.</p><p>“The journey for a hard tech startup is very long and arduous,” Wong says. “Founders need to meet as many investors as possible and other people who support hard tech systems so that they’re able to reach out to them for advice or help.”</p><p>Those interested in learning more about an upcoming event can send a request to <a href="mailto:entrepreneurship@ieee.org" rel="noopener noreferrer" target="_blank">entrepreneurship@ieee.org</a>.</p>]]></description><pubDate>Thu, 16 Apr 2026 18:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/ieee-entrepreneurship-hardware-startups-investors</guid><category>Ieee-news</category><category>Hard-tech</category><category>Startups</category><category>Ieee-entrepreneurship</category><category>Entrepreneurs</category><category>Careers</category><category>Type-ti</category><dc:creator>Joanna Goodrich</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/groups-of-people-seated-together-at-several-tables-inside-of-a-large-meeting-hall.jpg?id=65559941&amp;width=980"></media:content></item><item><title>Stealth Signals Are Bypassing Iran’s Internet Blackout</title><link>https://spectrum.ieee.org/iran-internet-blackout-satellite-tv</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.png?id=65716479&width=1245&height=700&coordinates=0%2C700%2C0%2C701"/><br/><br/><p><strong>On 8 January 2026, </strong>the Iranian government imposed a near-total communications shutdown. It was the country’s first full information blackout: For weeks, the internet was off across all provinces while services including the government-run intranet, VPNs, text messaging, mobile calls, and even landlines were severely throttled. It was an unprecedented lockdown that left more than <a href="https://www.chathamhouse.org/2026/01/irans-internet-shutdown-signals-new-stage-digital-isolation" rel="noopener noreferrer" target="_blank">90 million people</a> cut off not only from the world, but from one another.</p><div class="rm-embed embed-media"><iframe height="110px" id="noa-web-audio-player" src="https://embed-player.newsoveraudio.com/v4?key=q5m19e&id=https://spectrum.ieee.org/iran-internet-blackout-satellite-tv&bgColor=F5F5F5&color=1b1b1c&playColor=1b1b1c&progressBgColor=F5F5F5&progressBorderColor=bdbbbb&titleColor=1b1b1c&timeColor=1b1b1c&speedColor=1b1b1c&noaLinkColor=556B7D&noaLinkHighlightColor=FF4B00&feedbackButton=true" style="border: none" width="100%"></iframe></div><p>Since then, connectivity has never fully returned. Following <a href="https://en.wikipedia.org/wiki/2026_Iran_war" rel="noopener noreferrer" target="_blank">U.S. and Israeli airstrikes</a> in late February, Iran again imposed near-total restrictions, and people inside the country again saw global information flows dry up.</p><p>The original January shutdown came amid nationwide protests over the deepening economic crisis and political repression, in which millions of people chanted antigovernment slogans in the streets. While Iranian protests have become frequent in recent years, this was one of the most significant uprisings since the Islamic Revolution in 1979. The government responded quickly and brutally. One report put the death toll at <a href="https://www.en-hrana.org/the-crimson-winter-a-50-day-record-of-irans-2025-2026-nationwide-protests/" rel="noopener noreferrer" target="_blank">more than 7,000 confirmed deaths</a> and more than 11,000 under investigation. Many sources believe the death toll could exceed 30,000.</p><p>Thirteen days into the January shutdown, we at <a href="https://www.netfreedompioneers.org/" rel="noopener noreferrer" target="_blank">NetFreedom Pioneers</a> (NFP) turned to a system we had built for exactly this kind of moment—one that sends files over ordinary satellite TV signals. During the national information vacuum, our technology, called <a href="https://www.netfreedompioneers.org/toosheh-datacasting-technology/" rel="noopener noreferrer" target="_blank">Toosheh</a>, delivered real-time updates into Iran, offering a lifeline to millions starved of trusted information.</p><h2>How Iran Censors the Internet<br/></h2><p>I joined NetFreedom Pioneers, a nonprofit focused on anticensorship technology, in 2014. Censorship in <a href="https://spectrum.ieee.org/tag/iran" target="_blank">Iran</a> was a defining feature of my youth in the 1990s. 
After the Islamic Revolution, most Iranians began to lead double lives—one at home, where they could drink, dance, and choose their clothing, and another in public, where everyone had to comply with stifling government laws.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Photo of a helmeted soldier with a machine gun standing in front of an Iranian flag and cell tower." class="rm-shortcode" data-rm-shortcode-id="ef533f84cc5eb097a4cfe78e30b2984b" data-rm-shortcode-name="rebelmouse-image" id="7a368" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-a-helmeted-soldier-with-a-machine-gun-standing-in-front-of-an-iranian-flag-and-cell-tower.jpg?id=65520617&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Iran’s internet infrastructure is more centralized than in other parts of the world, making it easier for the government to restrict the flow of information. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Morteza Nikoubazl/NurPhoto/Getty Images</small></p><p>My first experience with secret communications was when I was five and living in the small city of Fasa in southern Iran. My uncle brought home a satellite dish—dangerously illegal at the time—that allowed us to tune into 12 satellite channels. My favorite was Cartoon Network. Then, during my teenage years, this same uncle introduced me to the internet through dial-up modems. I remember using Yahoo Mail with its 4 megabytes of storage, reading news from around the world, and learning about the Chandra X-ray telescope from NASA’s website.</p><p><span>That openness didn’t last. As internet use spread in the early 2000s, the Iranian government began reshaping the network itself. Unlike the highly distributed networks in the United States or Europe, where thousands of providers exchange traffic across many independent routes, Iran’s connection to the global internet is relatively centralized. Most international traffic passes through a small number of gateways controlled by state-linked telecom operators. That architecture gives authorities unusual leverage: By restricting or withdrawing those connections, they can sharply reduce the country’s access to the outside world.</span></p><p>Over the past decade, Iran has expanded this control through what it calls the <a href="https://en.wikipedia.org/wiki/National_Information_Network" target="_blank">National Information Network</a>, a domestically routed system designed to keep data inside the country whenever possible. Many government services, banking systems, and local platforms are hosted on this internal network. During periods of unrest, access to the global internet can be throttled or cut off while portions of this domestic network continue to function.</p><p>The government began its censorship campaign by redirecting or blocking websites. As internet use grew, it adopted more sophisticated approaches. For example, the <a href="https://en.wikipedia.org/wiki/Telecommunication_Company_of_Iran" target="_blank">Telecommunication Company of Iran</a> uses a technique called <a href="https://www.fortinet.com/resources/cyberglossary/dpi-deep-packet-inspection" target="_blank">deep packet inspection</a> to analyze the content of data packets in real time. 
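</p><p>In outline, DPI is pattern matching on packet contents at line rate. The toy sketch below flags any TCP payload containing a blocked byte pattern; it is purely illustrative, not the Iranian system’s actual code, and the blocklist, the use of the Python library Scapy, and the report-only behavior are all assumptions:</p><pre><code># dpi_sketch.py -- toy deep-packet-inspection matcher (illustrative only).
# Assumes Scapy (pip install scapy) and enough privileges to capture traffic.
from scapy.all import IP, TCP, Raw, sniff

BLOCKED_PATTERNS = [b"forbidden.example", b"vpn-handshake-magic"]  # hypothetical signatures

def inspect(pkt):
    """Flag any TCP packet whose payload contains a blocked byte pattern."""
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        for pattern in BLOCKED_PATTERNS:
            if pattern in payload:
                # A real middlebox would drop the packet or reset the flow;
                # this sketch only reports the match.
                print(f"match: {pattern!r} from {pkt[IP].src} to {pkt[IP].dst}")

sniff(filter="tcp", prn=inspect, store=False)
</code></pre><p>Production systems add full protocol parsers, for example reading the server name out of a TLS handshake, and act on matches at network speed. </p><p>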
This method enables it to identify and block specific types of traffic, such as VPN connections, messaging apps, social media platforms, and banned websites.</p><h2>The Stealth of Satellite Transmissions<br/></h2><p>Toosheh’s communication workaround builds on a history of satellite TV adoption in Middle Eastern and North African countries. By the early 2000s, satellite dishes were common in Iran; today the majority of the country’s households have access to satellite TV despite its official prohibition.</p><p>Unlike subscription services such as DirecTV and Dish Network, “free-to-air” satellite TV broadcasts are unencrypted and can be received by anyone with a dish and receiver—no subscription required. Because the signals are open, users can also capture and store the data they carry, rather than simply watching it live. Tech-savvy people learned that they could use a digital video broadcasting (DVB) card—a piece of hardware that connects to a computer and tunes into satellite frequencies—to transform a personal computer into a satellite receiver. This way, they could watch and store media locally as well as download data from dedicated channels.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Photo of satellite dishes adorning the side of an apartment building." class="rm-shortcode" data-rm-shortcode-id="a558326e8ca2bd5c645e392fb0166b58" data-rm-shortcode-name="rebelmouse-image" id="577d2" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-satellite-dishes-adorning-the-side-of-an-apartment-building.jpg?id=65520620&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Many Iranian citizens have free-to-air satellite dishes, like the ones on this apartment building in Tehran, and can thus download Toosheh transmissions, giving them a lifeline during internet blackouts.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Morteza Nikoubazl/NurPhoto/Getty Images</small></p><p>Toosheh, a Persian word that translates to “knapsack,” is the brainchild of <a href="https://x.com/mehdiy_fa" target="_blank">Mehdi Yahyanejad</a>, an Iranian-American technologist and entrepreneur. Yahyanejad cofounded NetFreedom Pioneers in 2012. He proposed that the satellite-computer connections enabled by a DVB card could be re-created in software, eliminating the need for specialized hardware. He added a simple digital interface to the software to make it easy for anyone to use. The next breakthrough came when the NFP team developed a new transfer protocol that tricks ordinary satellite receivers into downloading data alongside audio and video content. Thus, Toosheh was born.</p><p>Satellite TV uses a container format called an <a href="https://en.wikipedia.org/wiki/MPEG_transport_stream" target="_blank">MPEG transport stream</a> that allows multiple audio, video, or data layers to be packaged into a single stream file. When you tune in to a satellite channel and select an audio option or closed captions, you’re accessing data stored in different parts of this stream. The NFP team’s insight was that, by piggybacking on one of these layers, Toosheh could send an MPEG stream that included documents, videos, and more.</p>
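<p>The encapsulation itself requires surprisingly little machinery. The Python sketch below shows the core of what a transport-stream packetizer does: slice a file into fixed 188-byte TS packets under a chosen packet identifier (PID). It is a simplification, not NFP’s actual encoder, which is not public; the PID value, the padding, and the file name are assumptions, and real streams add program tables and timing information:</p><pre><code># ts_pack.py -- toy MPEG transport-stream encapsulator (illustrative only).

PACKET_SIZE = 188   # every MPEG-TS packet is exactly 188 bytes
SYNC_BYTE = 0x47    # fixed sync byte that starts every packet
DATA_PID = 0x0300   # hypothetical 13-bit packet identifier for the data layer

def packetize(payload: bytes, pid: int = DATA_PID) -> list:
    """Slice a file into TS packets: 4-byte header + 184-byte body."""
    packets = []
    chunk_size = PACKET_SIZE - 4
    for i, offset in enumerate(range(0, len(payload), chunk_size)):
        chunk = payload[offset:offset + chunk_size].ljust(chunk_size, b"\xff")
        header = bytes([
            SYNC_BYTE,
            (pid >> 8) & 0x1F,   # top 5 bits of the PID
            pid & 0xFF,          # bottom 8 bits of the PID
            0x10 | (i & 0x0F),   # payload-only flag + 4-bit continuity counter
        ])
        packets.append(header + chunk)
    return packets

bundle = open("news_bundle.zip", "rb").read()  # hypothetical curated bundle
ts_stream = b"".join(packetize(bundle))        # ready to hand to the uplink
</code></pre><p>Because every packet looks like any other piece of a TV multiplex, the receiver simply files the data away by PID, exactly as it would a second audio track.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="An illustration of an 8 step process for sending digital files via satellite TV signals." 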
class="rm-shortcode" data-rm-shortcode-id="500fc02c0c38f890606e42dec590ae8f" data-rm-shortcode-name="rebelmouse-image" id="371ea" loading="lazy" src="https://spectrum.ieee.org/media-library/an-illustration-of-an-8-step-process-for-sending-digital-files-via-satellite-tv-signals.png?id=65521138&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">HOW TOOSHEH WORKS: At NetFreedom Pioneers, content curators pull together files—news articles, videos, audio, and software [1]. Toosheh’s encoder software [2] compresses the files into a bundle, in .ts format, creating an MPEG transport stream [3]. From there, it’s uploaded to a server for transmission [4] via a free-to-air TV channel on a Yahsat satellite that’s positioned over the Middle East to provide regional coverage [5]. Satellite receivers [6] directly capture the data streams, which are downloaded to computers, smartphones, and other devices, and decoded by Toosheh software [8].</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Chris Philpot</small></p><p>A satellite receiver can’t tell the difference between our data and normal satellite audio and video data since it only “sees” the MPEG streams, not what’s encoded on them. This means the data can be downloaded and read, watched, and saved on local devices such as computers, smartphones, or storage devices. What’s more, the system is entirely private: No one can detect whether someone has received data through Toosheh; there are no traceable logs of user activity.</p><p>Toosheh doesn’t provide internet access, but rather delivers curated data through satellite technology. The fundamental distinction lies in the way users interact with the system. Unlike traditional internet services, where you type a request into your browser and receive data in response, Toosheh operates more like a combination of radio and television, presenting information in a magazine-like format. Users don’t make requests; instead, they receive 1 to 5 gigabytes of prepackaged, carefully selected data.</p><p class="pull-quote"><span>Access to information is not only about news or politics, but about exposure to possibilities.  </span></p><p>During this year’s internet blackout, we distributed official statements from Iranian opposition leader Crown Prince Reza Pahlavi and the U.S. government. We provided first-aid tutorials for medics and injured protesters. We sent uncensored news reports from BBC Persian, Iran International, IranWire, VOA Farsi, and others. We also shared critical software packages including anticensorship and antisurveillance tools, along with how-to guides to help people securely connect to Starlink satellite terminals, allowing them to stay protected and anonymous as they sent their own communications.</p><h2>How to Combat Signal Interference<br/></h2><p>Because Toosheh relies on one-way satellite broadcasts, it evades the usual tactics governments use to block internet access. However, it remains vulnerable to <a href="https://spectrum.ieee.org/satellite-jamming" target="_blank">satellite signal jamming</a>.</p><p>The Iranian government is notorious for deploying signal jamming, especially in larger cities. 
In 2009, the government <a href="https://www.dw.com/fa-ir/%D9%86%D8%A7%D8%AA%D9%88%D8%A7%D9%86%DB%8C-%D8%AF%D8%B1-%D9%85%D9%82%D8%A7%D8%A8%D9%84-%D8%A7%D9%85%D9%88%D8%A7%D8%AC-%D9%BE%D8%A7%D8%B1%D8%A7%D8%B2%DB%8C%D8%AA-%D8%A7%D8%B2-%D8%AA%D9%87%D8%B1%D8%A7%D9%86/a-5417209" target="_blank">used uplink interference</a>, which attacks the satellite in orbit by beaming strong noise at the frequency of the satellite’s receiver. This makes it impossible for the satellite to distinguish the signal it’s supposed to receive from the noise. However, because this type of attack temporarily disables the entire satellite, Iran was threatened with international <a href="https://www.dw.com/fa-ir/%D8%AA%D8%B4%D8%AF%DB%8C%D8%AF-%D8%A7%D9%86%D8%AA%D9%82%D8%A7%D8%AF%D9%87%D8%A7-%D8%A8%D9%87-%D8%A7%D8%B1%D8%B3%D8%A7%D9%84-%D9%BE%D8%A7%D8%B1%D8%A7%D8%B2%DB%8C%D8%AA-%D8%A7%D8%B2-%D8%B3%D9%88%DB%8C-%D8%A7%DB%8C%D8%B1%D8%A7%D9%86/a-5382663" target="_blank">sanctions</a> and stopped using the method in 2012.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A chart displayed on a cellphone shows internet connectivity in Iran dropped from almost 100% to 0% on 9 January 2026." class="rm-shortcode" data-rm-shortcode-id="c5f3ef2e60cfa653b7c461cda6d68e0f" data-rm-shortcode-name="rebelmouse-image" id="c778a" loading="lazy" src="https://spectrum.ieee.org/media-library/a-chart-displayed-on-a-cellphone-shows-internet-connectivity-in-iran-dropped-from-almost-100-to-0-on-9-january-2026.jpg?id=65520652&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">A graph of network connectivity in Iran shows that on 9 January 2026, internet access dropped from nearly 100 percent to zero. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Samuel Boivin/NurPhoto/Getty Images</small></p><p>The current method, called terrestrial jamming, uses antennas installed at higher elevations than the surrounding buildings to beam strong noise over a specific area in the frequency range of household receivers. This attack keeps some packets from arriving and corrupts others, effectively jamming the transmission. But it’s short-range and requires significant power, so it’s impractical to implement nationwide. There are always people somewhere who can still watch TV, download from Toosheh, or tune into a satellite radio despite the jamming. Even so, we wanted a workaround that would keep our transmissions broadly accessible.</p><p>NFP’s solution was to add redundancy, similar in principle to a data-storage technique called RAID (redundant array of independent disks). Instead of sending each piece of data once, we send extra information that allows missing or corrupted packets to be reconstructed. Under normal circumstances, we often use 5 percent of our bandwidth for this redundancy. During periods of active jamming, we increase that to as much as 25 to 30 percent, improving the chances that users can recover complete files despite interference.</p>
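<p>In spirit, the scheme works like the sketch below: a minimal XOR-parity code that appends one parity packet to every group of data packets, letting a receiver rebuild any single lost packet per group. NFP’s production code is not public, and real broadcast systems use stronger erasure codes such as Reed-Solomon; the group size here is an assumption for illustration:</p><pre><code># parity_sketch.py -- toy XOR-parity redundancy (illustrative only).

GROUP = 4  # hypothetical: 4 data packets + 1 parity packet = 20% overhead

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(packets):
    """After every GROUP data packets, append one packet XOR-ing them all."""
    out = []
    for i in range(0, len(packets), GROUP):
        group = packets[i:i + GROUP]
        parity = group[0]
        for pkt in group[1:]:
            parity = xor_bytes(parity, pkt)
        out.extend(group)
        out.append(parity)
    return out

def recover(group, parity):
    """Rebuild at most one missing packet (marked None) from the parity."""
    missing = [i for i, pkt in enumerate(group) if pkt is None]
    if len(missing) > 1:
        raise ValueError("XOR parity can rebuild only one lost packet per group")
    if missing:
        rebuilt = parity
        for pkt in group:
            if pkt is not None:
                rebuilt = xor_bytes(rebuilt, pkt)
        group[missing[0]] = rebuilt
    return group
</code></pre><p>In this toy scheme, moving from 5 percent to 30 percent redundancy is just a matter of shrinking the group: one parity packet per roughly 20 data packets versus one per every 3 or 4.</p><h2>From Crisis Response to Public Access<br/></h2><p>Toosheh initially came online in 2015 in Iran and Afghanistan. Its full potential, however, was first realized during the 2019 protests in Iran, which saw the most widespread internet shutdown prior to the blackout this year. 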
<a href="https://www.wired.com/story/iran-news-internet-shutdown/" target="_blank"><em><em>Wired</em></em></a> called the 2019 shutdown “the most severe disconnection” tracked by <a href="https://netblocks.org/" target="_blank">NetBlocks</a> in any country in terms of its “technical complexity and breadth.” Our technology helped thousands of people stay informed. We sent crucial local updates, legal-aid guides, digital security tools, and independent news to satellite receivers all over the country, seeing a sixfold increase in our user base.</p><p>When that wave of protests subsided, the government allowed some communication services to return. People were again able to access the free internet using VPNs and other antifilter software that allowed them to bypass restrictions. Toosheh then became a public access point for news, educational material, and entertainment beyond government filtering.</p><p>Toosheh’s impact is often personal. A traveling teacher in western Iran told NFP that he regularly distributed Toosheh files to students in remote villages. One package included footage of female athletes competing in the Olympic Games, something never broadcast in Iran. For one young girl, it was the first time she realized women could compete professionally in sports. That moment underscores a broader truth: Access to information is not only about news or politics, but about exposure to possibilities.</p><h2>The Cost of Toosheh<br/></h2><p>Unlike internet-based systems, Toosheh’s operational cost remains constant regardless of the number of users. A single TV satellite in geostationary earth orbit, deployed and maintained by an international company such as Eutelsat, can broadcast to an entire continent with no increase in cost to audiences. What’s more, the startup cost for users isn’t high: A satellite dish and receiver in Iran costs less than US $50, which is affordable to many. And it costs nothing for people to use Toosheh’s service and receive its files.</p><p class="pull-quote"><span>We aim not just to build a tool for censorship circumvention, but to redefine access itself. </span></p><p>However, operating the service is costly: NetFreedom Pioneers pays tens of thousands of dollars a month for satellite bandwidth. We had received funding from the U.S. State Department, but in August of 2025, that funding ended, forcing us to suspend services in Iran.</p><p>Then the December protests happened, and broadcasting to Iran became an urgent priority. To turn Toosheh back on, we needed roughly $50,000 a month. With the support of a handful of private donors, we were able to meet these costs and sustain operations in Iran for a few months, though our future there and elsewhere is uncertain.</p><h2>Satellites Against Censorship<br/></h2><p>Toosheh’s revival in Iran came alongside NFP’s ongoing support for deployments of Starlink, a satellite internet service that allows users to connect directly to satellites rather than relying on domestic networks, which the government can shut down. Unlike Toosheh’s one-way broadcasts, <a href="https://spectrum.ieee.org/tag/starlink" target="_blank">Starlink</a> provides full two-way internet access, enabling users to send messages, upload videos, and communicate with the outside world.</p><p>In 2022, we started gathering <a href="https://www.gofundme.com/f/urgent-help-deliver-starlink-and-vpn-access-for-freedom" target="_blank">donations</a> to buy Starlink terminals for Iran. 
We have delivered more than 300 of the <a href="https://www.theguardian.com/world/2026/jan/13/ecosystem-smuggled-tech-iran-last-link-outside-world-internet" target="_blank">roughly 50,000</a> terminals now in the country, enabling citizens to send encrypted updates and videos to us from inside Iran. Because the technology is banned by the government, access remains limited and carries risk; Iranian authorities have recently arrested Starlink users and sellers. And unlike Toosheh’s receive-only broadcasts, Starlink terminals transmit signals back to orbit, creating a radio footprint that can potentially be detected.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A photo of a laptop screen says the user is offline." class="rm-shortcode" data-rm-shortcode-id="2c0caa05d5589d7d25beeb8342db442e" data-rm-shortcode-name="rebelmouse-image" id="103c7" loading="lazy" src="https://spectrum.ieee.org/media-library/a-photo-of-a-laptop-screen-says-the-user-is-offline.png?id=65521782&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The internet shutdown in Iran continued after the attacks by Israel and the United States began in late February, preventing Iranians from communicating with the outside world and with one another.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Fatemeh Bahrami/Anadolu/Getty Images</small></p><p>Looking ahead, we envision Toosheh becoming a foundational part of global digital resilience. It is uncensored, untraceable, and resistant to government shutdowns. Because Toosheh is downlink only, its value can be hard to explain to those living in the free world, accustomed to open internet access. Yet people living under censorship have few other choices during a digital blackout.</p><p>Currently, NFP is developing new features such as intelligent content curation and automatic prioritization of data packages based on geographic or situational needs. And we’re experimenting with local sharing tools that allow users who receive Toosheh broadcasts to redistribute those files via Wi-Fi hotspots or other offline networks, which could extend the system’s reach to disaster zones, conflict areas, and climate-impacted regions where infrastructure may be destroyed.</p><p>We’re also looking at other use cases. Following the Taliban’s return to power in Afghanistan, NetFreedom Pioneers designed a satellite-based system to deliver educational materials. Our goal is to enable private, large-scale distribution of coursework to anyone—including the girls who are banned from Afghanistan’s schools. The system is technically ready but has yet to secure funding for deployment.</p><p>We aim not just to build a tool for censorship circumvention, but to redefine access itself. Whether in an Iranian city under surveillance, a Guatemalan village without internet, or a refugee camp in East Africa, Toosheh offers a powerful and practical model for delivering vital information without relying on vulnerable or expensive networks.</p><p>Toosheh is a reminder that innovation doesn’t have to mean complexity. 
Sometimes, the most transformative ideas are the simplest, like delivering data through the sky, quietly and affordably, into the hands of those who need it most.<span class="ieee-end-mark"></span></p><p><em>This article appears in the May 2026 print issue as “The Stealth Signals Bypassing Iran’s Internet Blackout.”</em></p>]]></description><pubDate>Wed, 15 Apr 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/iran-internet-blackout-satellite-tv</guid><category>Satellite-communications</category><category>Censorship</category><category>Iran</category><category>Protests</category><category>Democracy</category><category>Internet-shutdowns</category><dc:creator>Evan Alireza Firoozi</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/image.png?id=65716479&amp;width=980"></media:content></item><item><title>Crypto Faces Increased Threat From Quantum Attacks</title><link>https://spectrum.ieee.org/quantum-safe-crypto</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/abstract-pixel-art-resembling-a-padlock-and-token.jpg?id=65520763&width=1245&height=700&coordinates=0%2C156%2C0%2C157"/><br/><br/><p>The <a href="https://spectrum.ieee.org/post-quantum-cryptography-standards-nist" target="_self">race</a> to transition online security protocols to ones that can’t be cracked by a quantum computer is already on. The algorithms that are commonly used today to protect data online—<a href="https://en.wikipedia.org/wiki/RSA_cryptosystem" rel="noopener noreferrer" target="_blank">RSA</a> and <a href="https://en.wikipedia.org/wiki/Elliptic-curve_cryptography" rel="noopener noreferrer" target="_blank">elliptic curve cryptography</a>—are uncrackable by supercomputers, but a large enough quantum computer would make quick work of them. There are <a href="https://spectrum.ieee.org/post-quantum-cryptography-2668949802" target="_self">algorithms</a> secure enough to be out of reach for both classical and future quantum machines, called post-quantum cryptography, but transitioning to these is a <a href="https://spectrum.ieee.org/post-quantum-cryptography-2667758178" target="_self">work in progress</a>. </p><p>Late last month, the team at <a href="https://quantumai.google/" rel="noopener noreferrer" target="_blank">Google Quantum AI</a> published a <a href="https://arxiv.org/abs/2603.28846" rel="noopener noreferrer" target="_blank">whitepaper</a> that added significant urgency to this race. In it, the team showed that the size of a quantum computer that would pose a cryptographic threat is approximately 20 times <a href="https://research.google/blog/safeguarding-cryptocurrency-by-disclosing-quantum-vulnerabilities-responsibly/" rel="noopener noreferrer" target="_blank">smaller</a> than previously thought. This is still far from accessible to the quantum computers that exist today: The largest machines currently consist of approximately 1,000 quantum bits, or qubits, and the whitepaper estimated that about 500 times as much is needed. Nonetheless, this shortens the timeline to switch over to post-quantum algorithms. </p><p>The news had a surprising beneficiary: Obscure cryptocurrency <a href="https://algorand.co/" rel="noopener noreferrer" target="_blank">Algorand</a> <a href="https://www.indexbox.io/blog/algorand-price-surges-44-after-google-research-paper-citation/" rel="noopener noreferrer" target="_blank">jumped</a> 44% in price in response. The whitepaper called out Algorand specifically for implementing post-quantum cryptography on their blockchain. We caught up with Algorand’s chief scientific officer and professor of computer science and engineering at the University of Michigan, <a href="https://web.eecs.umich.edu/~cpeikert/" rel="noopener noreferrer" target="_blank">Chris Peikert</a>, to understand how this announcement is impacting cryptography, why cryptocurrencies are feeling the effects, and what the future might hold. 
Peikert’s early work on a particular type of algorithm known as <a href="https://en.wikipedia.org/wiki/Lattice-based_cryptography" rel="noopener noreferrer" target="_blank">lattice cryptography</a> underlies most post-quantum security today.</p><p><strong>IEEE Spectrum:</strong><span> What is the significance of this Google Quantum AI whitepaper?</span></p><p><strong>Peikert:</strong> The upshot of this paper is that it shows that a quantum computer would be able to break some of the cryptography that is most widely used, especially in blockchains and cryptocurrencies, with far fewer resources than had previously been established. Those resources include the time that it would take to do so and the number of qubits (or quantum bits) that it would have to use.</p><p>This cryptography is central not just to cryptocurrencies, but more broadly to security on the internet. It is also used for secure web connections between web browsers and web servers. Versions of elliptic curve cryptography are used in national security systems and military encryption. It’s very prevalent and pervasive in all modern networks and protocols.</p><p>And not only was this paper improving the algorithms, but there was also a concurrent paper showing that the hardware itself was substantially improved. The claim here was that the number of physical qubits needed to achieve a certain kind of logical qubit was also greatly reduced. These two kinds of improvements are compounding upon each other. It’s kind of a win-win situation from the quantum computing perspective, but a lose-lose situation for cryptography.</p><p><strong>IEEE Spectrum: </strong>What do Google Quantum AI’s findings mean for cryptocurrencies and the broader cybersecurity ecosystem?</p><p><strong>Peikert:</strong> There’s always been this looming threat in the distance of quantum computers breaking a large fraction of the cryptography that’s used throughout the cryptocurrency ecosystem. And I think what this paper did was sound the loudest alarm yet that these kinds of quantum attacks might not be as far off as some have suspected, or hoped, in recent years. It’s caused a reevaluation across the industry, moving up the timeline for when quantum computers might be capable of breaking this cryptography.</p><p>When we think about the timelines and when it’s important to have completed these transitions [to post-quantum cryptography], we also need to factor in the unknown improvements that we should expect to see in the coming years. The science of quantum computing will not stay static, and there will be these further breakthroughs. We can’t say exactly what they will be or when they will come, but you can bet that they will be coming.</p><p><strong>IEEE Spectrum:</strong> What is your guess on whether or when quantum computers will be able to break cryptography in the real world?</p><p><strong>Peikert:</strong> Instead of thinking about a specific date when we expect them to come, we have to think about the probabilities and the risks as time goes on. There have been huge breakthrough developments, including not only this paper, but also <a href="https://research.google/blog/making-quantum-error-correction-work/" target="_blank">some</a> last year. But even with these, I think that the chance of a cryptographic attack by quantum computers being successful in the next three years is extremely low, maybe less than a percent. 
But as you get out to five, six, or 10 years, one has to seriously consider a probability of maybe 5 percent or 10 percent or more. So it’s still rather small, but significant enough that we have to worry about the risk, because the value that is protected by this kind of cryptography is really enormous. </p><p>The U.S. government has put 2035 as its target for migrating all of the national security systems to post-quantum cryptography. That seems like a prudent date, given the time it takes to upgrade cryptography. It’s a slow process. It has to be done very deliberately and carefully to make sure that you’re not introducing new vulnerabilities, that you’re not making mistakes, that everything still works properly. So, given the outlook for quantum computers on the horizon, it’s really important that we prepare now, or ideally yesterday, for that kind of transition.</p><p><strong>IEEE Spectrum: </strong>Are there significant roadblocks you see to industrial adoption of post-quantum cryptography going forward?</p><p><strong>Peikert:</strong> Cryptography is very hard to change. We’ve only had one or maybe two major transitions in cryptography since the late 1970s or early 1980s, when the field was first invented. We don’t really have a systematic way of transitioning cryptography. </p><p>An additional challenge is that the performance trade-offs are very different in post-quantum cryptography than they are in the legacy systems. Keys, ciphertexts, and digital signatures are all significantly larger in post-quantum cryptography, but the computations are actually faster, typically. People have optimized cryptography for speed in the past, and we now have very fast post-quantum implementations, but the sizes of the keys are a challenge. </p><p>Especially in blockchain applications, like cryptocurrencies, space on the blockchain is at a premium. So it calls for a reevaluation in many applications of how we integrate the cryptography into the system, and that work is ongoing. And the blockchain ecosystem uses a lot of advanced cryptography, exotic things like zero-knowledge proofs. In many cases, we have rudimentary constructions of these fancy cryptography tools from post-quantum-type mathematics, but they’re not nearly as mature and industry-ready as the legacy systems that have been deployed. It continues to be an important technical challenge to develop post-quantum versions of these very fancy cryptographic schemes that are used in cutting-edge applications.</p><p><strong>IEEE Spectrum: </strong>As an academic cryptography researcher, what attracted you to work with a cryptocurrency, and Algorand in particular?</p><p><strong>Peikert:</strong> My former Ph.D. advisor is <a href="https://en.wikipedia.org/wiki/Silvio_Micali" target="_blank">Silvio Micali</a>, the inventor of Algorand. The system is very elegant. It is a very high-performing blockchain system: It uses very little energy, finalizes transactions quickly, and has a number of other great features. And Silvio appreciated that this quantum threat was real and coming, and in 2021 the team approached me about helping to improve the Algorand protocol at the basic levels to make it more post-quantum secure. 
That was a very exciting opportunity, because it was a difficult engineering and scientific challenge to integrate post-quantum cryptography into all the different technical and cryptographic mechanisms that were underlying the protocol.</p><p><strong>IEEE Spectrum: </strong>What is the current status of post-quantum cryptography in Algorand, and blockchains in general? </p><p><strong>Peikert:</strong> We’ve identified some of the most pressing issues and worked our way through some of them, but it’s a many-faceted problem overall. We started with the integrity of the chain itself, which is the transaction history that everybody has to agree upon. </p><p>Our first major project was developing a system that would add post-quantum security to the history of the chain. We developed a system called <a href="https://dev.algorand.co/concepts/protocol/state-proofs/" rel="noopener noreferrer" target="_blank">state proofs</a> for that, which is a mixture of ordinary post-quantum cryptography and some fancier cryptography: It’s a way of taking a large number of signatures and digesting them down into a much smaller number of signatures, while still being confident that the original signatures actually exist and are properly formed. We also followed it with other papers and projects about adding post-quantum cryptography and security to other aspects of the blockchain in the Algorand ecosystem. </p><p>It’s not a complete project yet. We don’t claim to be fully post-quantum secure. That’s a very challenging target to hit, and there are aspects that we will continue to work on into the near future.</p><p><strong>IEEE Spectrum: </strong>In your view, will we adopt post-quantum cryptography before the risks actually catch up with us? </p><p><strong>Peikert:</strong> I tend to be an optimist about these things. I think that it’s a very good thing that more people in decision-making roles are recognizing that this is an important topic, and that these kinds of migrations have to be done. I think that we can’t be complacent about it, and we can’t kick the can down the road much longer. But I do see that the focus is being put on this important problem, so I’m optimistic that most important systems will eventually have either good mitigations or full migrations in place. </p><p>But it’s a point on the horizon, and we don’t know exactly when it will come. So there is the possibility of a huge breakthrough that leaves us many fewer years than we had hoped, and that we don’t get all the systems upgraded that we would like to have fixed by the time quantum computers arrive.</p>]]></description><pubDate>Wed, 15 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/quantum-safe-crypto</guid><category>Quantum-computing</category><category>Post-quantum-cryptography</category><category>Cryptocurrency</category><category>Lattice-cryptography</category><category>Security-protocols</category><category>Blockchain</category><category>Cryptography</category><dc:creator>Dina Genkina</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/abstract-pixel-art-resembling-a-padlock-and-token.jpg?id=65520763&amp;width=980"></media:content></item><item><title>Sarang Gupta Builds AI Systems With Real-World Impact</title><link>https://spectrum.ieee.org/openai-engineer-sarang-gupta</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-young-adult-indian-man-smiling-with-his-arms-crossed.png?id=65519413&width=1245&height=700&coordinates=0%2C187%2C0%2C188"/><br/><br/><p>Like many engineers, <a href="https://www.linkedin.com/in/sarang-gupta/" rel="noopener noreferrer" target="_blank">Sarang Gupta</a> spent his childhood tinkering with everyday items around the house. From a young age he gravitated to projects that could make a difference in someone’s everyday life.</p><p>When the family’s microwave plug broke, Gupta and his father figured out how to fix it. When a drawer handle started jiggling annoyingly, the youngster made sure it didn’t do so for long.</p><h3>Sarang Gupta</h3><br/><p><strong>Employer</strong></p><p><strong></strong>OpenAI in San Francisco</p><p><strong>Job</strong></p><p><strong></strong>Data science staff member</p><p><strong>Member grade</strong></p><p>Senior member</p><p><strong>Alma maters </strong></p><p><strong></strong>The Hong Kong University of Science and Technology; Columbia</p><p>By age 11, his interest expanded from nuts and bolts to software. He learned <a data-linked-post="2674010559" href="https://spectrum.ieee.org/top-programming-languages-2025" target="_blank">programming languages</a> such as <a href="https://en.wikipedia.org/wiki/BASIC" rel="noopener noreferrer" target="_blank">Basic</a> and <a href="https://en.wikipedia.org/wiki/Logo_(programming_language)" rel="noopener noreferrer" target="_blank">Logo</a> and designed simple programs including one that helped a local restaurant automate online ordering and billing.</p><p>Gupta, an IEEE senior member, brings his mix of curiosity, hands-on problem-solving, and a desire to make things work better to his role as member of the data science staff at <a href="https://openai.com/" rel="noopener noreferrer" target="_blank">OpenAI</a> in San Francisco. He works with the go-to-market (GTM) team to help businesses adopt <a href="https://chatgpt.com/" rel="noopener noreferrer" target="_blank">ChatGPT</a> and other products. He builds data-driven models and systems that support the sales and marketing divisions.</p><p>Gupta says he tries to ensure his work has an impact. When making decisions about his career, he says, he thinks about what AI solutions he can unlock to improve people’s lives.</p><p>“If I were to sum up my overall goal in one sentence,” he says, “it’s that I want AI’s benefits to reach as many people as possible.”</p><h2>Pursuing engineering through a business lens</h2><p>Gupta’s early interest in tinkering and programming led him to choose physics, chemistry, and math as his higher-level subjects at <a href="https://www.cirschool.org/" rel="noopener noreferrer" target="_blank">Chinmaya International Residential School</a>, in Tamil Nadu, India. As part of the high school’s <a href="https://www.ibo.org/" rel="noopener noreferrer" target="_blank">International Baccalaureate</a> chapter, students select three subjects in which to specialize.</p><p>“I was interested in engineering, including the theoretical part of it,” Gupta says, “But I was always more interested in the applications: how to sell that technology or how it ties to the real world.”</p><p>After graduating in 2012, he moved overseas to attend the <a href="https://hkust.edu.hk/" rel="noopener noreferrer" target="_blank">Hong Kong University of Science and Technology</a>. 
The university offered a <a href="https://techmgmt.hkust.edu.hk/" rel="noopener noreferrer" target="_blank">dual bachelor’s program</a> that allowed him to earn one degree in industrial engineering and another in business management in just four years.</p><p>In his spare time, Gupta built a smartphone app that let students upload their class schedules and find classmates to eat lunch with. The app didn’t take off, he says, but he enjoyed developing it. He also launched Pulp Ads, a business that printed advertisements for student groups on tissues and paper napkins, which were distributed in the school’s cafeterias. He made some money, he says, but shuttered the business after about a year.</p><p>After graduating from the university in 2016, he decided to work in Hong Kong’s financial sector and joined <a href="https://www.goldmansachs.com/" rel="noopener noreferrer" target="_blank">Goldman Sachs</a> as an analyst in the bank’s operations division.</p><h2>From finance to process optimization at scale</h2><p>After two parties agree on securities transactions, the bank’s operations division ensures that the trade details are recorded correctly, the securities and payments are ready to transfer, and the transaction settles accurately and on time.</p><p>As an analyst, Gupta was tasked with finding bottlenecks in the bank’s workflows and fixing them. He identified an opportunity to automate trade reconciliation, in which analysts manually compare data across spreadsheets and systems to make sure a transaction’s details are consistent. The process helps ensure financial transactions are recorded accurately and settled correctly.</p><p>Gupta built internal automation tools that pulled trade data from different systems, ran validation checks, and generated reports highlighting any discrepancies.</p><p>“Instead of analysts manually checking large datasets, the tools automatically flagged only the cases that required investigation,” he says. “This helped the team spend less time on repetitive verification tasks and more time resolving complex issues. It was also my first real exposure to how software and data systems could dramatically improve operational workflows.”</p><p class="pull-quote">“Whether it’s helping a person improve a trait like that or driving efficiencies at a business, AI just has so much potential to help. I’m excited to be a little part of that.”</p><p>The experience made him realize he wanted to work more deeply in technology and data-driven systems, he says. In 2018 he decided to return to school to study data science and AI, fields that were just beginning to surge into broader awareness.</p><p>He discovered that <a href="https://www.columbia.edu/" rel="noopener noreferrer" target="_blank">Columbia</a> offered a dedicated master’s degree program in data science with a focus on AI. After being accepted in 2019, he moved to New York City.</p><p>Throughout the program, he gravitated to the applied side of machine learning, taking courses in applied deep learning and neural networks.</p><p>One of his major academic highlights, he says, was a project he did in 2019 with the <a href="https://brown.columbia.edu/" rel="noopener noreferrer" target="_blank">Brown Institute</a>, a joint research lab between Columbia and <a href="https://www.stanford.edu/" rel="noopener noreferrer" target="_blank">Stanford</a> focused on using technology to improve journalism. 
The team worked with <a href="https://www.inquirer.com/" rel="noopener noreferrer" target="_blank"><em>The Philadelphia Inquirer</em></a> to help the newsroom staff better understand their coverage from a geographic and social standpoint. The project highlighted “news deserts”—underserved communities for which the newspaper was not providing much coverage—so the publication could redirect its reporting resources.</p><p>To identify those areas, <a href="https://aclanthology.org/2020.nlpcss-1.17.pdf" rel="noopener noreferrer" target="_blank">Gupta and his team built tools that extracted locations such as</a> street names and neighborhoods from news articles and mapped them to visualize where most of the coverage was concentrated. The <em>Inquirer</em> implemented the tool in several ways, including a new <a href="https://medium.com/the-lenfest-local-lab/how-we-built-a-tool-to-spot-geographic-clusters-and-gaps-in-local-news-e553abe88287" rel="noopener noreferrer" target="_blank">web page that aggregated stories about COVID-19 by county</a>.</p><p>“Journalism was an interesting problem set for me, because I really like to read the news every day,” Gupta says. “It was an opportunity to work with a real newsroom on a problem that felt really impactful for both the business and the local community.”</p><h2>The GenAI inflection point</h2><p>After earning his master’s degree in 2020, Gupta moved to San Francisco to join <a href="https://asana.com/" rel="noopener noreferrer" target="_blank">Asana</a>, the company that developed the work management platform of the same name. He was drawn to the opportunity to work for a relatively small company where he could have end-to-end ownership of projects. He joined the organization as a product data scientist, focusing on A/B testing for new platform features.</p><p>Two years later, a new opportunity emerged: He was asked to lead the launch of Asana Intelligence, an internal machine learning team building AI-powered features into the company’s products.</p><p>“I felt I didn’t have enough experience to be the founding data scientist,” he says. “But I was also really interested in the space, and spinning up a whole machine learning program was an opportunity I couldn’t turn down.”</p><p>The Asana Intelligence team was given six months to build several machine learning–powered features to help customers work more efficiently. They included automatic summaries of project updates, insights about potential risks or delays, and recommendations for next steps.</p><p>The team met that goal and launched several other features, including <a href="https://help.asana.com/s/article/smart-status" target="_blank">Smart Status</a>, an AI tool that analyzes a project’s tasks, deadlines, and activity, then generates a status update.</p><p>“When you finally launch the thing you’ve been working on, and you see the usage go up, it’s exhilarating,” he says. “You feel like that’s what you were building toward: users actually seeing and benefiting from what you made.”</p><p>Gupta and his team also translated that first wave of work into reusable frameworks and documentation to make it easier to create machine learning features at Asana. He and his colleagues filed several <a href="https://patents.google.com/patent/US20250355685A1/" rel="noopener noreferrer" target="_blank">U.S. patents</a>.</p><p>Around the time he took on that role, OpenAI launched ChatGPT. 
The mainstreaming of generative AI and large language models shifted much of his work at Asana from model development to assessing LLMs.</p><p>OpenAI captured the attention of people around the world, including Gupta. In September 2025 he left Asana to join OpenAI’s data science team.</p><p>The transition has been both energizing and humbling, he says. At OpenAI, he works closely with the marketing team to help guide strategic decisions. His work focuses on developing models to understand the efficiency of different marketing channels, to measure what’s driving impact, and to help the company better reach and serve its customers.</p><p>“The pace is very different from my previous work. Things move quickly,” he says. “The industry is extremely competitive, and there’s a strong expectation to deliver fast. It’s been a great learning experience.”</p><p>Gupta says he plans to stay in the AI space. With technology evolving so rapidly, he says, he sees enormous potential for task automation across industries. AI has already transformed his core software engineering work, he says, and it’s helped him enhance areas that aren’t natural strengths.</p><p>“I’m not a good writer, and AI has been huge in helping me frame my words better and <a href="https://spectrum.ieee.org/engineering-communication" target="_blank">present my work more clearly</a>,” he says. “Whether it’s helping a person improve a trait like that or driving efficiencies at a business, AI just has so much potential to help. I’m excited to be a little part of that.”</p><h2>Exploring IEEE publications and connections</h2><p>Gupta has been an IEEE member since 2024, and he values the organization as both a technical resource and a professional network.</p><p>He regularly turns to IEEE publications and the <a href="https://ieeexplore.ieee.org/Xplore/guesthome.jsp" rel="noopener noreferrer" target="_blank">IEEE Xplore Digital Library</a> to read articles that keep him abreast of the evolution of AI, data science, and the engineering profession.</p><p>IEEE’s <a href="https://cis.ieee.org/activities/membership-activities/ieee-member-directory" rel="noopener noreferrer" target="_blank">member directory</a> tools are another valuable resource that he uses often, he says.</p><p>“It’s been a great way to connect with other engineers in the same or similar fields,” he says. “I love sharing and hearing about what folks are working on. It brings me outside of what I’m doing day to day.</p><p>“It inspires me, and it’s something I really enjoy and cherish.”</p>]]></description><pubDate>Tue, 14 Apr 2026 18:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/openai-engineer-sarang-gupta</guid><category>Ieee-member-news</category><category>Openai</category><category>Generative-ai</category><category>Chatgpt</category><category>Careers</category><category>Type-ti</category><dc:creator>Julianne Pepitone</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/a-young-adult-indian-man-smiling-with-his-arms-crossed.png?id=65519413&amp;width=980"></media:content></item><item><title>What It’s Like to Live With an Experimental Brain Implant</title><link>https://spectrum.ieee.org/bci-user-experience</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-close-up-shows-a-man-seated-in-a-wheelchair-attached-to-the-top-of-his-head-are-two-devices-each-with-a-cable-extending-away.jpg?id=65504719&width=1245&height=700&coordinates=0%2C187%2C0%2C188"/><br/><br/><p><strong><span><em></em></span>Scott Imbrie vividly remembers</strong> the first time he used a robotic arm to shake someone’s hand and felt the robotic limb as if it were his own. “I still get goosebumps when I think about that initial contact,” he says. “It’s just unexplainable.” The moment came courtesy of a brain implant: an array of electrodes that let him control a robotic arm and receive tactile sensations back to the brain.</p><div class="rm-embed embed-media"><iframe height="110px" id="noa-web-audio-player" src="https://embed-player.newsoveraudio.com/v4?key=q5m19e&id=https://spectrum.ieee.org/bci-user-experience&bgColor=F5F5F5&color=1b1b1c&playColor=1b1b1c&progressBgColor=F5F5F5&progressBorderColor=bdbbbb&titleColor=1b1b1c&timeColor=1b1b1c&speedColor=1b1b1c&noaLinkColor=556B7D&noaLinkHighlightColor=FF4B00&feedbackButton=true" style="border: none" width="100%"></iframe></div><p><span>Getting there took decades. In 1985, Imbrie had woken up in the hospital after a car accident with a broken neck and a doctor telling him he’d never use his hands or legs again. His response was an expletive, he says—and a decision. “I’m not going to allow someone to tell me what I can and can’t do.” With the determination of a head-strong 22-year-old, Imbrie gradually regained the ability to walk and some limited arm movement. Aware of how unusual his recovery was, the Illinois-native wanted to help others in similar situations and began looking for research projects related to spinal cord injuries. For decades, though, he wasn’t the right fit, until in 2020 he was finally accepted into a </span><a href="https://news.uchicago.edu/story/uchicago-researchers-re-create-sense-touch-and-motor-control-paralyzed-patient" target="_blank">University of Chicago trial</a><span>.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Elderly person in orange sweater sits as robotic arm with black hand extends forward" class="rm-shortcode" data-rm-shortcode-id="e63c60845055b0ac0aaa5b32194b121b" data-rm-shortcode-name="rebelmouse-image" id="11ece" loading="lazy" src="https://spectrum.ieee.org/media-library/elderly-person-in-orange-sweater-sits-as-robotic-arm-with-black-hand-extends-forward.jpg?id=65504759&width=980"/></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Two photos. The first shows a man sitting in a chair with a large robotic arm extending in front of him. The second is a close-up of implants on the surface of a brain.  " class="rm-shortcode" data-rm-shortcode-id="908fc96ae84be7cc9033eadb8be951d9" data-rm-shortcode-name="rebelmouse-image" id="5304e" loading="lazy" src="https://spectrum.ieee.org/media-library/two-photos-the-first-shows-a-man-sitting-in-a-chair-with-a-large-robotic-arm-extending-in-front-of-him-the-second-is-a-close-u.jpg?id=65504756&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Scott Imbrie has shaken hands with a robotic arm controlled by a brain implant. 
The electrodes record neural signals that enable him to move the device and receive tactile feedback. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Top: 60 Minutes/CBS News; Bottom: University of Chicago </small></p><p>Imbrie is part of a rarefied group: More people have gone to space than have received advanced brain-computer interfaces (<a href="https://spectrum.ieee.org/tag/bci" target="_self">BCIs</a>) like his. But a growing number of companies are now attempting to move the devices out of neuroscience labs and into mainstream medical care, where they could help millions of people with paralysis and other neurological conditions. Some companies even hope that BCIs will eventually become a consumer technology.</p><p>None of that will be possible without people like Imbrie. He’s a member of the <a href="https://bcipioneers.org/" target="_blank">BCI Pioneers Coalition</a>, an advocacy group founded in 2018 by <a href="https://bcipioneers.org/" target="_blank">Ian Burkhart</a>, the first quadriplegic to regain hand movement using a brain implant.</p><p>That life-changing experience convinced Burkhart that BCIs will make the leap from lab to real world only if users help shape the technology by sharing their perspectives on what works, what doesn’t, and how the devices fit into daily life. The coalition aims to ensure that companies, clinicians, and regulators hear directly from trial participants.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Two images. The first is a photo of a man sitting in a wheelchair; attached to the top of his head is a device with a cable attached. The second is a medical image showing the location of electrodes in the brain.  " class="rm-shortcode" data-rm-shortcode-id="338aabff57ac096c71e5d462f4959535" data-rm-shortcode-name="rebelmouse-image" id="3e41e" loading="lazy" src="https://spectrum.ieee.org/media-library/two-images-the-first-is-a-photo-of-a-man-sitting-in-a-wheelchair-attached-to-the-top-of-his-head-is-a-device-with-a-cable-atta.jpg?id=65504780&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Ian Burkhart founded the BCI Pioneers Coalition to ensure that companies developing brain implants hear directly from the people using them.  </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Left: Andrew Spear/Redux; Right: Ian Burkhart</small></p><p>The group also serves as a peer-support network for trial participants. That’s crucial, because despite the steady drumbeat of miraculous results from BCI trials, receiving a brain implant comes with significant risks. Surgical complications, such as bleeding or infection in the brain, are possible. Even more concerning is the potential psychological toll if the implant fails to work as expected or if life-changing improvements are eventually withdrawn.</p><p>Researchers spell this out upfront, and many prospective participants are put off, says <a href="https://biologicalsciences.uchicago.edu/faculty/john-downey" target="_blank">John Downey</a>, an assistant professor of neurological surgery at the University of Chicago and the lead on Imbrie’s clinical trial. “I would say, the number of people I talk to about doing it is probably 10 to 20 times the number of people that actually end up doing it,” he says.</p><h2>What Happens in a BCI Trial? 
</h2><p>BCI pioneers arrive at their unique status via a number of paths, including spinal cord injuries, stroke-induced paralysis, and amyotrophic lateral sclerosis (ALS). The implants they receive come from <a href="https://blackrockneurotech.com/" target="_blank">Blackrock Neurotech</a>, <a href="https://neuralink.com/" target="_blank">Neuralink</a>, <a href="https://synchron.com/" target="_blank">Synchron</a>, and other companies, and are being tested for restoring limb function, controlling computers and robotic arms, and even restoring speech.</p><p>Many of the implants record signals from the motor cortex—the part of the brain that controls voluntary movements—to move external devices. Some others target the <a href="https://www.simplypsychology.org/somatosensory-cortex.html" target="_blank">somatosensory cortex</a>, which processes sensory signals from the body, including touch, pain, temperature, and limb position, to re-create tactile sensation.</p><h3>BCI Designs Used by Today’s Pioneers</h3><br/><img alt="Diagram comparing three brain-computer interface implants from Blackrock, Neuralink, Synchron." class="rm-shortcode" data-rm-shortcode-id="75d2979c205ebe19a1ea4e94507973c3" data-rm-shortcode-name="rebelmouse-image" id="4c076" loading="lazy" src="https://spectrum.ieee.org/media-library/diagram-comparing-three-brain-computer-interface-implants-from-blackrock-neuralink-synchron.png?id=65514139&width=980"/><p>Ease of use depends heavily on the application. Restoring function to a user’s own limbs or controlling robotic arms involves the steepest learning curve. In early sessions, participants watch a virtual arm reach for objects while they imagine or attempt the same movement. Researchers record related brain signals and use them to train “decoder” software, which translates neural activity into control signals for a robotic arm or stimulation patterns for the user’s nerves or muscles.</p><p>Paralyzed in a 2010 swimming accident, Burkhart took part in a trial conducted by <a href="https://www.battelle.org/" target="_blank">Battelle Memorial Institute</a> and <a href="https://wexnermedical.osu.edu/" target="_blank">Ohio State University</a> from 2014 to 2021. His implant recorded signals from his motor cortex as he attempted to move his hand, and the system relayed those commands to electrodes in his arm that stimulated the muscles controlling his fingers.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A man seated at a desk has electronics wrapped around his right arm. He’s holding a device shaped like a guitar and looking at a screen showing the fretboard of a guitar. " class="rm-shortcode" data-rm-shortcode-id="2d4609ca465d88f228401cf0e56f91e9" data-rm-shortcode-name="rebelmouse-image" id="6b47b" loading="lazy" src="https://spectrum.ieee.org/media-library/a-man-seated-at-a-desk-has-electronics-wrapped-around-his-right-arm-he-u2019s-holding-a-device-shaped-like-a-guitar-and-looking.jpg?id=65504802&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Ian Burkhart, who is paralyzed from the chest down, received a brain implant that routed neural signals through a computer to his paralyzed muscles, enabling him to play a video game. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Battelle</small></p><p>Getting the system to work seamlessly took time, says Burkhart, and initially required intense concentration. 
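</p><p>Under the hood, the decoder in a setup like Burkhart’s is conceptually simple: a map from neural firing rates to intended movement, fit by regression on the calibration data. The sketch below fits a linear decoder with ridge regression on synthetic stand-in data; real systems often use Kalman filters or neural networks and far messier recordings, so treat this as a cartoon of the idea, with every number assumed:</p><pre><code># decoder_sketch.py -- cartoon of BCI decoder calibration (synthetic data).
# Fit: firing rates (n_samples x n_channels) to intended 2-D velocity.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 2_000, 96  # 96 channels, like a Utah array

# Synthetic calibration data: the participant watches a cursor move
# (true_vel) while we record rates that noisily reflect that intent.
true_vel = rng.uniform(-1, 1, size=(n_samples, 2))
mixing = rng.normal(size=(2, n_channels))
rates = true_vel @ mixing + 0.5 * rng.normal(size=(n_samples, n_channels))

# Ridge regression, closed form: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
X, Y = rates, true_vel
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# At run time, each new vector of firing rates becomes a velocity command.
decoded = X @ W
print("calibration correlation:", np.corrcoef(decoded[:, 0], Y[:, 0])[0, 1])
</code></pre><p>Recalibrating for the neural drift described below amounts to rerunning this fit on fresh data, which is part of why sessions start slowly. </p><p>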
Eventually, he could shift his focus from each individual finger movement to the overall task, allowing him to swipe a credit card, pour from a bottle, and <a href="https://spectrum.ieee.org/brain-implants-and-wearables-let-paralyzed-people-move-again" target="_self">even play <em>Guitar Hero</em></a>.</p><p>Training a decoder is also not a one-and-done process. Systems must be regularly recalibrated to account for “neural drift”—the gradual shift in a person’s neural activity patterns over time. For complex tasks like robotic arm control, researchers may have to essentially train an entirely new decoder before each session, which can take up to an hour.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A man sits in a wheelchair surrounded by screens and electrical equipment. A device is attached to the top of his head, and a wire extends from it. Two other men stand in the room wearing masks.  " class="rm-shortcode" data-rm-shortcode-id="ec5eab3dfd4996ed87bf71eb84333f3d" data-rm-shortcode-name="rebelmouse-image" id="0cba2" loading="lazy" src="https://spectrum.ieee.org/media-library/a-man-sits-in-a-wheelchair-surrounded-by-screens-and-electrical-equipment-a-device-is-attached-to-the-top-of-his-head-and-a-wi.jpg?id=65504805&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Austin Beggin says that testing a BCI is hard work, but he adds that moments like petting his dog make it all worth it.  </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Daniel Lozada/The New York Times/Redux </small></p><p>Even after the system is ready, using the device can be taxing, says <a href="https://www.tiktok.com/@60minutes/video/7215008411992395054" target="_blank">Austin Beggin</a>, who was paralyzed in a swimming accident in 2015 and now participates in a Case Western Reserve University trial <a href="https://www.nytimes.com/2022/12/13/health/elon-musk-brain-implants-paralysis.html" target="_blank">aimed at restoring hand movement.</a> “The mental work of just trying to do something like shaking hands or feeding yourself is 100-fold versus you guys that don’t even think about it,” he says.</p><p>It’s also a serious time commitment. Beggin travels more than 2 hours from his home in Lima, Ohio, to Cleveland for two weeks every month to take part in experiments. All the equipment is set up in the house he stays in, and he typically works with the researchers for 3 to 4 hours a day. The majority of the experiments are not actually task-focused, he says, and instead are aimed at adjusting the control software or better understanding his neural responses to different stimuli.</p><p>But the BCI users say the hard work is worth it. Beyond the hope of restoring lost function, many feel a strong moral obligation to advance a technology that could help others. Beggin compares the pioneers to the early astronauts who laid the groundwork for the lunar landings. “We’re some of the first astronauts just to get shot up for a couple of hours and come back down to earth,” he says.</p><h2>The Emotional Impact of BCIs </h2><p>Speak to BCI early adopters and a pattern emerges: The biggest benefits are often more emotional than practical. Using a robotic arm to feed oneself or control a computer is clearly useful, but many pioneers say the most meaningful moments are the ones the experiment wasn’t even trying to produce. 
Beggin counts shaking his parents’ hands for the first time since his injury and stroking his pet dachshund as among his favorite moments. “That stuff is absolutely incredible,” he says.</p><p>Neuralink participant <a href="https://x.com/neuralink/status/1983263349715734982" target="_blank">Alex Conley</a>, who broke his neck in a car accident in 2021, uses his implant to control both a robotic arm and computers, enabling him to open doors, feed himself, and handle a smartphone. But he says the biggest boost has come from using computer-aided design software.</p><p>A former mechanic, Conley began using the software within days of receiving his implant to design parts that could be fabricated on a 3D printer. He has designed everything from replacement parts for his uncle’s power tools to bumpers for his brother-in-law’s truck. “I was a very big problem solver before my accident, I was able to fix people’s things,” he says. “This gives me that same little burst of joy.”</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Two photos show former U.S. president Barack Obama with a man seated in a wheelchair that has a robotic arm mounted to it. The first photo shows their whole bodies, the second is a close-up of a fist bump between Obama and the robotic hand. " class="rm-shortcode" data-rm-shortcode-id="df516748294d196a5ece83e680f3f325" data-rm-shortcode-name="rebelmouse-image" id="5acf9" loading="lazy" src="https://spectrum.ieee.org/media-library/two-photos-show-former-u-s-president-barack-obama-with-a-man-seated-in-a-wheelchair-that-has-a-robotic-arm-mounted-to-it-the-f.jpg?id=65504806&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">BCI user Nathan Copeland used a robotic arm to get a fist bump from then-President Barack Obama in 2016. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Jim Watson/AFP/Getty Images </small></p><p>The outside world often underestimates those little wins, says <a href="https://blackrockneurotech.com/insights/nathan-copeland-bci-pioneer/" target="_blank">Nathan Copeland</a>, who holds the record for the longest functional brain implant. After breaking his neck in a car accident in 2004, he joined a University of Pittsburgh BCI trial in 2015 and has since used the device to control both computers and a robotic arm.</p><p>After he uploaded a <a href="https://www.reddit.com/r/ffxiv/comments/dn1thj/i_thought_some_of_you_might_like_this_video_of_me/" target="_blank">video to Reddit</a> of himself playing <em><em>Final Fantasy XIV</em></em>, one commenter criticized him for not using his device for more practical tasks. Copeland says people don’t understand that those lighthearted activities also matter. “A lot of tasks that people think are mundane or frivolous are probably the tasks that have the most impact on someone that can’t do them,” he says. 
“Agency and freedom of expression, I think, are the things that impact a person’s life the most.”</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="49f2951c7484b0262253be4677639333" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/WjNHkRH0Dus?rel=0&start=90" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span><small class="image-media media-caption" placeholder="Add Photo Caption...">Nathan Copeland plays <i>Final Fantasy XIV</i> using his brain implant to control the game character.</small></p><h2>When Brain Implants Become Life-Changing </h2><p>This perspective resonates with Neuralink’s first user, <a href="https://newmobility.com/noland-arbaughs-life-as-the-first-neuralink-recipient/" target="_blank">Noland Arbaugh</a>—paralyzed from the neck down after a swimming accident in 2016. After receiving his implant in January 2024, he was able to control a cursor within minutes of the device being switched on. A few days later, the engineers let him play the video game <em>Civilization VI</em>, and the technology’s potential suddenly felt real. “I played it for 8 hours or 12 hours straight,” he says. “It made me feel so independent and so free.”</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A man seated in a wheelchair looks at the screen of a laptop that’s mounted on his wheelchair.  " class="rm-shortcode" data-rm-shortcode-id="30ce199de3390779d6767954025723e9" data-rm-shortcode-name="rebelmouse-image" id="a9d03" loading="lazy" src="https://spectrum.ieee.org/media-library/a-man-seated-in-a-wheelchair-looks-at-the-screen-of-a-laptop-that-u2019s-mounted-on-his-wheelchair.jpg?id=65504815&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Before receiving his Neuralink implant, Noland Arbaugh used mouth-operated devices to control a computer. He says the BCI is more reliable and enables him to do many more things on his own.  </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Rebecca Noble/The New York Times/Redux </small></p><p>But the technology is also providing more practical benefits. Before his implant, Arbaugh relied on a mouth-held typing stick and a mouth-controlled joystick called a quadstick, which uses sip-or-puff sensors to issue commands. The fiddliness of this equipment required constant caregiver support. The Neuralink implant has dramatically increased the number of things he can do independently. He says he finds great value in not needing his family “to come in and help me 100 times a day.”</p><p>For <a href="https://www.als.org/blog/advances-brain-computer-interface-technology-help-one-man-find-his-voice" target="_blank">Casey Harrell</a>, the technology has been even more transformative. Diagnosed with ALS in 2020, the climate activist had just welcomed a baby daughter and was in the midst of a major campaign, pressuring a financial firm to divest from companies that had poor environmental records.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Person in a wheelchair outdoors, surrounded by green foliage and soft sunlight."
class="rm-shortcode" data-rm-shortcode-id="041a1f40b02e5d01a72d117a237634d5" data-rm-shortcode-name="rebelmouse-image" id="45c80" loading="lazy" src="https://spectrum.ieee.org/media-library/person-in-a-wheelchair-outdoors-surrounded-by-green-foliage-and-soft-sunlight.jpg?id=65504832&width=980"/></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Bald head with wired brain-computer interface sensors attached in front of a monitor" class="rm-shortcode" data-rm-shortcode-id="1cb7a1d971cd5ac70f674874ee93d27e" data-rm-shortcode-name="rebelmouse-image" id="b377a" loading="lazy" src="https://spectrum.ieee.org/media-library/bald-head-with-wired-brain-computer-interface-sensors-attached-in-front-of-a-monitor.jpg?id=65504831&width=980"/></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Person using a brain-computer interface to control text on a monitor." class="rm-shortcode" data-rm-shortcode-id="4fbce330d12387f5661b1d6d9badcc55" data-rm-shortcode-name="rebelmouse-image" id="3940e" loading="lazy" src="https://spectrum.ieee.org/media-library/person-using-a-brain-computer-interface-to-control-text-on-a-monitor.jpg?id=65504835&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Casey Harrell was able to communicate again within 30 minutes of his BCI being switched on. The device translates his neural signals quickly enough for him to hold conversations. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Ian Bates/The New York Times/Redux </small></p><p>“Every morning we’d wake up and there’d be a new thing he couldn’t do, a new part of his body that didn’t work,” says his wife, Levana Saxon. Most alarming was his rapid loss of speech, which, among other things, left him unable to indicate when he was in pain. Then a relative alerted him to a <a href="https://health.ucdavis.edu/news/headlines/new-brain-computer-interface-allows-man-with-als-to-speak-again/2024/08" target="_blank">clinical trial</a> at the University of California, Davis, using BCIs to restore speech. He immediately signed up.</p><p>The device, implanted in July 2023, records from the brain region that controls muscles involved in talking and translates these signals into instructions for a voice synthesizer. Within 30 minutes of it being switched on, Harrell could communicate again. “I was absolutely overwhelmed with the thought of how this would impact my life and allow me to talk to my family and friends and better interact with my daughter,” he says. “It just was so overwhelming that I began to cry.”</p><p>While earlier assistive technology limited him to short, direct commands, Harrell says the BCI is fast enough that he can hold a proper conversation, and he’s been able to resume work part-time.</p><h2>What’s Holding BCI Technology Back? </h2><p>BCI technology still has limits. Most trial participants using Blackrock Neurotech implants can operate their devices only in the lab because the systems rely on wired connections and racks of computer hardware. Some users, including Copeland and Harrell, have had the equipment installed at home, but they still can’t leave the house with it. 
“That would be a big unlock if I was able to do so,” says Harrell.</p><p>The academic nature of many trials creates additional constraints. Pressure to publish and secure funding pushes researchers to demonstrate peak performance on narrow tasks rather than build more versatile and reliable systems, says <a href="https://utrecht-bci.nl/mariska-vansteensel/" target="_blank">Mariska Vansteensel</a>, who runs BCI studies at the University Medical Center Utrecht in the Netherlands. She says that investigating the technology’s limits or repeating an experiment in new patients is “less rewarded in terms of funding.”</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="c85200fe193b095c24a91c1a07bad088" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/1cqRFU0jx1k?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">In a clinical trial, Scott Imbrie uses a BCI to control a robotic arm, using signals from his motor cortex to make it move a block. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">University of Chicago</small></p><p>One of Imbrie’s biggest frustrations is the rapid turnover in experiments. Just as he begins to get proficient at one task, he’s asked to switch to the next task. Study designs also mean that much of the users’ time is spent on mundane tasks required to fine-tune the system.</p><p>Perhaps the biggest issue is that trials are often time-limited. That’s partly because scar tissue from the body’s immune response to the implant can gradually degrade signal quality. But constraints on funding and researcher availability can also make it impossible for users to keep using their BCIs after their trials end, even when the technology is still functional.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="236fdcf6ef676d6d58154c51ea2ccd07" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/60fAjaRfwnU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">Ian Burkhart’s BCI enables him to grasp objects, pour from a bottle, and swipe a credit card.</small></p><p>Burkhart has firsthand experience. His trial was extended, but the implant was eventually removed after he got an infection. He always knew the trial would end, but it was nonetheless challenging. “It was a little bit of a tease where I got to see the capability of the restoration of function,” he says. “Now I’m just back to where I was.”</p><h2>The Push to Commercialize BCIs </h2><p>Progress is being made in transitioning the technology from experimental research devices to fully-fledged medical products that could help users in their everyday lives. Most academic BCI research has relied on Blackrock Neurotech’s Utah Arrays, which typically feature 96 needlelike electrodes that penetrate the brain’s surface. The implant is connected to a skull-mounted pedestal that’s wired to external hardware. 
But some of the newer devices are sleeker and less invasive.</p><p>Neuralink’s implant houses its electronics and rechargeable battery in a coin-size unit connected to flexible electrode threads inserted into the brain by a <a href="https://www.youtube.com/watch?v=wLJKOUzFOEU" target="_blank">robotic “sewing machine.”</a> The implant, which is roughly the size of a quarter or a euro, is mounted in a hole cut into the skull and charges and transfers data wirelessly. <a href="https://spectrum.ieee.org/synchron-bci" target="_self">Synchron takes a different approach</a>, threading a stent-like implant through blood vessels into the motor cortex. This “<a href="https://synchron.com/platform" target="_blank">stentrode</a>” connects by wire to a unit in the chest that powers the implant and transmits data wirelessly.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Bearded person in red T\u2011shirt using a laptop at a kitchen table" class="rm-shortcode" data-rm-shortcode-id="357a50573a7a53f991fe357924b7fa76" data-rm-shortcode-name="rebelmouse-image" id="62405" loading="lazy" src="https://spectrum.ieee.org/media-library/bearded-person-in-red-t-u2011shirt-using-a-laptop-at-a-kitchen-table.jpg?id=65504912&width=980"/></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Man using a large on-screen keyboard to type messages on a tablet computer" class="rm-shortcode" data-rm-shortcode-id="7dd8055dbf7028c4cd03aedb0b1a55c7" data-rm-shortcode-name="rebelmouse-image" id="c7942" loading="lazy" src="https://spectrum.ieee.org/media-library/man-using-a-large-on-screen-keyboard-to-type-messages-on-a-tablet-computer.jpg?id=65504911&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Rodney Gorham can use his Synchron implant to control not just a computer, but also smart devices in his home like an air conditioner, fan, and smart speaker. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Rodney Decker </small></p><p>Neuralink’s decoder runs on a laptop, while Synchron deploys a smartphone-size signal processing unit as a wireless bridge to the user’s devices, which allows them to use their implants at home and on the move. The companies have also developed adaptive decoders that use machine learning to adjust to neural drift on the fly, reducing the need for recalibration.</p><p>Making these devices truly user-friendly will require technology that can interpret user context, says <a href="https://www.linkedin.com/in/kurt-haggstrom/" target="_blank">Kurt Haggstrom</a>, Synchron’s chief commercial officer—including mood, attention levels, and environmental factors like background noise and location. This approach will require AI that analyzes neural signals alongside other data streams such as audio and visual input.</p><p>Last year, Synchron took a first step by pairing its implant with an <a href="https://spectrum.ieee.org/apple-vision-pro" target="_self">Apple Vision Pro headset</a>. 
When trial participant <a href="https://www.rdworldonline.com/watch-rodney-a-paralyzed-man-control-his-home-with-tech-from-synchron-nvidia-and-apple/" target="_blank">Rodney Gorham</a> looked at devices such as a fan, a smart speaker, and an air conditioner, the headset overlaid a menu that enabled him to adjust the device’s settings using his implant.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="4d29290cc0251118e8c7c0ed46886e43" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/c-_OVgQ5q7k?rel=0&start=72" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">Rodney Gorham uses his Synchron implant to turn on music, feed his dog, and more. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Synchron BCI</small></p><p>Another way to reduce cognitive load is to detect high-order signals of intent in neural data rather than low-level motor commands, says <a href="https://www.linkedin.com/in/florian-solzbacher-aa971015/" target="_blank">Florian Solzbacher</a>, cofounder and chief scientific officer of Blackrock Neurotech. For instance, rather than manually navigating to an email app and typing, the user could simply think about sending an email and the system would then open it with content already prepopulated, he says.</p><p>Durability may prove a thornier problem to solve, UChicago’s Downey says. Current implants last around a decade—well short of a lifelong solution. And with limited real estate in the brain, replacement is only possible once or twice, he says.</p><p>Rapid technological progress also raises difficult decisions about whether to get a BCI implant now or wait for a more advanced device. This was a major concern for Gorham’s wife, Caroline. “I was hesitant. I didn’t want him to go on the trial but maybe a future one,” she says. “It was my fear of missing out on future upgrades.”</p><h2>Will Brain Implants Ever Become Consumer Tech? </h2><p>Some executives have raised the prospect of BCIs eventually becoming consumer devices. Neuralink founder <a href="https://spectrum.ieee.org/tag/elon-musk" target="_self">Elon Musk</a> has been particularly vocal, suggesting that the company’s implants could <a href="https://x.com/elonmusk/status/1802517673584341082?" target="_blank">replace smartphones</a>, let people <a href="https://www.theregister.com/2022/01/31/neuralink_job_ad/" target="_blank">save and replay memories</a>, or even achieve <a href="https://www.businessinsider.com/neuralink" target="_blank">“symbiosis” with AI</a>.</p><p>This kind of talk inspires mixed feelings in users. The hype brings visibility and funding, says Beggin, but could divert attention from medical users’ needs. Copeland worries that consumer branding could strip the devices of insurance coverage and that rising demand may make it harder to access qualified surgeons.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="A man, seen in profile, sits in a wheelchair. 
" class="rm-shortcode" data-rm-shortcode-id="e5928c73c49d9ab511ccc4c1187c5148" data-rm-shortcode-name="rebelmouse-image" id="437c0" loading="lazy" src="https://spectrum.ieee.org/media-library/a-man-seen-in-profile-sits-in-a-wheelchair.jpg?id=65504925&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Noland Arbaugh, the first recipient of Neuralink’s BCI, says that using the implant to control a computer made him feel independent and free. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Steve Craft/Guardian/eyevine/Redux </small></p><p>There are also concerns about how data collected by BCI companies will be handled if the devices go mainstream. As a trial participant, Arbaugh says he’s comfortable signing away his data rights to advance the technology, but he thinks stronger legal protections will be needed in the future. “Does that data still belong to Neuralink? Does it belong to each person? And can that data be sold?” he asks.</p><p>Blackrock’s Solzbacher says the company remains focused on the medical applications of the technology. But he also believes it is building a “universal interface to any kind of a computerized system” that may have broader applications in the future. And he says the company owes it to users not to limit them to a bare-bones assistive technology. “Why would somebody who’s got a medical condition want to get less than something that somebody who’s able-bodied would possibly also take?” says Solzbacher.</p><p>The ever-optimistic Imbrie heartily agrees. Medical devices are invariably expensive, he says, but targeting consumer applications could push companies to keep devices simple and affordable while continuing to add features. “I truly believe that making it a consumer-available product will just enhance the product’s capabilities for the medical field,” he says.</p><p>Imbrie is on a mission to refocus the conversation around BCIs on the positives. While concerns about risks are valid, he worries that the alarming language often used to describe brain implants discourages people from volunteering for trials that could help them.</p><p>“I remember laying there in the bed and not being able to move,” he says, “and it was really dehumanizing having to ask someone to do everything for you. As humans, we want to be independent.” <span class="ieee-end-mark"></span></p><p><em>This article appears in the May 2026 print issue as “<span>Life With an </span>Experimental Brain Implant.”</em></p>]]></description><pubDate>Tue, 14 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/bci-user-experience</guid><category>Bci</category><category>Clinical-trials</category><category>Brain-computer-interfaces</category><category>User-experience</category><category>Brain-implants</category><category>Assistive-technology</category><dc:creator>Edd Gent</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-close-up-shows-a-man-seated-in-a-wheelchair-attached-to-the-top-of-his-head-are-two-devices-each-with-a-cable-extending-away.jpg?id=65504719&amp;width=980"></media:content></item><item><title>Squishy Photonic Switches Promise Fast Low-Power Logic</title><link>https://spectrum.ieee.org/soft-photonics</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/illustration-of-a-micropipette-piercing-through-a-hemisphere-shaped-membrane-to-inject-a-droplet-at-its-core.jpg?id=65506297&width=1245&height=700&coordinates=0%2C187%2C0%2C188"/><br/><br/><p><span>Photonic devices, which rely on light instead of electricity, have the potential to be faster and more energy efficient than today’s electronics. They also present a unique opportunity to develop devices using <a href="https://spectrum.ieee.org/soft-robot-actuators-bugs" target="_self">soft materials</a>, such as polymers and gels, which are poor conductors of electricity but are easier to manufacture and more environmentally friendly. The development of these potentially squishy, <a href="https://spectrum.ieee.org/wearable-sensors" target="_self">flexible photonics</a>, however, requires the ability to manipulate light using only light, not electricity.</span></p><p>In soft matter, that’s been done primarily by changing the physical properties of optical materials or by using intense light pulses to change the direction of light. Now, an international team of scientists has developed a new way of controlling light with light using very low light intensities and without changing any of the physical properties of materials. </p><p><a href="https://musevic.fmf.uni-lj.si/" target="_blank"><span>Igor Muševič</span></a>, a professor of physics at the University of Ljubljana who led the project, says that he first got the idea for the device while at a conference in San Francisco, listening to a talk by <a href="https://www.nobelprize.org/prizes/chemistry/2014/hell/facts/" target="_blank">Stefan W. Hell </a>about stimulated emission depletion (STED) microscopy. The imaging technique, for which Hell won a <a href="https://www.nobelprize.org/prizes/chemistry/2014/summary/" target="_blank">Nobel Prize in Chemistry in 2014</a>, uses two lasers to produce an extremely small light beam to scan objects. “When I saw this, I said, This is manipulation light by light, right?” Muševič recalls.</p><p><span>His realization inspired a device into which a laser pulse is fired. Whether or not this beam makes it out of the device depends on whether or not a second pulse is fired less than a nanosecond afterwards.</span></p><h2>A liquid crystal photonic switch</h2><p><span>The device consists of a spherically shaped bead of liquid crystal, held in shape by its elastic material properties and the forces between its molecules, infused with a fluorescent dye and trapped between four upright cone-shaped polymer structures that guide light in and out of the device. When a laser pulse is sent through one of the four polymer waveguides, the light is quickly transferred into the liquid crystal, exciting the fluorescent dye. In a process known as whispering gallery mode resonance, the photons inside the liquid crystal are reflected back inside each time they hit the liquid’s spherical surface. The result is that light circulates inside the cavity until it is eventually reflected into one of the waveguides, which then emits the photons out in a laser beam. </span></p><p>The team realized that sending a second laser pulse of a different color into the waveguides before the liquid crystal started emitting light from the first laser pulse resulted in stimulated emission of the excited dye molecules. The photons from the second laser pulse, which had to be fired into the waveguides after the first laser pulse, interact with the already-excited dye molecules. 
The interaction causes the dye to emit photons identical to those in the second pulse while depleting the energy from the first pulse. The second laser beam, called the STED beam, is amplified by the process, while the light from the first pulse is so diminished that it isn’t emitted at all. Because the outcome of the first laser pulse could be controlled using the second laser pulse, the team had successfully demonstrated the control of light by light.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="0cb7a5df3d8c2896d2f429edfd746f29" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/mImgOT2zJ0I?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Vandna Sharma, Jaka Zaplotnik, et al.</small> </p><p><span>According to the Ljubljana team, the energy efficiency of the liquid crystal approach is much better than previous soft-matter techniques, which had typically involved using intense light fields to change material properties of the soft matter, such as the index of refraction. The new method reduces the energy needed by more than a factor of a hundred. Because the STED laser pulse circulates repeatedly in the crystal, a single photon can deplete many dye molecules of the energy from the first laser pulse.</span> </p><p><a href="https://ravnik.fmf.uni-lj.si/" target="_blank">Miha Ravnik</a>, a theoretical physicist also at the University of Ljubljana who worked on the project, explains that control of light by light is essential in soft-matter photonic logic gates. “You can very much control when [light] is generated and in which direction,” Ravnik says of the light shined into the polymer waveguides. “And this gives you, then, this capability that you create logical operations with light.”</p><p>Aside from its potential in photonic logical circuits, the team’s approach presents several technical advantages over photonics made from silicon or other hard materials, Muševič says. For example, using soft matter greatly simplifies the manufacturing process. The liquid crystal in the team’s device can be inserted in less than a second, but manufacturing a similar structure with hard materials is difficult. Additionally, soft-matter devices can be manufactured at much lower temperatures than silicon and other hard materials. Muševič also points out that soft matter presents an opportunity to experiment with the geometry of the device. With liquid crystals “you can make many different kinds of cavities,” says Muševič. “You have, I would say, a lot of engineering space.”</p><p>Ravnik is excited for the potential of the team’s breakthrough, particularly as a step toward <a href="https://spectrum.ieee.org/generative-optical-ai-nature-ucla" target="_self">photonic computing</a> and even photonic neural networks. But, he recognizes that these developments are far down the line. “There’s no way this technology can compete with current neural network implementation at all,” he admits. Still, the possibilities are tantalizing. 
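“The energy losses are predicted to be extremely low, the speeds for calculation extremely high.”</p>

<p>One way to see the logic-gate potential Ravnik describes: the first pulse exits the cavity only when no STED pulse follows within the sub-nanosecond window, while the STED pulse always exits (amplified, if the dye was excited). Reduced to Boolean signals, a deliberate simplification of the physics rather than the team’s model, the switch behaves as an optical inhibit gate:</p>

```python
def photonic_switch(signal_pulse: bool, sted_pulse: bool) -> tuple[bool, bool]:
    """Idealized switch behavior: (signal output, STED output).

    The signal (first) pulse lases out only if no STED (second) pulse
    arrives in time to deplete the excited dye.
    """
    signal_out = signal_pulse and not sted_pulse  # inhibit gate: A AND NOT B
    return signal_out, sted_pulse

# Truth table: the STED input switches the signal output on and off.
for a in (False, True):
    for b in (False, True):
        print(f"signal={a}, sted={b} -> outputs={photonic_switch(a, b)}")
```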
]]></description><pubDate>Mon, 13 Apr 2026 12:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/soft-photonics</guid><category>Flexible-circuits</category><category>Photonics</category><category>Optical-switch</category><dc:creator>Velvet Wu</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/illustration-of-a-micropipette-piercing-through-a-hemisphere-shaped-membrane-to-inject-a-droplet-at-its-core.jpg?id=65506297&amp;width=980"></media:content></item><item><title>Working With More Experienced Engineers Can Fast-Track Career Growth</title><link>https://spectrum.ieee.org/using-feedback-engineering</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-illustration-of-stylized-people-wearing-business-casual-clothing.webp?id=65257424&width=1245&height=700&coordinates=0%2C112%2C0%2C113"/><br/><br/><p><em>This article is crossposted from </em>IEEE Spectrum<em>’s careers newsletter. <a href="https://engage.ieee.org/Career-Alert-Sign-Up.html" rel="noopener noreferrer" target="_blank"><em>Sign up now</em></a><em> to get insider tips, expert advice, and practical strategies, <em><em>written i<em>n partnership with tech career development company <a href="https://www.parsity.io/" rel="noopener noreferrer" target="_blank">Parsity</a> and </em></em></em>delivered to your inbox for free!</em></em></p><h2>The Worst Engineer in the Room</h2><p>My salary doubled. My confidence tanked. </p><p>That’s what happened when I had just joined a five-person startup in San Francisco in my third year as a software engineer. Two of the founders had been recognized in Forbes 30 Under 30. The team was exceptional by any measure.</p><p>On my first day, someone made a joke about Dijkstra’s algorithm. Everyone laughed. I smiled along, then looked it up afterward so I could understand why it was funny. Dijkstra’s algorithm finds the shortest path between 2 points—the math underlying GPS navigation. It’s a foundational concept in virtually every formal computer science curriculum. I had never encountered it.</p><p>That moment reflected a broader pattern. Conversations about system design and tradeoffs often felt just out of reach. I could follow parts of them, but not enough to contribute meaningfully.</p><p>I was mostly self-taught. Wide coverage, shallow roots. The engineers around me had roots. You could feel it in how they reasoned through problems, how they talked about tradeoffs, how they debugged with patience instead of pure panic.</p><h2>The Advice That Sounds Good Until You’re Living It</h2><p>You’ve heard the phrase: “If you’re the smartest person in the room, you’re in the wrong room.”</p><p>It sounds aspirational. What nobody tells you is what it actually feels like to be in that room. It feels like barely following system design conversations. Like nodding along to discussions you can only partially decode. Like shipping solutions through trial and error and hoping nobody looks too closely.</p><p>Being the weakest engineer in the room is genuinely uncomfortable. It surfaces every gap. And if you’re not careful, it pushes you in exactly the wrong direction.</p><p>My instinct was to make myself smaller. On a team of five, every voice mattered. I stopped offering mine. I rushed toward working solutions without real understanding, hoping velocity would compensate for depth.</p><p>I was working harder and, at the same time, I was not improving.</p><p>The turning point came when one of the most senior engineers left. Before departing, he told me it was difficult to work with me because I lacked foundational programming knowledge, listing out the concepts he saw me struggle with.</p><p>For the first time, what had felt like vague inadequacy became something specific.</p><h2>What the Cliché Misses</h2><p>Proximity to stronger engineers is not sufficient on its own. You won’t absorb their skill through osmosis. The engineers who thrive when they’re outmatched are not the ones who wait for confidence to arrive. They treat the discomfort as diagnostic information.</p><p>What can they answer that I can’t? 
<p>I defined a clear picture of the engineer I wanted to become and compared it to where I was. I wrote down what I did not know. I identified how I would close each gap with books, tutorials, and small projects. I asked for recommendations from the same engineer who gave me the hard feedback.</p><p>I figured out the gaps. Then the bridges. Then I worked through each of them.</p><p>Over time, conversations became clearer. Debugging became more systematic. I started contributing meaningfully rather than just executing tasks.</p><h2>The Other Room Nobody Warns You About</h2><p>There’s a less-obvious version of this same problem: when you’re the strongest engineer in the room. </p><p>It can feel rewarding. Less friction, more validation. But there’s also less growth. When you’re at the ceiling, there’s no external pressure to raise your own floor. The feedback loops that sharpen judgment go quiet. Some engineers spend years there without noticing. They’re good. They’re comfortable. They stop getting better.</p><p>Both rooms carry risk. One threatens your confidence. The other threatens your trajectory.</p><p>Being the weakest engineer in a strong room is an advantage, but only if you treat it like one. It gives you a clear benchmark. But the room doesn’t do the work for you. You have to name the gaps, build a plan, and follow through.</p><p>And if you ever find yourself in the other room, where you’re clearly the strongest, pay attention to how long you’ve been there.</p><p>Both rooms are trying to tell you something.</p><p>—Brian</p><h2><a href="https://spectrum.ieee.org/us-engineering-phd-enrollment-drop" target="_self">Are U.S. Engineering Ph.D. Programs Losing Students?</a></h2><p>Not every engineer has a doctorate, but Ph.D. engineers are an essential part of the workforce, researching and designing tomorrow’s high-tech products and systems. In the United States, early signs are emerging that Ph.D. programs in electrical engineering and related fields may be shrinking. Political and economic uncertainty means some universities are now seeing smaller applicant pools and graduate cohorts. </p><p><a href="https://spectrum.ieee.org/us-engineering-phd-enrollment-drop" target="_blank">Read more here. </a></p><h2><a href="https://spectrum.ieee.org/ai-community-engagement" target="_self">What Happens When You Host an AI Cafe</a></h2><p>Last November, three professors at Auburn University in Alabama hosted a gathering at a coffee shop to confront students’ concerns about AI. The event, which they call an “AI Café,” was meant to create an environment “where scholars engage their communities in genuine dialogue about AI. Not to lecture about technical capabilities, but to listen, learn, and co-create a vision for AI that serves the public interest.” In a guest article, they share what they learned at the event and tips for starting your own AI Café. </p><p><a href="https://spectrum.ieee.org/ai-community-engagement" target="_blank">Read more here. </a></p><h2><a href="https://newsletter.pragmaticengineer.com/p/what-is-inference-engineering" rel="noopener noreferrer" target="_blank">What Is Inference Engineering?</a></h2>Inference, the process of running a trained AI model on new data, is increasingly <a href="https://spectrum.ieee.org/nvidia-groq-3" target="_self">becoming a focus</a> in the world of AI engineering. The growth of open LLMs means that more engineers can now tweak the models to perform better at inference. 
Given this trend, a recent issue of the Substack “The Pragmatic Engineer” does a deep dive on inference engineering—what it is, when it’s needed, and how to do it. <p><a href="https://newsletter.pragmaticengineer.com/p/what-is-inference-engineering" target="_blank">Read more here. </a></p>]]></description><pubDate>Fri, 10 Apr 2026 18:49:00 +0000</pubDate><guid>https://spectrum.ieee.org/using-feedback-engineering</guid><category>Careers</category><category>Careers-newsletter</category><dc:creator>Brian Jenney</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-illustration-of-stylized-people-wearing-business-casual-clothing.webp?id=65257424&amp;width=980"></media:content></item><item><title>Remembering Gus Gaynor: A Devoted IEEE Volunteer</title><link>https://spectrum.ieee.org/remembering-gus-gaynor</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/black-and-white-photograph-of-a-white-high-school-boy-lowering-a-radio-systems-needle-onto-a-vinyl-record.jpg?id=65492955&width=1245&height=700&coordinates=0%2C187%2C0%2C188"/><br/><br/><p><a href="https://life.ieee.org/an-amazing-career-gerard-gus-gaynor/" rel="noopener noreferrer" target="_blank">Gerard “Gus” Gaynor</a>, a long-serving IEEE volunteer and former engineering director at <a href="https://www.3m.com/" rel="noopener noreferrer" target="_blank">3M</a>, died on 9 March. The IEEE Life Fellow was 104.</p><p>Readers of <a href="https://spectrum.ieee.org/the-institute/" target="_blank"><em><em>The Institute</em></em></a> might remember Gus from his 2022 profile: “<a href="https://spectrum.ieee.org/gus-gaynor-profile" target="_self">From Fixing Farm Equipment to Becoming a Director at 3M</a>.” Just last year, he and I coauthored two<a href="https://spectrum.ieee.org/influence-your-career" target="_blank">articles. One </a>discusses <a href="https://spectrum.ieee.org/influence-your-career" target="_blank">how to leverage relationships to boost your career growth</a>. The other weighs the <a href="https://spectrum.ieee.org/management-versus-technical-track" target="_blank">pros and cons of pursuing a technical or managerial career path</a>. He was 103 years old then. How many IEEE members can claim a centenarian coauthor?</p><p>I first met Gus in 2009 at the <a href="https://technical-community-spotlight.ieee.org/what-is-the-ieee-technical-activities-board-tab/" rel="noopener noreferrer" target="_blank">IEEE Technical Activities Board</a> (TAB) meeting in San Juan, Puerto Rico. We sat together in the airplane on our way back to Minneapolis, our hometown. At home I told many of my friends about the remarkable person—who was 87 years young at the time—with whom I chatted during our six-hour flight.</p><p>A decade later, he and I met for lunch in Minneapolis. He drove himself to the restaurant, just asking for a hand to navigate the snowy sidewalk.</p><h2>A dedicated IEEE volunteer</h2><p>Gus’s involvement with IEEE predates the organization. He joined the <a href="https://ethw.org/IRE_History_1912-1963#History_of_the_Institute_of_Radio_Engineers_1912-1963" rel="noopener noreferrer" target="_blank">Institute of Radio Engineers</a>, a predecessor society, as a student member in 1942. Twenty years later he became an active IEEE volunteer.</p><p>He served on the TAB’s finance committee and the <a href="https://pspb.ieee.org/" rel="noopener noreferrer" target="_blank">Publications Services and Products Board</a>. He was president of the IEEE Engineering Management Society (now the <a href="https://www.ieee-tems.org/" rel="noopener noreferrer" target="_blank">Technology and Engineering Management Society</a> ), and he was the <a href="https://www.ieee-tems.org/publications-of-the-technology-management-council/" rel="noopener noreferrer" target="_blank">Technology Management Council</a>’s first president. He was the founding editor of <a href="https://ieeeusa.org/" rel="noopener noreferrer" target="_blank">IEEE-USA</a>’s online magazine <a href="https://ieeeusa.org/product/the-best-of-todays-engineer-on-innovation/" rel="noopener noreferrer" target="_blank"><em><em>Today’s Engineer</em></em></a>, which reported on government legislation and issues affecting U.S. members’ careers. 
The magazine is now available as the e-newsletter <a href="https://insight.ieeeusa.org/about/" rel="noopener noreferrer" target="_blank"><em>IEEE-USA InSight</em></a>.</p><p>He authored several books on technology management and other topics, published by IEEE-USA and IEEE-Wiley.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="An elderly white man smiling in a dress shirt against a background of bookshelves." class="rm-shortcode" data-rm-shortcode-id="438ad571c4c9c78266d24b251480a736" data-rm-shortcode-name="rebelmouse-image" id="d6fab" loading="lazy" src="https://spectrum.ieee.org/media-library/an-elderly-white-man-smiling-in-a-dress-shirt-against-a-background-of-bookshelves.jpg?id=65492995&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">IEEE Life Fellow Gerard “Gus” Gaynor died on 9 March.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">The Gaynor Family</small></p><p>Most recently, after the formation of TEMS in 2015, he became an active member of its executive committee. He served two terms as vice president of publications.</p><p>At 100 years old, he led the launch of <a href="https://www.ieee-tems.org/ieee-tems-leadership-briefs/" rel="noopener noreferrer" target="_blank"><em>TEMS Leadership Briefs</em></a>, a novel short-format open-access publication aimed at technology leaders.</p><p>Gus, a former member of <em>The Institute</em>’s editorial advisory board, also worked with <a href="https://spectrum.ieee.org/u/kathy-pretz" target="_self">Kathy Pretz</a>, <em>The Institute’s</em> editor in chief, to start an ongoing series of TEMS-sponsored career-interest articles. He coauthored several of them.</p><p>Throughout his 64 years as an IEEE volunteer, he received several honors. They include IEEE EMS’s Engineering Manager of the Year Award, the IEEE TEMS Career Achievement Award, and the IEEE-USA <a href="https://ieeeusa.org/volunteers/awards-recognition/professionalism/mcclure/" target="_blank">McClure Citation of Honor</a>. In 2014 he was inducted into the <a href="https://www.ieee.org/about/tab-hall-of-honor" rel="noopener noreferrer" target="_blank">IEEE Technical Activities Board Hall of Honor</a>.</p>
He retired as director of engineering in 1987.</p><p>Last year, IEEE Life Fellow <a href="https://www.linkedin.com/in/michael-condry-79931a" rel="noopener noreferrer" target="_blank">Michael Condry</a>, a former TEMS president, organized a Zoom call with Gus and other leaders of the society to celebrate Gus’s 104th birthday. Gus looked well and was his usual upbeat self, telling everyone: “I’m good. Everything’s well. I can’t complain.”</p><p>Gus was married to <a href="https://www.washburn-mcreavy.com/m/obituaries/Shirley-Gaynor/" rel="noopener noreferrer" target="_blank">Shirley Margaret Karrels Gaynor</a>, who passed away in 2018. He lives on in the hearts and minds of his seven children, seven grandchildren, two great-grandchildren, and innumerable friends and IEEE colleagues.</p>]]></description><pubDate>Thu, 09 Apr 2026 18:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/remembering-gus-gaynor</guid><category>Ieee-member-news</category><category>In-memoriam</category><category>Tribute</category><category>Ieee-technology-and-engineering</category><category>Careers</category><category>Type-ti</category><dc:creator>Tariq Samad</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/black-and-white-photograph-of-a-white-high-school-boy-lowering-a-radio-systems-needle-onto-a-vinyl-record.jpg?id=65492955&amp;width=980"></media:content></item><item><title>GoZTASP: A Zero-Trust Platform for Governing Autonomous Systems at Mission Scale</title><link>https://content.knowledgehub.wiley.com/goztasp-a-zero-trust-platform-for-governing-autonomous-systems-at-mission-scale/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/technology-innovation-institute-logo-with-stylized-tii-and-curved-line.png?id=65498963&width=980"/><br/><br/><p>ZTASP is a mission-scale assurance and governance platform designed for autonomous systems operating in real-world environments. It integrates heterogeneous systems—including drones, robots, sensors, and human operators—into a unified zero-trust architecture. Through Secure Runtime Assurance (SRTA) and Secure Spatio-Temporal Reasoning (SSTR), ZTASP continuously verifies system integrity, enforces safety constraints, and enables resilient operation even under degraded conditions.</p><p>ZTASP has progressed beyond conceptual design, with operational validation at Technology Readiness Level (TRL) 7 in mission critical environments. Core components, including Saluki secure flight controllers, have reached TRL8 and are deployed in customer systems. While initially developed for high-consequence mission environments, the same assurance challenges are increasingly present across domains such as healthcare, transportation, and critical infrastructure.</p><p><span><a href="https://content.knowledgehub.wiley.com/goztasp-a-zero-trust-platform-for-governing-autonomous-systems-at-mission-scale/" target="_blank">Download this free whitepaper now!</a></span></p>]]></description><pubDate>Thu, 09 Apr 2026 15:06:39 +0000</pubDate><guid>https://content.knowledgehub.wiley.com/goztasp-a-zero-trust-platform-for-governing-autonomous-systems-at-mission-scale/</guid><category>Autonomous-systems</category><category>Drones</category><category>Sensors</category><category>Transportation</category><category>Type-whitepaper</category><dc:creator>Technology Innovation Institute</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/65498963/origin.png"></media:content></item><item><title>Chip Can Project Video the Size of a Grain of Sand</title><link>https://spectrum.ieee.org/mems-photonics</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-array-of-tiny-metallic-cantilevers-curving-away-from-the-surface-of-a-photonic-chip.jpg?id=65493217&width=1245&height=700&coordinates=0%2C156%2C0%2C157"/><br/><br/><p><span>By many estimates, quantum computers will need <a href="https://spectrum.ieee.org/neutral-atom-quantum-computing" target="_blank">millions of qubits </a>to realize their potential applications in cybersecurity, drug development, and other industries. The problem is, anyone who has wanted to simultaneously control millions of a certain kind of qubit has run into the problem of trying to control millions of laser beams. </span> </p><p><span>That’s exactly the challenge that was faced by scientists working on the <a href="https://www.mitre.org/resources/quantum-moonshot" target="_blank">MITRE Quantum Moonshot project</a>, which brought together scientists from MITRE, MIT, the University of Colorado at Boulder, and Sandia National Laboratories. The solution they developed came in the form of an image projection technology that they realized could also be the fix for a host of other challenges in augmented reality, biomedical imaging, and elsewhere. The device is a 1-square-millimeter photonic chip capable of projecting the Mona Lisa onto an area smaller than the size of two human egg <a href="https://spectrum.ieee.org/embryo-electrode-array" target="_blank">cells</a>. </span> </p><p><span>“When we started, we certainly never would have anticipated that we would be making a technology that might revolutionize imaging,” says Matt Eichenfield, one of the leaders of the Quantum Moonshot project, a collaborative research effort focused on developing a scalable, diamond-based quantum computer, and a professor of quantum engineering at the University of Colorado at Boulder. Each second, their chip is capable of projecting 68.6 million individual spots of light—called scannable pixels—to differentiate them from physical pixels. That’s more than 50 times the capability of previous technology, such as <a href="https://spectrum.ieee.org/mems-lidar" target="_blank">micro-electromechanical systems (MEMS) micromirror arrays</a>.</span></p><p> <span>“We have now made a scannable pixel that is at the absolute limit of what diffraction allows,” says <a href="https://www.linkedin.com/in/y-henry-wen-2b41979/" target="_blank">Henry Wen</a>, a visiting researcher at MIT and a photonics engineer at <a href="https://www.quera.com/" target="_blank">QuEra Computing</a>.</span></p><p>The chip’s distinguishing feature is an array of tiny microscale cantilevers, which curve away from the plane of the chip in response to voltage and act as miniature “ski jumps” for light. Light is channeled along the length of each cantilever via a waveguide and exits at its tip. The cantilevers contain a thin layer of aluminum nitride, a piezoelectric that expands or contracts under voltage, thus moving the micromachine up and down and enabling the array to scan beams of light over a two-dimensional area.</p><p>Despite the magnitude of the team’s achievement, Eichenfield says that the process of engineering the cantilevers was “pretty smooth.” Each cantilever is composed of a stack of several submicrometer layers of material and curls approximately 90 degrees out of the plane at rest. To achieve such a high curvature, the team took advantage of differences in the contraction and expansion of individual layers caused by physical stresses in the material resulting from the fabrication process. 
The materials are first deposited flat onto the chip. Then, a layer in the chip below the cantilever is removed, allowing the material stresses to take effect, releasing the cantilever from the chip and allowing it to curl out. The top layer of each cantilever also features a series of silicon dioxide bars running perpendicular to the waveguide, which keep the cantilever from curling along its width while also improving its lengthwise curvature.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="5525c992b93704c6dfdada2cd2c1d9c2" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/A4-ZqQTZauw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">A micro-cantilever wiggles and waggles to project light in the right place.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Matt Saha, Y. Henry Wen, et al.</small></p><p>What was more of a challenge than engineering the chip itself was figuring out the details of actually making the chip project images and videos. Working out the process of synchronizing and timing the cantilevers’ motion and light beams to generate the right colors at the right time was a substantial effort, according to <a href="https://www.linkedin.com/in/agreenspon/" target="_blank">Andy Greenspon</a>, a researcher at MITRE who also worked on the project. Now, the team has successfully projected a variety of videos from a single cantilever, including clips from the movie <em><em><a href="https://www.youtube.com/watch?v=GPG3zSgm_Qo&list=PLnvfBuirq7alZgA0yGBnNObE5CeJTpUW4" target="_blank">A Charlie Brown Christmas</a></em></em>. </p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="A warped projection of the Mona Lisa." class="rm-shortcode" data-rm-shortcode-id="a4e5294e1a010872e545dbc18fb0e208" data-rm-shortcode-name="rebelmouse-image" id="a1039" loading="lazy" src="https://spectrum.ieee.org/media-library/a-warped-projection-of-the-mona-lisa.jpg?id=65493253&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The chip projected a roughly 125-micrometer image of the Mona Lisa.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..."><a href="https://www.nature.com/articles/s41586-025-10038-6" target="_blank">Matt Saha, Y. Henry Wen, et al.</a></small></p><p>Because the chip can project so many more spots in any given time interval than any previous beam scanners, it could also be used to control many more qubits in quantum computers. The Quantum Moonshot program’s mission is to build a quantum computer that can be scaled to millions of qubits. So clearly, it needs a scalable way of controlling each one, explains Wen. Instead of using one laser per qubit, the team realized that not every qubit needed to be controlled at every given moment. The chip’s ability to move light beams over a two-dimensional area would allow them to control all of the qubits with many fewer lasers. </p><p>Another process that Wen thinks the chip could improve is scanning objects for <a href="https://spectrum.ieee.org/3d-printed-linear-motor" target="_blank">3D printing</a>. 
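Today, that typically involves using a single laser to scan over the entire surface of an object. The new chip, however, could potentially employ thousands of laser beams. “I think now you can take a process that would have taken hours and maybe bring it down to minutes,” says Wen. </p>

<p>The claim is simple arithmetic: scan time falls roughly in proportion to the number of beams working in parallel. A back-of-envelope sketch, with entirely hypothetical point counts and per-beam rates:</p>

```python
# All numbers are illustrative assumptions, not measurements.
points_to_scan = 1e9   # hypothetical surface points on the object
per_beam_rate = 1e5    # hypothetical points per second for one laser

hours_single = points_to_scan / per_beam_rate / 3600
print(f"   1 beam:  {hours_single:.1f} hours")  # ~2.8 hours

for beams in (100, 1000):
    minutes = points_to_scan / (per_beam_rate * beams) / 60
    print(f"{beams:>4} beams: {minutes:.1f} minutes")
# 100 beams cut the job to under 2 minutes; 1,000 beams to seconds.
```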
<p>Another process that Wen thinks the chip could improve is scanning objects for <a href="https://spectrum.ieee.org/3d-printed-linear-motor" target="_blank">3D printing</a>. Today, that typically involves using a single laser to scan over the entire surface of an object. The new chip, however, could potentially employ thousands of laser beams. “I think now you can take a process that would have taken hours and maybe bring it down to minutes,” says Wen. </p><p>Wen is also excited to explore the potential of different cantilever shapes. By changing the orientations of the bars perpendicular to the waveguide, the team has been able to make the cantilevers curl into helixes. Wen says that such unusual shapes could be useful in making a <a href="https://spectrum.ieee.org/neurobot-living-robot-nervous-system" target="_blank">lab-on-a-chip for cell biology</a> or <a href="https://spectrum.ieee.org/lab-on-a-chip-grippers" target="_blank">drug development</a>. “A lot of this stuff is imaging, scanning a laser across something, either to image it or to stimulate some response. And so we could have one of these ski jumps curl not just up, but actually curl back around, and then move around and scan over a sample,” Wen explains. “If you can imagine a structure that will be useful for you, we should try it.”</p>]]></description><pubDate>Thu, 09 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/mems-photonics</guid><category>Microarray</category><category>Digital-micromirror-device</category><category>Mems</category><category>Quantum-computers</category><category>Nitrogen-vacancy-defects-diamond</category><dc:creator>Velvet Wu</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-array-of-tiny-metallic-cantilevers-curving-away-from-the-surface-of-a-photonic-chip.jpg?id=65493217&amp;width=980"></media:content></item><item><title>Temple University Student Highlights IEEE Membership Perks</title><link>https://spectrum.ieee.org/temple-university-student-membership-perks</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-young-white-man-smiling-and-crossing-his-arms-in-a-workshop.jpg?id=65485944&width=1245&height=700&coordinates=0%2C62%2C0%2C63"/><br/><br/><p><a href="https://www.linkedin.com/in/kyle-mcginley112/" rel="noopener noreferrer" target="_blank">Kyle McGinley</a> graduated from high school in 2018 and, like many teenagers, he was unsure what career he wanted to pursue. Recuperating from a sports injury led him to consider becoming a physical therapist for athletes. But he was skilled at repairing cars and fixing things around the house, so he thought about becoming an engineer, like his father.</p><p>McGinley, who lives in Sellersville, Pa., took some classes at <a href="https://www.mc3.edu/" rel="noopener noreferrer" target="_blank">Montgomery County Community College</a> in Blue Bell, while also working. During his years at the college, he took a variety of courses and was drawn to electrical engineering and computing, he says. He left to pursue a bachelor’s degree in electrical and computer engineering in Philadelphia at <a href="https://engineering.temple.edu/" rel="noopener noreferrer" target="_blank">Temple University</a>, where he is currently a junior.</p><h3>Kyle McGinley</h3><br/><h2><strong>MEMBER GRADE</strong></h2><p>Student member</p><p><strong>UNIVERSITY</strong></p><p>Temple, in Philadelphia</p><p><strong>MAJOR</strong></p><p><strong></strong> Electrical and computer engineering</p><p>The 26-year-old is also a teaching assistant and a research assistant at Temple. His research focuses on applying artificial intelligence to electrical hardware and robotics. He helped build an AI-integrated <a href="https://spectrum.ieee.org/honda-p2-robot-ieee-milestone" target="_self">android</a> companion to assist in-home caregivers.</p><p>Temple recognized McGinley’s efforts last year with its <a href="https://engineering.temple.edu/students/our-students/scholarships#:~:text=The%20College%20of%20Engineering%20at%20Temple%20University,credit%20hours%20in%20engineering%20or%20engineering%20technology" rel="noopener noreferrer" target="_blank">Butz scholarship</a>, which is awarded annually to an electrical and computer engineering undergraduate with an interest in software development, AI development systems, health education software, or a similar field.</p><p>An IEEE <a href="https://students.ieee.org/membership-benefits/" rel="noopener noreferrer" target="_blank">student member</a>, he is active within the university’s student branch.</p><p>“Dr. Brian Butz, the late professor emeritus, dedicated his research to artificial intelligence,” McGinley says. “The scholarship he and his wife Susan established helps allow students to pursue research in AI. Their generous donation has helped fund my research.” </p><h2>Building a robot aide</h2><p>McGinley is a teaching assistant for his digital circuit design course. In a class of 35 students, it can be a struggle for some to digest the professor’s words, he says.</p><p>“My job is to answer students’ questions if they are having problems following the professor’s lecture or are confused about any of the topics,” he says. 
“In the lab, I help students debug code or with hardware issues they have on the FPGA [field-programmable gate array] boards.”</p><p>He also conducts research for the university’s <a href="https://cfl-temple.github.io/" rel="noopener noreferrer" target="_blank">Computer Fusion Lab</a> under the supervision of <a href="https://engineering.temple.edu/directory/li-bai-lbai" rel="noopener noreferrer" target="_blank">IEEE Senior Member Li Bai</a>, a professor of electrical and computer engineering. McGinley writes software programs at the lab.</p><p class="pull-quote">“In school, they don’t teach you how to communicate with people. They only teach you how to remember stuff. Working well with people is one of the most underrated skills that a lot of students don’t understand is important.” </p><p>One such assignment was working with the <a href="https://cph.temple.edu/" target="_blank">Temple School of Social Work at the Barnett College of Public Health</a> to build a robot companion integrated with AI to assist individuals with <a href="https://spectrum.ieee.org/parkinsons-disease-pen" target="_self">Parkinson’s disease</a> and their caregivers.</p><p>“I realized the need for this with my grandmother, when she was taking care of my grandfather,” he says. “It was a lot for her, trying to remember everything.”</p><p>Using the latest software and hardware, he and three classmates rebuilt an older lab robot. They installed an operating system and used <a href="https://spectrum.ieee.org/top-programming-languages-2025" target="_self">Python and C++</a> for its control, perception, and behavior, he says. The students also incorporated Google’s <a href="https://gemini.google.com/" rel="noopener noreferrer" target="_blank">Gemini AI</a> to help with routine tasks such as scheduling medication reminders and setting alarms for upcoming doctor visits.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A small humanoid robot standing on a kitchen counter." class="rm-shortcode" data-rm-shortcode-id="004f8c672a90c8b1cd738b7bc9d7f84a" data-rm-shortcode-name="rebelmouse-image" id="09e49" loading="lazy" src="https://spectrum.ieee.org/media-library/a-small-humanoid-robot-standing-on-a-kitchen-counter.jpg?id=65486403&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Kyle McGinley helped build an AI-integrated android to assist individuals with Parkinson’s disease and their caregivers.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Temple University of Public Health</small></p><p>The AI-integrated android was intended to assist, not replace, the caregivers by handling the mental load of remembering tasks, he says.</p><p>“This was one of the cool things that drew me to working in the robotics field,” he says. “Something where AI could be used to help caregivers do simple tasks.</p><p>“My career ambition after I graduate is to gain real-world experience in the engineering industry to learn skills outside of academia. Long term, I want to do project management or work in a technical lead role, with the primary goal of creating impactful projects that I can be proud of.”</p><h2>The benefits of a student branch</h2><p>McGinley joined <a href="https://www.instagram.com/temple_ieee/" target="_blank">Temple’s IEEE student branch</a> last year after one of his professors offered extra credit to students who did so. 
After attending meetings and participating in a few workshops, he found he really liked the club, he says, adding that he made new friends and enjoyed the camaraderie with other engineering students.</p><p>After the student branch’s board members got to know McGinley better, they asked him to become the club’s historian and manage its social media account. He also helps with event planning, creating and posting fliers, taking pictures, and shooting videos of the gatherings.</p><p>The branch has benefited from McGinley’s involvement, but he says it’s a two-way street.</p><p>“The biggest things I’ve learned are being held accountable and being reliable,” he says. “I am responsible for other people knowing what’s going on.”</p><p>Being an active volunteer has improved his communication skills, he says.</p><p>“Learning to clearly communicate with other people to make sure everyone is on the same page is important,” he says. “In school, they don’t teach you how to communicate with people. They only teach you how to remember stuff. Working well with people is one of the most underrated skills that a lot of students don’t understand is important.”</p><p>He encourages students to join their <a href="https://students.ieee.org/student-branches/" target="_blank">university’s IEEE branch</a>.</p><p>“I know it can be scary because you might not know anyone, but it honestly can’t hurt you; it could actually benefit you,” he says. “Being active is going to help you with a lot of skills that you need.</p><p>“You’ll definitely get opportunities that you would have never known about, like a scholarship or working in the research lab. I would have never gotten these opportunities if I hadn’t shown up. Joining IEEE and being active is the best thing you can do for your career.”</p><p><em>This article was updated on 9 April 2026.</em></p>]]></description><pubDate>Tue, 07 Apr 2026 18:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/temple-university-student-membership-perks</guid><category>Robotics</category><category>Ieee-member-news</category><category>Artificial-intelligence</category><category>Careers</category><category>Student-members</category><category>Temple-university</category><category>Type-ti</category><dc:creator>Kathy Pretz</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-young-white-man-smiling-and-crossing-his-arms-in-a-workshop.jpg?id=65485944&amp;width=980"></media:content></item><item><title>Decentralized Training Can Help Solve AI’s Energy Woes</title><link>https://spectrum.ieee.org/decentralized-ai-training-2676670858</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/illustration-of-several-data-servers-interconnected-across-long-distances.jpg?id=65477795&width=1245&height=700&coordinates=0%2C156%2C0%2C157"/><br/><br/><p> <a href="https://spectrum.ieee.org/topic/artificial-intelligence/" target="_self">Artificial intelligence</a> harbors an enormous <a href="https://spectrum.ieee.org/topic/energy/" target="_self">energy</a> appetite. Such constant cravings are evident in the <a href="https://spectrum.ieee.org/ai-index-2025" target="_self">hefty carbon footprint</a> of the <a href="https://spectrum.ieee.org/tag/data-centers" target="_self">data centers</a> behind the AI boom and the steady increase over time of <a href="https://spectrum.ieee.org/tag/carbon-emissions" target="_self">carbon emissions</a> from training frontier <a href="https://spectrum.ieee.org/tag/ai-models" target="_self">AI models</a>.</p><p>No wonder big tech companies are warming up to <a href="https://spectrum.ieee.org/tag/nuclear-energy" target="_self">nuclear energy</a>, envisioning a future fueled by reliable, carbon-free sources. But while <a href="https://spectrum.ieee.org/nuclear-powered-data-center" target="_self">nuclear-powered data centers</a> might still be years away, some in the research and industry spheres are taking action right now to curb AI’s growing energy demands. They’re tackling training as one of the most energy-intensive phases in a model’s life cycle, focusing their efforts on decentralization.</p><p>Decentralization allocates model training across a network of independent nodes rather than relying on one platform or provider. It allows compute to go where the energy is—be it a dormant server sitting in a research lab or a computer in a <a href="https://spectrum.ieee.org/tag/solar-power" target="_self">solar-powered</a> home. Instead of constructing more data centers that require <a href="https://spectrum.ieee.org/tag/power-grid" target="_self">electric grids</a> to scale up their infrastructure and capacity, decentralization harnesses energy from existing sources, avoiding adding more power into the mix.</p><h2>Hardware in harmony</h2><p>Training AI models is a huge data center sport, synchronized across clusters of closely connected <a href="https://spectrum.ieee.org/tag/gpus" target="_self">GPUs</a>. But as <a href="https://spectrum.ieee.org/mlperf-trends" target="_self">hardware improvements struggle to keep up</a> with the swift rise in the size of <a href="https://spectrum.ieee.org/tag/large-language-models" target="_self">large language models</a>, even massive single data centers are no longer cutting it.</p><p>Tech firms are turning to the pooled power of multiple data centers—no matter their location. 
<a href="https://spectrum.ieee.org/tag/nvidia" target="_self">Nvidia</a>, for instance, launched the <a href="https://developer.nvidia.com/blog/how-to-connect-distributed-data-centers-into-large-ai-factories-with-scale-across-networking/" target="_blank">Spectrum-XGS Ethernet for scale-across networking</a>, which “can deliver the performance needed for large-scale single job AI training and inference across geographically separated data centers.” Similarly, <a href="https://spectrum.ieee.org/tag/cisco" target="_self">Cisco</a> introduced its <a href="https://blogs.cisco.com/sp/the-new-benchmark-for-distributed-ai-networking" target="_blank">8223 router</a> designed to “connect geographically dispersed AI clusters.”</p><p>Other companies are harvesting idle compute in <a href="https://spectrum.ieee.org/tag/servers" target="_self">servers</a>, sparking the emergence of a <a href="https://spectrum.ieee.org/gpu-as-a-service" target="_self">GPU-as-a-Service</a> business model. Take <a href="https://akash.network/" rel="noopener noreferrer" target="_blank">Akash Network</a>, a peer-to-peer <a href="https://spectrum.ieee.org/tag/cloud-computing" target="_self">cloud computing</a> marketplace that bills itself as the “Airbnb for data centers.” Those with unused or underused GPUs in offices and smaller data centers register as providers, while those in need of computing power are considered as tenants who can choose among providers and rent their GPUs.</p><p>“If you look at [AI] training today, it’s very dependent on the latest and greatest GPUs,” says Akash cofounder and CEO <a href="https://www.linkedin.com/in/gosuri" rel="noopener noreferrer" target="_blank">Greg Osuri</a>. “The world is transitioning, fortunately, from only relying on large, high-density GPUs to now considering smaller GPUs.”</p><h2>Software in sync</h2><p>In addition to orchestrating the <a href="https://spectrum.ieee.org/tag/hardware" target="_self">hardware</a>, decentralized AI training also requires algorithmic changes on the <a href="https://spectrum.ieee.org/tag/software" target="_self">software</a> side. This is where <a href="https://cloud.google.com/discover/what-is-federated-learning" rel="noopener noreferrer" target="_blank">federated learning</a>, a form of distributed <a href="https://spectrum.ieee.org/tag/machine-learning" target="_self">machine learning</a>, comes in.</p><p>It starts with an initial version of a global AI model housed in a trusted entity such as a central server. The server distributes the model to participating organizations, which train it locally on their data and share only the model weights with the trusted entity, explains <a href="https://www.csail.mit.edu/person/lalana-kagal" rel="noopener noreferrer" target="_blank">Lalana Kagal</a>, a principal research scientist at <a href="https://www.csail.mit.edu/" rel="noopener noreferrer" target="_blank">MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL)</a> who leads the <a href="https://www.csail.mit.edu/research/decentralized-information-group-dig" rel="noopener noreferrer" target="_blank">Decentralized Information Group</a>. The trusted entity then aggregates the weights, often by averaging them, integrates them into the global model, and sends the updated model back to the participants. This collaborative training cycle repeats until the model is considered fully trained.</p><p>But there are drawbacks to distributing both data and computation. 
<p>But there are drawbacks to distributing both data and computation. The constant back-and-forth exchanges of model weights, for instance, result in high communication costs. Fault tolerance is another issue.</p><p>“A big thing about AI is that every training step is not fault-tolerant,” Osuri says. “That means if one node goes down, you have to restore the whole batch again.”</p><p>To overcome these hurdles, researchers at <a href="https://deepmind.google/" rel="noopener noreferrer" target="_blank">Google DeepMind</a> developed <a href="https://arxiv.org/abs/2311.08105" rel="noopener noreferrer" target="_blank">DiLoCo</a>, a distributed low-communication optimization <a href="https://spectrum.ieee.org/tag/algorithms" target="_self">algorithm</a>. DiLoCo forms what <a href="https://spectrum.ieee.org/tag/google-deepmind" target="_self">Google DeepMind</a> research scientist <a href="https://arthurdouillard.com/" rel="noopener noreferrer" target="_blank">Arthur Douillard</a> calls “islands of compute,” where each island consists of a group of <a href="https://spectrum.ieee.org/tag/chips" target="_self">chips</a>. Different islands can hold different chip types, but chips within an island must all be of the same type. Islands are decoupled from each other, and knowledge is synchronized between them only occasionally. This decoupling means islands can perform training steps independently without communicating as often, and chips can fail without interrupting the remaining healthy chips. However, the team’s experiments found diminishing performance beyond eight islands.</p><p>An improved version, dubbed <a href="https://arxiv.org/abs/2501.18512" rel="noopener noreferrer" target="_blank">Streaming DiLoCo</a>, further reduces the bandwidth requirement by synchronizing knowledge “in a streaming fashion across several steps and without stopping for communicating,” says Douillard. The mechanism is akin to watching a video even if it hasn’t been fully downloaded yet. “In Streaming DiLoCo, as you do computational work, the knowledge is being synchronized gradually in the background,” he adds.</p><p>AI development platform <a href="https://www.primeintellect.ai/" rel="noopener noreferrer" target="_blank">Prime Intellect</a> implemented a variant of the DiLoCo algorithm as a vital component of its 10-billion-parameter <a href="https://www.primeintellect.ai/blog/intellect-1-release" rel="noopener noreferrer" target="_blank">INTELLECT-1</a> model trained across five countries spanning three continents. Upping the ante, <a href="https://0g.ai/" rel="noopener noreferrer" target="_blank">0G Labs</a>, makers of a decentralized AI <a href="https://spectrum.ieee.org/tag/operating-system" target="_self">operating system</a>, <a href="https://0g.ai/blog/worlds-first-distributed-100b-parameter-ai" rel="noopener noreferrer" target="_blank">adapted DiLoCo to train a 107-billion-parameter foundation model</a> across a network of segregated clusters with limited bandwidth. Meanwhile, popular <a href="https://spectrum.ieee.org/tag/open-source" target="_self">open-source</a> <a href="https://spectrum.ieee.org/tag/deep-learning" target="_self">deep learning</a> framework <a href="https://pytorch.org/projects/pytorch/" rel="noopener noreferrer" target="_blank">PyTorch</a> included DiLoCo in its <a href="https://meta-pytorch.org/torchft/" rel="noopener noreferrer" target="_blank">repository of fault-tolerance techniques</a>.</p>
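<p>The island structure can be sketched in the same toy setting. This is our simplification of the idea, not DeepMind’s code: the actual DiLoCo outer step applies Nesterov momentum to the averaged update, while the sketch below just averages. The point to notice is the ratio of local steps to synchronization points:</p><pre><code>import numpy as np

# Simplified DiLoCo-style training (our reading of the idea, not the
# paper's code): islands run many inner steps independently and exchange
# only their accumulated updates at rare outer steps.

def inner_steps(w, data, lr=0.05, H=50):
    """One island runs H local steps before any communication."""
    X, y = data
    w = w.copy()
    for _ in range(H):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(1)
true_w = np.array([1.0, 3.0])
islands = [(X, X @ true_w + rng.normal(scale=0.1, size=100))
           for X in [rng.normal(size=(100, 2)) for _ in range(4)]]

w_global = np.zeros(2)
for _ in range(5):                        # only 5 synchronization points
    deltas = [inner_steps(w_global, d) - w_global for d in islands]
    w_global += np.mean(deltas, axis=0)   # outer step: average the updates

print(w_global)  # near [1, 3] after 250 local steps but just 5 syncs
</code></pre>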
<p>“A lot of engineering has been done by the community to take our DiLoCo paper and integrate it in a system learning over consumer-grade internet,” Douillard says. “I’m very excited to see my research being useful.”</p><h2>A more energy-efficient way to train AI</h2><p>With hardware and software enhancements in place, decentralized AI training is primed to help solve AI’s energy problem. This approach offers the option of training models “in a cheaper, more resource-efficient, more energy-efficient way,” says MIT CSAIL’s Kagal.</p><p>Douillard admits that training methods like DiLoCo are “arguably more complex,” but says “they provide an interesting trade-off of system efficiency.” For instance, you can now use data centers in widely separated locations without needing to build ultrafast bandwidth in between. Douillard adds that fault tolerance is baked in because “the blast radius of a chip failing is limited to its island of compute.”</p><p>Even better, companies can take advantage of existing underutilized processing capacity rather than continuously building new energy-hungry data centers. Betting big on such an opportunity, Akash created its <a href="https://www.youtube.com/watch?v=zAj41xSNPeI" rel="noopener noreferrer" target="_blank">Starcluster program</a>. One of the program’s aims involves tapping into solar-powered homes and employing the desktops and laptops within them to train AI models. “We want to convert your home into a fully functional data center,” Osuri says.</p><p>Osuri acknowledges that participating in Starcluster will not be trivial. Beyond solar panels and devices equipped with consumer-grade GPUs, participants would also need to invest in <a href="https://spectrum.ieee.org/tag/batteries" target="_self">batteries</a> for backup power and redundant internet to prevent downtime. The Starcluster program is figuring out ways to package all these aspects together and make it easier for homeowners, including collaborating with industry partners to subsidize battery costs.</p><p>Back-end work is already underway to enable <a href="https://akash.network/roadmap/aep-60/" rel="noopener noreferrer" target="_blank">homes to participate as providers in the Akash Network</a>, and the team hopes to reach its target by 2027. The Starcluster program also envisions expanding into other solar-powered locations, such as schools and local community sites.</p><p>Decentralized AI training holds much promise to steer AI toward a more environmentally sustainable future. For Osuri, such potential lies in moving AI “to where the energy is instead of moving the energy to where AI is.”</p>]]></description><pubDate>Tue, 07 Apr 2026 14:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/decentralized-ai-training-2676670858</guid><category>Training</category><category>Ai-energy</category><category>Data-center</category><category>Large-language-models</category><dc:creator>Rina Diane Caballar</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/illustration-of-several-data-servers-interconnected-across-long-distances.jpg?id=65477795&amp;width=980"></media:content></item><item><title>Why AI Systems Fail Quietly</title><link>https://spectrum.ieee.org/ai-reliability</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-series-of-135-green-dots-slowly-transition-from-bright-green-to-black.png?id=65461614&width=1245&height=700&coordinates=0%2C55%2C0%2C55"/><br/><br/><p>In late-stage testing of a distributed AI platform, engineers sometimes encounter a perplexing situation: Every monitoring dashboard reads “healthy,” yet users report that the system’s decisions are slowly becoming wrong.</p><div class="rm-embed embed-media"><iframe height="110px" id="noa-web-audio-player" src="https://embed-player.newsoveraudio.com/v4?key=q5m19e&id=https://spectrum.ieee.org/ai-reliability&bgColor=F5F5F5&color=1b1b1c&playColor=1b1b1c&progressBgColor=F5F5F5&progressBorderColor=bdbbbb&titleColor=1b1b1c&timeColor=1b1b1c&speedColor=1b1b1c&noaLinkColor=556B7D&noaLinkHighlightColor=FF4B00&feedbackButton=true" style="border: none" width="100%"></iframe></div><p><span>Engineers are trained to recognize </span><a href="https://spectrum.ieee.org/amp/it-management-software-failures-2674305315" target="_blank">failure</a><span> in familiar ways: a service crashes, a sensor stops responding, a constraint violation triggers a shutdown. Something breaks, and the system tells you. But a growing class of software failures looks very different. The system keeps running, logs appear normal, and monitoring dashboards stay green. Yet the system’s behavior quietly drifts away from what it was designed to do.</span></p><p>This pattern is becoming more common as autonomy spreads across software systems. Quiet failure is emerging as one of the defining engineering challenges of autonomous systems because correctness now depends on coordination, timing, and feedback across entire systems.</p><h2>When Systems Fail Without Breaking</h2><p>Consider a hypothetical enterprise AI assistant designed to summarize regulatory updates for financial analysts. The system retrieves documents from internal repositories, synthesizes them using a language model, and distributes summaries across internal channels.</p><p>Technically, everything works. The system retrieves valid documents, generates coherent summaries, and delivers them without issue.</p><p>But over time, something slips. Maybe an updated document repository isn’t added to the retrieval pipeline. The assistant keeps producing summaries that are coherent and internally consistent, but they’re increasingly based on obsolete information. Nothing crashes, no alerts fire, every component behaves as designed. The problem is that the overall result is wrong.</p><p>From the outside, the system looks operational. From the perspective of the organization relying on it, the system is quietly failing.</p><h2>The Limits of Traditional Observability</h2><p>One reason quiet failures are difficult to detect is that traditional systems measure the wrong signals. Operational dashboards track uptime, latency, and error rates, the core elements of modern <a href="https://www.ibm.com/think/topics/observability" target="_blank">observability</a>. These metrics are well-suited for transactional applications where requests are processed independently, and correctness can often be verified immediately.</p><p>Autonomous systems behave differently. Many AI-driven systems operate through continuous reasoning loops, where each decision influences subsequent actions. Correctness emerges not from a single computation but from sequences of interactions across components and over time. A retrieval system may return contextually inappropriate and technically valid information. 
A <a href="https://spectrum.ieee.org/ai-agent-benchmarks" target="_blank">planning agent</a> may generate steps that are locally reasonable but globally unsafe. A distributed decision system may execute correct actions in the wrong order.</p><p>None of these conditions necessarily produces errors. From the perspective of conventional observability, the system appears healthy. From the perspective of its intended purpose, it may already be failing.</p><h2>Why Autonomy Changes Failure</h2><p>The deeper issue is architectural. Traditional software systems were built around discrete operations: a request arrives, the system processes it, and the result is returned. Control is episodic and externally initiated by a user, scheduler, or external trigger.</p><p>Autonomous systems change that structure. Instead of responding to individual requests, they observe, reason, and act continuously. AI agents maintain context across interactions. Infrastructure systems adjust resources in real time. Automated workflows trigger additional actions without human input.</p><p>In these systems, correctness depends less on whether any single component works and more on coordination across time.</p><p>Distributed-systems engineers have long wrestled with issues of coordination. But this is coordination of a new kind. It’s no longer about things like keeping data consistent across services. It’s about ensuring that a stream of decisions—made by models, reasoning engines, planning algorithms, and tools, all operating with partial context—adds up to the right outcome.</p><p>A modern AI system may evaluate thousands of signals, generate candidate actions, and execute them across a distributed infrastructure. Each action changes the environment in which the next decision is made. Under these conditions, small <a href="https://spectrum.ieee.org/ai-mistakes-schneier" target="_blank">mistakes</a> can compound. A step that is locally reasonable can still push the system further off course.</p><p>Engineers are beginning to confront what might be called behavioral reliability: whether an autonomous system’s actions remain aligned with its intended purpose over time.</p><h2>The Missing Layer: Behavioral Control</h2><p>When organizations encounter quiet failures, the initial instinct is to improve monitoring: deeper logs, better tracing, more analytics. Observability is essential, but it only shows that the behavior has already diverged—it doesn’t correct it.</p><p>Quiet failures require something different: the ability to shape system behavior while it is still unfolding. In other words, autonomous systems increasingly need control architectures, not just monitoring.</p><p>Engineers in industrial domains have long relied on <a href="https://en.wikipedia.org/wiki/Supervisory_control" target="_blank">supervisory control systems</a>. These are software layers that continuously evaluate a system’s status and intervene when behavior drifts outside safe bounds. Aircraft flight-control systems, power-grid operations, and large manufacturing plants all rely on such supervisory loops. Software systems historically avoided them because most applications didn’t need them. Autonomous systems increasingly do.</p><p>Behavioral monitoring in AI systems focuses on whether actions remain aligned with intended purpose, not just whether components are functioning. 
<p>Supervisory control builds on these signals by intervening while the system is running. A supervisory layer checks whether ongoing actions remain within acceptable bounds and can respond by delaying or blocking actions, limiting the system to safer operating modes, or routing decisions for review. In more advanced setups, it can adjust behavior in real time—for example, by restricting data access, tightening constraints on outputs, or requiring extra confirmation for high-impact actions.</p><p>Together, these approaches turn reliability into an active process. Systems don’t just run; they are continuously checked and steered. Quiet failures may still occur, but they can be detected earlier and corrected while the system is operating.</p><h2>A Shift in Engineering Thinking</h2><p>Preventing quiet failures requires a shift in how engineers think about reliability: from ensuring components work correctly to ensuring system behavior stays aligned over time. Rather than assuming that correct behavior will emerge automatically from component design, engineers must increasingly treat behavior as something that needs active supervision.</p><p>As AI systems become more autonomous, this shift will likely spread across many domains of computing, including cloud infrastructure, robotics, and large-scale decision systems. The hardest engineering challenge may no longer be building systems that work, but ensuring that they continue to do the right thing over time.</p>]]></description><pubDate>Tue, 07 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/ai-reliability</guid><category>Software-failure</category><category>Software-reliability</category><category>Software-engineering</category><category>Cloud-computing</category><category>Autonomous-systems</category><dc:creator>Varun Raj</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/a-series-of-135-green-dots-slowly-transition-from-bright-green-to-black.png?id=65461614&amp;width=980"></media:content></item><item><title>Over-the-Air Computation Uses Radio Interference to Crunch Data</title><link>https://spectrum.ieee.org/wireless-network-over-air-computation</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/abstract-wavy-lines-and-geometric-circles-forming-a-colorful-fluid-layered-pattern.png?id=65476058&width=1245&height=700&coordinates=0%2C770%2C0%2C771"/><br/><br/><p><strong>Picture a highway with</strong> networked autonomous cars driving along it. On a serene, cloudless day, these cars need only exchange thimblefuls of data with one another. Now picture the same stretch in a sudden snow squall: The cars rapidly need to share vast amounts of essential new data about slippery roads, emergency braking, and changing conditions.</p><p>These two very different scenarios involve vehicle networks with very different computational loads. Eavesdropping on network traffic using a ham radio, you wouldn’t hear much static on the line on a clear, calm day. On the other hand, sudden whiteout conditions on a wintry day would sound like a cacophony of sensor readings and network chatter.</p><p>Normally this cacophony would mean two simultaneous problems: congested communications and a rising demand for computing power to handle all the data. But what if the network itself could expand its processing capabilities with every rising decibel of chatter and with every sensor’s chirp?</p><p>Traditional wireless networks treat communication as separate from computation. First you move data, then you process it. However, an emerging new paradigm called over-the-air computation (OAC) could fundamentally change the game. First <a href="https://bobaknazer.github.io/files/bn_mg_allerton05.pdf" target="_blank">proposed in 2005</a> and recently <a href="https://ieeexplore.ieee.org/abstract/document/11358822" target="_blank">developed and prototyped</a> by a <a href="https://arxiv.org/abs/2311.06829" target="_blank">number of teams</a> around the world, <a href="https://ieeexplore.ieee.org/document/11119744" target="_blank">including ours</a>, OAC combines communication and computation into a single framework. This means that an OAC sensor network—whether shared among <a href="https://spectrum.ieee.org/tag/autonomous-vehicles" target="_self">autonomous vehicles</a>, <a href="https://spectrum.ieee.org/tag/internet-of-things" target="_self">Internet-of-Things</a> sensors, <a href="https://spectrum.ieee.org/tag/smart-home" target="_self">smart-home</a> devices, or <a href="https://spectrum.ieee.org/tag/smart-cities" target="_self">smart-city</a> infrastructure—can carry some of the network’s computing burden as conditions demand.</p><p>The idea takes advantage of a basic physical fact of electromagnetic radiation: When multiple devices transmit simultaneously, their wireless signals naturally combine in the air. Normally, such cross talk is seen as interference, which radios are designed to suppress—especially digital radios with their error-correcting schemes and inherent resistance to low-level noise.</p><p><span>But if we carefully design the transmissions, cross talk can enable a wireless network to directly perform some calculations, such as a sum or an average. 
</span><a href="https://ieeexplore.ieee.org/document/9663107" target="_blank">Some prototypes today</a><span> do this with </span><a href="https://arxiv.org/abs/2212.06596" target="_blank">analog-style signaling</a><span> on otherwise digital radios—so that the superimposed waveforms represent numbers that have been added before digital signal processing takes place.</span></p><p>Researchers are also beginning to explore <a href="https://arxiv.org/abs/2405.15969" target="_blank">digital, over-the-air computation schemes</a>, which embed the same ideas <a href="https://dl.acm.org/doi/abs/10.1109/TWC.2025.3540455" target="_blank">into digital formats</a>, ultimately allowing the prototype schemes to coexist with today’s digital radio protocols. These various over-the-air computation techniques can help networks scale gracefully, enabling new classes of real-time, data-intensive services while making more efficient use of wireless spectrum.</p><p>OAC, in other words, turns signal interference from a problem into a feature, one that can help wireless systems support massive growth.</p><h2>Reimagining radio interference as infrastructure</h2><p>For<em> </em>decades, engineers designed radio communications protocols with <a href="https://en.wikipedia.org/wiki/Channel_access_method" target="_blank">one overriding goal</a>: to isolate each signal and recover each message cleanly. Today’s networks face a different set of pressures. They must coordinate large groups of devices on shared tasks—such as AI model training or combining disparate sensor readings, also known as <a href="https://spectrum.ieee.org/tag/sensor-fusion" target="_self">sensor fusion</a>—while exchanging as little raw data as possible, to improve both efficiency and privacy. For these reasons, a new approach to transmitting and receiving data may be worth considering, one that doesn’t rely on collecting and storing every individual device’s contributions.</p><p>By turning interference into computation, OAC transforms the wireless medium from a contested battlefield into a collaborative workspace. This paradigm shift has far-reaching consequences: Signals no longer compete for isolation; they cooperate to achieve shared outcomes. OAC cuts through layers of digital processing, reduces latency, and lowers energy consumption.</p><p>Even very simple operations, such as addition, can be the building blocks of surprisingly powerful computations. Many complex processes can be broken down into combinations of simpler pieces, much like how a rich sound can be re-created by combining a few basic tones. By carefully shaping what devices transmit and how the result is interpreted at the receiver, the wireless channel running OAC can carry out other calculations beyond addition. In practice, this means that with the right design, wireless signals can compute a number of key functions that modern algorithms rely on.</p><h3>THE PROBLEM (TRADITIONAL APPROACH) </h3><br/><img alt="Diagram of cars at mixed speeds with complex dashed feedback loops between them" class="rm-shortcode" data-rm-shortcode-id="bfb6f90a49f60c28d337ca50c3da7bb5" data-rm-shortcode-name="rebelmouse-image" id="774d5" loading="lazy" src="https://spectrum.ieee.org/media-library/diagram-of-cars-at-mixed-speeds-with-complex-dashed-feedback-loops-between-them.png?id=65476280&width=980"/><h3></h3><br/><p>For instance, many key tasks in modern networks don’t require the logging and storage of every individual network transmission. 
Rather, the goal is to infer properties about aggregate patterns of network traffic—<a href="https://ieeexplore.ieee.org/document/4118472" target="_blank">reaching agreement or identifying what matters most</a> about the traffic. <a href="https://lamport.azurewebsites.net/pubs/paxos-simple.pdf" target="_blank">Consensus algorithms</a> rely on majority voting to <a href="https://openreview.net/pdf?id=BJxhijAcY7" target="_blank">ensure reliable decisions,</a> even when some devices fail. Artificial intelligence systems depend on <a href="https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf" target="_blank">matrix reduction and simplification operations</a> such as “<a href="https://en.wikipedia.org/wiki/Pooling_layer#Max_pooling" target="_blank">max pooling</a>” (keeping only peak values) to <a href="https://pages.ucsd.edu/~ztu/publication/pami_gpooling.pdf" target="_blank">extract the most useful signals</a> from noisy data.</p><p>In smart cities and smart grids, what <a href="https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1736081" target="_blank">matters most</a> is often not individual readings but the <a href="https://www.sciencedirect.com/science/article/abs/pii/S1364032123006159?via%3Dihub" rel="noopener noreferrer" target="_blank">distribution</a>. How many devices report each traffic condition? What is the range of demand across neighborhoods? These are histogram questions—summaries of the device counts per category.</p><p>With type-based multiple access (TBMA), an over-the-air computation <a href="https://ieeexplore.ieee.org/document/1576988" rel="noopener noreferrer" target="_blank">method we use</a>, devices reporting a given condition transmit together over a shared channel. Their signals add up, and the receiver sees only the total signal strength per category. In a single transmission, the entire histogram emerges without ever identifying individual devices. And the more devices there are, the better the estimate. The result is greater spectrum efficiency, with lower latency and scalable, privacy-friendly operations—all from letting the wireless medium do the aggregating and counting.</p><p>It’s easy to imagine how analog values transmitted over the air could be summed via superposition. The amplitudes from different signals add together, so the values those amplitudes represent also simply add together. The more challenging question concerns preserving that additive magic, but with <em>digital </em>signals.</p><p>Here’s how OAC does it. Consider, for instance, one TBMA approach for a network of sensors that gives each possible sensor reading its own dedicated frequency channel. Every sensor on the network that reads “4” transmits on frequency four; every sensor that reads “7” transmits on frequency seven. When multiple devices share the same reading, their amplitudes combine. The stronger the combined signal at a given frequency, the more devices there are reporting that particular value.</p><p>A <a href="https://en.wikipedia.org/wiki/Orthogonal_frequency-division_multiplexing" rel="noopener noreferrer" target="_blank">receiver equipped with a bank of filters tuned to each frequency</a> reads out a count of votes for every possible sensor value. In a single, simultaneous transmission, the whole network has reported its state.</p>
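<p>This frequency-bin scheme is simple enough to simulate in a few lines. The sketch below is deliberately idealized (perfect synchronization, no noise, no fading, and unit-power transmitters are all simplifying assumptions): each device transmits a tone on the bin matching its reading, the channel sums the waveforms, and a discrete Fourier transform plays the role of the receiver’s filter bank:</p><pre><code>import numpy as np

# Idealized TBMA simulation (assumes perfect sync, no noise or fading).
# Each device transmits a unit tone on the frequency bin that matches
# its sensor reading; superposition in the channel does the counting.

N = 64                                  # samples per transmission window
t = np.arange(N)
readings = [4, 7, 7, 2, 7, 4]           # six devices' sensor values

received = np.zeros(N, dtype=complex)
for r in readings:                      # the "air" adds every waveform
    received += np.exp(2j * np.pi * r * t / N)

spectrum = np.fft.fft(received) / N     # receiver filter bank (DFT)
histogram = np.round(np.abs(spectrum)).astype(int)

print(histogram[:10])  # bin 2 -> 1 device, bin 4 -> 2, bin 7 -> 3
</code></pre>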
<p>It might seem paradoxical—digital computation riding atop what appears to be an analog physical effect. But this is also true of all “digital” radio. A Wi-Fi transmitter does not launch ones and zeroes into the air; it modulates electromagnetic waves whose amplitudes and phases encode digital data. The “digital” label ultimately refers to the information layer, not the physics. What makes OAC digital, in the same sense, is that the values being computed—each sensor reading, each frequency-bin count—are discrete and quantized from the start. And because they are discrete, the same <a href="https://arxiv.org/abs/0908.2119" rel="noopener noreferrer" target="_blank">error-correction machinery</a> that has made digital communications robust for decades can be applied here too.</p><p>Synchronization is where OAC’s demands diverge most sharply from digital wireless conventions. Many OAC variants today require something akin to a shared clock at nanosecond precision: Every signal’s phase must be synchronized, or the superposition runs the risk of collapsing into destructive interference. While TBMA relaxes this burden a bit—devices need only share a time window—real engineering challenges remain before over-the-air computation is ready for the mobile world.</p><h2>How will over-the-air computation work in the field?</h2><p>Over-the-air computation has in recent years moved from theory to initial proofs-of-concept and network test runs. Our research teams in South Carolina and Spain have built working prototypes that deliver repeatable results—with no cables and no external timing sources such as GPS-locked references. All synchronization is handled within the radios themselves.</p><p>Our team at the University of South Carolina (led by Sahin) started with off-the-shelf <a href="https://spectrum.ieee.org/hardware-for-your-software-radio" target="_self">software-defined radios</a>—Analog Devices’ <a href="https://www.analog.com/en/resources/evaluation-hardware-and-software/evaluation-boards-kits/adalm-pluto.html#eb-overview" rel="noopener noreferrer" target="_blank">Adalm-Pluto</a>. We modified the <a href="https://spectrum.ieee.org/painless-fpga-programming" target="_self">field-programmable gate array</a> hardware inside each radio so it can respond to a trigger signal transmitted from another radio. This simple hack enabled simultaneous transmission, a core requirement for OAC. Our setup used five radios acting as edge devices and one acting as a base station. The task involved training a neural network to perform image recognition over the air. Our system, whose results we <a href="https://ieeexplore.ieee.org/document/10008778" rel="noopener noreferrer" target="_blank">first reported in 2022</a>, achieved 95 percent accuracy in image recognition without ever moving raw data across the network.</p><h3>THE OVER-THE-AIR COMPUTATION (OAC) APPROACH</h3><br/><img alt="Illustration of cars adjusting speed with colored dashed lines indicating traffic signal control." 
class="rm-shortcode" data-rm-shortcode-id="05f47093d9693ac5b148c8e62fbb1374" data-rm-shortcode-name="rebelmouse-image" id="eb61f" loading="lazy" src="https://spectrum.ieee.org/media-library/illustration-of-cars-adjusting-speed-with-colored-dashed-lines-indicating-traffic-signal-control.png?id=65487320&width=980"/><h3></h3><br/><p>We also <a href="https://mentor.ieee.org/802.11/dcn/22/11-22-1483-01-aiml-wireless-for-ml-over-the-air-computation.pptx" target="_blank">demonstrated our initial OAC setup</a> at a March 2025 <a href="https://1.ieee802.org/march-2025-plenary-session-in-atlanta-ga-usa/" target="_blank">IEEE 802.11 working group meeting,</a> where an <a href="https://www.ieee802.org/11/Reports/aiml_update.htm" target="_blank">IEEE committee was studying AI and machine learning capabilities</a> for future Wi-Fi standards. As we showed, OAC’s road ahead doesn’t necessarily require reinventing wireless technology. Rather, it can also build on and repurpose existing protocols already in Wi-Fi and 5G.</p><p>However, before OAC can become a routine feature of commercial wireless systems, networks must provide finer-tuned coordination of timing and signal power levels. Mobility is a difficult problem, too. When mobile devices move around, phase synchronization degrades quickly, and computational accuracy can suffer. Present-day OAC tests work in controlled lab environments. But making them robust in dynamic, real-world settings—vehicles on highways, sensors scattered across cities—remains a new frontier for this emerging technology.</p><p>Both of our teams are now scaling up our prototypes and demonstrations. We are together aiming to understand how over-the-air computation performs as the number of devices increases beyond lab-bench scales. Turning prototypes and test-beds into production systems for autonomous vehicles and smart cities will require anticipating tomorrow’s mobility and synchronization problems—and no doubt a range of other challenges down the road.</p><h2>Where OAC goes from here</h2><p>To realize the technological ambitions of over-the-air computation, nanosecond timing and exquisite RF signal design will be crucial. Fortunately, recent engineering advances have made substantial progress in both of these fields.</p><p>Because OAC demands waveform superposition, it benefits from tight coordination in time, frequency, phase, and amplitude among RF transmitters. Such requirements build naturally on decades of work in wireless communication systems designed for shared access. Modern networks <a href="https://www.mdpi.com/2673-4001/5/1/4" target="_blank">already synchronize large numbers of devices</a> using <a href="https://ieeexplore.ieee.org/document/10637136" rel="noopener noreferrer" target="_blank">high-precision timing </a>and <a href="https://peerj.com/articles/cs-2687/" rel="noopener noreferrer" target="_blank">uplink coordination</a>.</p><p>OAC uses the same synchronization techniques already in cellular and Wi-Fi systems. But to actually run over-the-air computations, more precision still will be needed. 
<a href="https://ieeexplore.ieee.org/document/4657149" rel="noopener noreferrer" target="_blank">Power control</a>, <a href="https://ieeexplore.ieee.org/document/5118192" rel="noopener noreferrer" target="_blank">gain adjustment</a>, and <a href="https://link.springer.com/article/10.1186/s13638-016-0670-9" rel="noopener noreferrer" target="_blank">timing calibration</a> are <a href="https://ieeexplore.ieee.org/document/11016910" rel="noopener noreferrer" target="_blank">standard tools</a> today. We expect that engineers will further refine these existing methods to begin to meet OAC’s more stringent accuracy demands.</p><h3>THE OAC RESULT </h3><br/><img alt="OAC result bar chart: slow 1 (blue), medium 3 (green), fast 1 (red)." class="rm-shortcode" data-rm-shortcode-id="3042c6dc72ca2f66e275f68504ac4f6a" data-rm-shortcode-name="rebelmouse-image" id="b72bb" loading="lazy" src="https://spectrum.ieee.org/media-library/oac-result-bar-chart-slow-1-blue-medium-3-green-fast-1-red.png?id=65476295&width=980"/><p><span>In some cases, in fact, imperfect timing standards may be all that’s needed. Designs and emerging standards in 5G and 6G wireless systems today use </span><a href="https://ieeexplore.ieee.org/abstract/document/9834918" target="_blank">clever encoding that tolerates imperfect synchronization</a><span>. Minor timing errors, </span><a href="https://en.wikipedia.org/wiki/Frequency_drift" target="_blank">frequency drift</a><span>, and signal overlap can in some cases still work capably within an OAC protocol, we anticipate. Instead of fighting messiness, over-the-air computation may sometimes simply be able to roll with it.</span></p><p>Another challenge ahead concerns shifting processing to the transmitter. Instead of the receiver trying to clean up overlapping signals, a better and more efficient approach would involve each transmitter fixing its own signal before sending. Such “pre-compensation” techniques are <a href="https://ieeexplore.ieee.org/document/4350229" target="_blank">already used in MIMO technology</a> (<a href="https://arxiv.org/abs/1902.07678" target="_blank">multi-antenna systems</a> in modern <a href="https://standards.ieee.org/beyond-standards/the-evolution-of-wi-fi-technology-and-standards/" target="_blank">Wi-Fi</a> and cellular networks). OAC would just be repurposing techniques that have already been developed for 5G and 6G technologies.</p><p>Materials science can also help OAC efforts ahead. New generations of <a href="https://spectrum.ieee.org/metamaterials-could-solve-one-of-6gs-big-problems" target="_self">reconfigurable intelligent surfaces</a> shape signals via tiny adjustable elements in the antenna. The surfaces catch radio signals and reshape them as they bounce around. Reconfigurable surfaces can <a href="https://ieeexplore.ieee.org/document/9140329/" target="_blank">strengthen useful signals, eliminate interference, and synchronize wavefront arrivals</a> that would otherwise be out of sync. OAC stands to benefit from these and other emerging capabilities that intelligent surfaces will provide.</p><p>At the system level, OAC will represent a fundamental shift in wireless network system design. Wireless engineers have <a href="https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_avoidance" target="_blank">traditionally tried to avoid</a> designing devices that transmit at the same time. 
<p>Materials science can also help OAC efforts in the years ahead. New generations of <a href="https://spectrum.ieee.org/metamaterials-could-solve-one-of-6gs-big-problems" target="_self">reconfigurable intelligent surfaces</a> shape signals via tiny adjustable elements in the antenna. The surfaces catch radio signals and reshape them as they bounce around. Reconfigurable surfaces can <a href="https://ieeexplore.ieee.org/document/9140329/" target="_blank">strengthen useful signals, eliminate interference, and synchronize wavefront arrivals</a> that would otherwise be out of sync. OAC stands to benefit from these and other emerging capabilities that intelligent surfaces will provide.</p><p>At the system level, OAC will represent a fundamental shift in wireless network system design. Wireless engineers have <a href="https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_avoidance" target="_blank">traditionally tried to avoid</a> designing devices that transmit at the same time. But over-the-air systems will flip the old, familiar design standards on their head.</p><p>One might object that OAC stands to upend decades of existing wireless signal standards that have always presumed data pipes to be data pipes only—not microcomputers as well. Yet we do not anticipate much difficulty merging OAC with existing wireless standards. In a sense, in fact, the <a href="https://www.ieee802.org/11/" target="_blank">IEEE 802.11</a> and <a href="https://www.3gpp.org/" target="_blank">3GPP</a> (3rd Generation Partnership Project) standards bodies have already shown the way.</p><p>A network can set aside certain brief time windows or narrow slices of bandwidth for over‑the‑air computation, and use the rest for ordinary data. From the radio’s point of view, OAC just becomes another operating mode that is turned on when needed and left off the rest of the time.</p><p>Over the past decade, both the IEEE and 3GPP have <a href="https://ieeexplore.ieee.org/document/6515173" target="_blank">integrated once-experimental technologies</a> into their wireless standards—for example, <a href="https://ieeexplore.ieee.org/document/6732923" target="_blank">millimeter-wave mobile communications</a>, <a href="https://link.springer.com/article/10.1155/2011/496763" target="_blank">multiuser MIMO</a>, <a href="https://ieeexplore.ieee.org/document/8458146" target="_blank">beamforming</a>, and <a href="https://ieeexplore.ieee.org/document/7926923" target="_blank">network slicing</a>—by defining each new technological advance as an optional feature. OAC, we suggest, can also operate alongside conventional wireless data traffic as an optional service. Because OAC places high demands on timing and accuracy, networks will need the ability to enable or disable over‑the‑air computation on a per‑application basis.</p><p>With continued progress, OAC will evolve from lab prototype to standardized wireless capability through the 2020s and into the decade ahead. In the process, the wireless medium will transform from a passive data carrier into an active computational partner—providing essential infrastructure for the real-time intelligent systems that future wireless technologies will demand.</p><p>So on that snowy highway sometime in the 2030s, vehicles and sensors won’t wait for permission to think together. Using the emerging over-the-air computation protocols that we’re helping to pioneer, simultaneous computation will be the new default. The networks will work as one.<span class="ieee-end-mark"></span></p><p><em>This article appears in the May 2026 print issue as “<span>Teaching </span><span>Radio Waves </span>to Compute.”</em></p>]]></description><pubDate>Tue, 07 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/wireless-network-over-air-computation</guid><category>Wireless-networks</category><category>Network-infrastructure</category><category>Autonomous-vehicles</category><category>Smart-cities</category><category>Interference</category><category>Computational-resources</category><dc:creator>Ana I. Pérez-Neira</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/abstract-wavy-lines-and-geometric-circles-forming-a-colorful-fluid-layered-pattern.png?id=65476058&amp;width=980"></media:content></item><item><title>AI Is Insatiable</title><link>https://spectrum.ieee.org/high-bandwidth-memory-shortage</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/robot-hand-catching-falling-computer-chips-from-an-open-snack-bag-in-pop-art-style.png?id=65425799&width=1245&height=700&coordinates=0%2C228%2C0%2C228"/><br/><br/><p>While browsing our website a few weeks ago, I stumbled upon “<a href="https://spectrum.ieee.org/dram-shortage" target="_self">How and When the Memory Chip Shortage Will End</a>” by Senior Editor Samuel K. Moore. His analysis focuses on the current DRAM shortage caused by AI hyperscalers’ ravenous appetite for memory, a major constraint on the speed at which large language models run. Moore provides a clear explanation of the shortage, particularly for high bandwidth memory (HBM).</p><p>As we and the rest of the tech media have documented, AI is a resource hog. AI <a href="https://spectrum.ieee.org/data-center-sustainability-metrics" target="_self">electricity consumption</a> could account for up to 12 percent of all U.S. power by 2028. <a href="https://spectrum.ieee.org/ai-energy-use" target="_self">Generative AI queries</a> consumed 15 terawatt-hours in 2025 and are projected to consume 347 TWh by 2030. <a href="https://spectrum.ieee.org/data-centers-pollution" target="_self">Water consumption for cooling AI data centers</a> is predicted to double or even quadruple by 2028 compared to 2023.</p><p>But Moore’s reporting shines a light on an obscure corner of the AI boom. <a href="https://spectrum.ieee.org/processing-in-dram-accelerates-ai" target="_self">HBM</a> is a particular type of memory product tailor-made to serve AI processors. Makers of those processors, notably Nvidia and AMD, are demanding more and more memory for each of their chips, driven by the needs and wants of firms like Google, Microsoft, OpenAI, and Anthropic, which are underwriting an unprecedented buildout of data centers. And some of these facilities are colossal: You can read about the engineering challenges of building Meta’s mind-boggling 5-gigawatt Hyperion site in Louisiana, in “<a href="https://spectrum.ieee.org/5gw-data-center" target="_blank">What Will It Take to Build the World’s Largest Data Center?</a>”</p><p>We realized that Moore’s HBM story was both important and unique, and so we decided to include it in this issue, with some updates since the original published on 10 February. We paired it with a recent story by Contributing Editor Matthew S. Smith exploring how the memory-chip shortage is driving up the price of low-cost computers like the <a href="https://www.raspberrypi.com/" rel="noopener noreferrer" target="_blank">Raspberry Pi</a>. The result is “<a href="https://spectrum.ieee.org/dram-shortage" target="_blank">AI Is a Memory Hog</a>.”</p><p>The big question now is, When will the shortage end? Price pressure caused by AI hyperscaler demand on all kinds of consumer electronics is being masked by stubborn inflation combined with a perpetually shifting tariff regime, at least here in the United States. So I asked Moore what indicators he’s looking for that would signal an easing of the memory shortage.</p><p>“On the supply side, I’d say that if any of the big three HBM companies—<a href="https://www.micron.com/" rel="noopener noreferrer" target="_blank">Micron</a>, <a href="https://semiconductor.samsung.com/dram/" rel="noopener noreferrer" target="_blank">Samsung</a>, and <a href="https://www.skhynix.com/" rel="noopener noreferrer" target="_blank">SK Hynix</a>—say that they are adjusting the schedule of the arrival of new production, that’d be an important signal,” Moore told me. 
“On the demand side, it will be interesting to see how tech companies adapt up and down the supply chain. Data centers might steer toward hardware that sacrifices some performance for less memory. Startups developing all sorts of products might pivot toward creative redesigns that use less memory. Constraints like shortages can lead to interesting technology solutions, so I’m looking forward to covering those.”</p><p><span>To be sure you don’t miss any of Moore’s analysis of this topic and to stay current on the entire spectrum of technology development, <a href="https://spectrum.ieee.org/newsletters/" target="_blank">sign up for our weekly newsletter, Tech Alert.</a></span></p>]]></description><pubDate>Mon, 06 Apr 2026 14:22:58 +0000</pubDate><guid>https://spectrum.ieee.org/high-bandwidth-memory-shortage</guid><category>Semiconductors</category><category>Dram</category><category>Memory</category><category>Chips</category><category>Ai</category><category>Data-centers</category><dc:creator>Harry Goldstein</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/robot-hand-catching-falling-computer-chips-from-an-open-snack-bag-in-pop-art-style.png?id=65425799&amp;width=980"></media:content></item><item><title>What Happened When We Set Up a Robotics Lab in a Mall</title><link>https://spectrum.ieee.org/boston-dynamics-spot-interaction</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/families-watch-a-colorful-robotic-dog-demo-at-a-robotics-and-ai-institute-exhibit.jpg?id=65453180&width=1245&height=700&coordinates=0%2C62%2C0%2C63"/><br/><br/><p>Building the next generation of robots for successful integration into our homes, offices, and factories is more than just solving the hardware and software problems—we also need to understand how they will be perceived and how they can work effectively with people in those spaces.</p> <p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <a href="https://rai-inst.com/"></a><a class="shortcode-media-lightbox__toggle shortcode-media-controls__button material-icons" title="Select for lightbox">aspect_ratio</a><a href="https://rai-inst.com/" target="_blank"><img alt="Robotics and AI Institute logo with text about post originally appearing there" class="rm-shortcode" data-rm-shortcode-id="09961581414b810cff45f77932185cb3" data-rm-shortcode-name="rebelmouse-image" id="89ff0" loading="lazy" src="https://spectrum.ieee.org/media-library/robotics-and-ai-institute-logo-with-text-about-post-originally-appearing-there.png?id=65453513&width=980"/></a> </p><p>In summer 2025, <a href="https://spectrum.ieee.org/boston-dynamics-ai-institute-hyundai" target="_blank">RAI Institute</a> set up a free pop-up robot experience in the CambridgeSide mall, designed to let people experience state-of-the-art robotics first hand. While news stories about robots and AI are common, with some being overly critical and some overly optimistic, most people have not encountered robots in the flesh (or metal) as it were. With no direct experience, their opinions are largely shaped by pop culture and social media, both of which are more focused on sensational stories instead of accurate information about how the robots might be used effectively and where the technology still falls short. Our goal with the pop-up was twofold: first, to give people an opportunity to see robots that they would otherwise not have a chance to experience; and second, to better understand how the public feels about interacting with these robots.</p><h2>Designing a Robot Experience for the General Public</h2><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Three experimental robotic prototypes displayed behind barriers in a bright gallery." class="rm-shortcode" data-rm-shortcode-id="a1fe59976ca74226f29b65137649c4d4" data-rm-shortcode-name="rebelmouse-image" id="c9163" loading="lazy" src="https://spectrum.ieee.org/media-library/three-experimental-robotic-prototypes-displayed-behind-barriers-in-a-bright-gallery.webp?id=65453673&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Some earlier version legged robots, built by the RAI Institute’s Executive Director, Marc Raibert</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">RAI Institute</small></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Red robot dog and electric bike displayed in glass cases inside a modern mall." 
class="rm-shortcode" data-rm-shortcode-id="f0c655444535aac7e11e20510c8bbbae" data-rm-shortcode-name="rebelmouse-image" id="6b96a" loading="lazy" src="https://spectrum.ieee.org/media-library/red-robot-dog-and-electric-bike-displayed-in-glass-cases-inside-a-modern-mall.webp?id=65453671&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The ANYmal by ANYrobotics (left) and a previous model of the RAI Institute’s UMV (right)</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">RAI Institute</small></p><p><span>The pop-up space had two areas: a museum area where people could see historical and modern robots, including some <a href="https://spectrum.ieee.org/marc-raibert-boston-dynamics-instutute" target="_blank">RAI Institute</a> builds like the </span><a href="https://rai-inst.com/resources/blog/designing-wheeled-robotic-systems/" target="_blank">UMV</a>,<span> and an interactive experience called “Drive-a-Spot.” This area was a driving arena where anyone who came by could take the controls of a Spot quadruped, one of the more recognizable, commercially available robots today.</span></p><p>The guest robot drivers used a custom controller built on an adaptive video game controller that was designed so that anyone of any age could use it. It featured basic controls: move forward, back, left, right, adjust height, sit, stand, and tilt. The buttons were large so that tiny or elderly hands could use the controller, and the people who drove Spot ranged in age from 2 to over 90.<br/></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Adaptive gaming controller with large programmable buttons on a black table." class="rm-shortcode" data-rm-shortcode-id="d191483045e332282c7d73dac0962f80" data-rm-shortcode-name="rebelmouse-image" id="2545f" loading="lazy" src="https://spectrum.ieee.org/media-library/adaptive-gaming-controller-with-large-programmable-buttons-on-a-black-table.jpg?id=65453210&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The guest robot drivers used a custom controller built on an adaptive video game controller that was designed so that anyone of any age could use it.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">RAI Institute</small></p><p>The demo area was designed to be a bit challenging for the Spot robot to maneuver in—it contained tight passages, low obstacles to step over, a barrier to crouch under, and taller objects the robot had to avoid. Much to the surprise of many of our guests, Spot is able to autonomously adjust itself to traverse and avoid those obstacles when being supervised by the joystick.<br/></p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="1c2dcee3b7a437fc3f967b9095f81e91" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/dPjUkJGC5Xg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">RAI Institute</small> </p><p><span>The driving arena’s theme rotated every few weeks across four scenarios: a factory, a home, a hospital, and an outdoor/disaster environment. 
These were chosen to contrast settings where robots are broadly accepted (industrial, emergency response) with settings where public ambivalence is well documented (domestic, healthcare).</span></p><p>The visitors who chose to drive the Spot robot could also participate in a short survey before and after their driving experience. The survey focused on two core dimensions:</p><ul><li><strong>Comfort:</strong> How comfortable would you feel if you encountered a robot in a factory, home, hospital, office, or outdoor/disaster scenario?</li><li><strong>Suitability:</strong> How well would this robot work in each of those contexts?</li></ul><p>The survey also recorded emotional reactions immediately after driving, likelihood to recommend the experience, and open-ended responses about what participants found memorable or surprising. The researchers were careful to separate the environment participants drove through from the scenarios they were asked to evaluate in the survey. This distinction is important for interpreting the results given below.</p><h2>Did Interacting With the Robot Change People’s Feelings about Robots?</h2><p><span>Of the approximately 10,000 guests who visited the Robot Lab, about 10 percent drove Spot and opted in to our surveys. Of those surveyed, more than 65 percent had seen images or videos of Spot robots online, but most had never seen one of the robots in person.</span></p><h3>Increased Comfort Through Experience</h3><p>Across all five contexts presented in the survey (factory, home, hospital, office, and outdoor/disaster scenarios), comfort scores increased significantly after the driving session. The effects were small to moderate in magnitude, but they were consistent and statistically robust after correcting for multiple comparisons, across participants ranging from children to older adults.</p><p>The largest gain appeared in the outdoor/disaster context, which started with low comfort despite high perceived suitability. People already thought Spot would be useful in search-and-rescue scenarios; they just weren’t comfortable being around it in that role. This discomfort may stem from media portrayals of quadruped robots in military contexts. A few minutes of hands-on control appears to partially dissolve that apprehension.</p><p>Participants who drove through the factory-themed arena showed no significant increase in comfort, but this scenario already had the highest baseline rating of any context, leaving little room for improvement.</p><p>Regardless of prior experience, most people were neutral about having a Spot robot in their home before driving. Afterward, they showed a statistically significant increase in comfort with having Spot in their home and also felt the robot was more suitable for work in any environment, not just the one they had driven it in.</p><h3>Better Understanding of Where Robots Can Fit Into Daily Life</h3><p>Perceived suitability for Spot to operate in each context also increased, but the pattern in the data was different. The largest gains weren’t in the high-baseline industrial and outdoor contexts.
They were in home, office, and hospital—the very environments where people started out most skeptical.</p><p>Participants who drove the Spot robot in a home-themed environment didn’t just consider homes more suitable for robots; they also rated hospitals and offices as more suitable. This result suggests that hands-on control alters something more fundamental than just context-specific familiarity. It may change a person’s underlying understanding of a robot’s capabilities and, consequently, where they believe robots are appropriate.</p><h3>Results by Demographic</h3><p>The hands-on experience seems to be similarly effective across genders, although it does not completely eliminate existing disparities. For example, men reported higher baseline comfort than women across all five contexts. However, all genders improved at similar rates after interaction. The gap didn’t significantly widen or close in most contexts, though it did narrow in factory and office settings.</p><p>Age effects were more context-dependent. Children (aged 8–17) rated factory environments as less comfortable and less suitable before the study, though this could be because most children have no experience with factory settings or industrial environments. After interaction, this gap largely persisted. By contrast, children showed stronger gains in office comfort than older adults and entered the study rating home contexts more favorably than adults did.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Stacked bar chart of survey participants by age group and gender categories." class="rm-shortcode" data-rm-shortcode-id="91a6e3f855ba0152f034182d4710df9d" data-rm-shortcode-name="rebelmouse-image" id="313e6" loading="lazy" src="https://spectrum.ieee.org/media-library/stacked-bar-chart-of-survey-participants-by-age-group-and-gender-categories.jpg?id=65453246&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Participants ranged from age 8 to over age 75.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">RAI Institute</small></p><p><span>Participants who had previously driven Spot (mainly robotics professionals) began with higher comfort across the board. But after the hands-on session, people with no prior exposure caught up to experienced drivers. This level of familiarity would be difficult to replicate with images and videos alone.</span></p><h3>Post-Interaction Results</h3><p>Post-interaction emotional data was overwhelmingly positive. “Excitement” was reported by 74 percent of participants, “happiness” by 50 percent, and only 12 percent reported “nervousness.” Over 55 percent rated the experience as “brilliant,” and 62 percent said they were very likely to recommend it to a friend.</p><p>The open-ended responses added a lot more color. The most commonly mentioned moments were locomotion and terrain adaptation (22 percent), such as the way Spot navigated steps, tight spaces, and uneven ground, and the robot’s expressive tilt movements (22 percent), which people found surprisingly doglike or dancelike. A smaller set of responses (3 percent) described anthropomorphic reactions: worrying about “hurting” the robot or finding its behavior “silly” in a way that prompted a genuine emotional response.</p><p>When participants were asked what tasks they’d want a robot to perform, their responses shifted meaningfully. Before driving, answers clustered around domestic assistance and heavy or hazardous labor.
After driving, domestic help remained prominent, but entertainment and play jumped from 7.5 percent to 19.4 percent. Companionship also appeared at 5 percent. References to hazardous or industrial tasks declined as people who had operated the robot began imagining it as a companion and playmate, not just a labor-replacement tool.</p><h2>Key Takeaways from the Robot Lab</h2><p>In the not-so-distant future, robots will become more common in public and private spaces. But whether that integration into daily life will be accepted by the general public remains to be seen. The standard approach to building acceptance has been passive exposure such as videos, exhibits, and articles. This study suggests that giving people agency and letting them actually operate a robot is a qualitatively different intervention.</p><p>Short, well-designed, hands-on encounters can raise comfort in precisely the social domains where ambivalence is highest and where future robotics deployment will likely take place. Such hands-on experiences shouldn’t be limited to tech conferences and museums; they may prove more valuable than mere entertainment.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Children control a robot car at a tech booth with staff and jungle-themed backdrop" class="rm-shortcode" data-rm-shortcode-id="561f653ae87e1468c7ac31ac92d0fe00" data-rm-shortcode-name="rebelmouse-image" id="a32d5" loading="lazy" src="https://spectrum.ieee.org/media-library/children-control-a-robot-car-at-a-tech-booth-with-staff-and-jungle-themed-backdrop.jpg?id=65453264&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Fun for all ages!</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">RAI Institute</small></p><p><span>We consider the pop-up a success, but as with all experiments, we also learned a lot along the way. Beyond the increased comfort with robots, we found that guests really enjoyed talking with the robotics experts who staffed the location. For many people, the opportunity to talk to a roboticist was as unique as the opportunity to drive a robot, and in the future, we are excited to continue to share our technical work as well as the experiences of our humans, in addition to our humanoids.</span></p><p>Does building a space where folks can experience robots firsthand have the potential to create meaningful, long-term attitude shifts? That remains an open question.
But the effect’s direction and consistency across different situations, ages, and genders are hard to ignore.</p><div class="horizontal-rule"></div><p><a href="https://rai-inst.com/wp-content/uploads/2026/03/HRI26-Pop-Up_Encounters_with_Spot.pdf" target="_blank">Pop-Up Encounters With Spot: Shaping Public Perceptions of Robots Through Hands-On Experience</a>, by Hae Won Park, Georgia Van de Zande, Xiajie Zhang, Dawn Wendell, and Jessica Hodgins from the RAI Institute and the MIT Media Lab, was presented last month at the <a href="https://humanrobotinteraction.org/2026/" target="_blank">2026 ACM/IEEE International Conference on Human-Robot Interaction</a> in Edinburgh, Scotland.</p>]]></description><pubDate>Sun, 05 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/boston-dynamics-spot-interaction</guid><category>Boston-dynamics</category><category>Legged-robots</category><category>Spot-robot</category><dc:creator>Dawn Wendell</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/families-watch-a-colorful-robotic-dog-demo-at-a-robotics-and-ai-institute-exhibit.jpg?id=65453180&amp;width=980"></media:content></item><item><title>Video Friday: Digit Learns to Dance—Virtually Overnight</title><link>https://spectrum.ieee.org/video-humanoid-dancing</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/bipedal-teal-robot-practices-side-to-side-dance-move-with-arm-movement.gif?id=65460048&width=1245&height=700&coordinates=0%2C47%2C0%2C47"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://2026.ieee-icra.org/">ICRA 2026</a>: 1–5 June 2026, VIENNA</h5><h5><a href="https://roboticsconference.org/">RSS 2026</a>: 13–17 July 2026, SYDNEY</h5><h5><a href="https://mrs.fel.cvut.cz/summer-school-2026/">Summer School on Multi-Robot Systems</a>: 29 July–4 August 2026, PRAGUE</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="pc-n6aciusu"><em>Getting Digit to dance takes more than putting on some fancy shoes—our AI Team can teach Digit new whole-body control capabilities overnight. Using raw motion data from mocap, animation, and teleop methods, Digit gets new skills through sim-to-real reinforcement training.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="4477bcbaf1f5072afe88c2c0015eebd1" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Pc-n6ACIuSU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.agilityrobotics.com/">Agility</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="sy2xyrmv44y"><em>We’ve created GEN-1, our latest milestone in scaling robot learning. We believe it to be the first general-purpose AI model that crosses a new performance threshold: mastery of simple physical tasks. It improves average success rates to 99% on tasks where previous models achieve 64%, completes tasks roughly 3x faster than state of the art, and requires only 1 hour of robot data for each of these results. GEN-1 unlocks commercial viability across a broad range of applications—and while it cannot solve all tasks today, it is a significant step towards our mission of creating generalist intelligence for the physical world.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bbbeecb0e15f3b78f50b3ebf230ecf33" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/SY2xyrmV44Y?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://generalistai.com/blog/apr-02-2026-GEN-1">Generalist</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="pn_bj5-qyw8"><em>Unitree open-sources UnifoLM-WBT-Dataset—high-quality real-world humanoid robot <a data-linked-post="2650273084" href="https://spectrum.ieee.org/mit-humanoid-robot-teleoperation-dynamic-tasks" target="_blank">whole-body teleoperation</a> (WBT) dataset for open environments. 
Publicly available since March 5, 2026, the dataset will continue to receive high-frequency rolling updates. It aims to establish the most comprehensive real-world humanoid robot dataset in terms of scenario coverage, task complexity, and manipulation diversity.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bd19da6e3dfeb2ede20007b534d1b9a6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/pN_bj5-QyW8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://huggingface.co/collections/unitreerobotics/unifolm-wbt-dataset">Hugging Face</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="79mr-_-a9js"><em>Autonomous mobile robots operating in human-shared indoor environments often require paths that reflect human spatial intentions, such as avoiding interference with pedestrian flow or maintaining comfortable clearance. This paper presents MRReP, a Mixed Reality-based interface that enables users to draw a Hand-drawn Reference Path (HRP) directly on the physical floor using hand gestures.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="783457e452248043a5ec6e2898ae5289" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/79mR-_-a9js?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://mertcookimg.github.io/mrrep/">MRReP</a> ]</p><p>Thanks, Masato!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="97qialc5hnm"><em>Eye contact, even momentarily between strangers, plays a pivotal role in fostering human connection, promoting happiness, and enhancing belonging. Through autonomous navigation and adaptive mirror control, Mirrorbot facilitates serendipitous, nonverbal interactions by dynamically transitioning reflections from self-focused to mutual recognition, sparking eye contact, shared awareness, and playful engagement.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="232f93e3a45a2e11d81366bb7ed95286" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/97qIaLC5hNM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://arl.human.cornell.edu/research-MirrorBot.html">ARL</a> ] via [ <a href="https://news.cornell.edu/stories/2026/04/mirrorbot-fostering-human-connection">Cornell University</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="jya06ffonyg"><em>Experience PAL Robotics’ new teleoperation system for TIAGo Pro, the AI-ready mobile manipulator designed for advanced research. 
This real-time VR teleoperation setup allows precise control of TIAGo Pro’s dual arms in Cartesian space, ideal for remote manipulation, AI data collection, and robot learning.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="86699af54f2bfd064590b0cd59aa3f8c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/jya06FFONyg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://pal-robotics.com/robot/tiago-pro/">PAL Robotics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="t52sq8gk5ks">Utter brilliance from Robust AI. No notes.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="71e7d47e220a5b61b914c1491f1df3dc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/T52SQ8Gk5Ks?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.robust.ai/">Robust AI</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="w8lqu8dkvp4"><em>Come along with our Senior Test Engineer, Nick L., as he takes us on a tour of the <a data-linked-post="2650277831" href="https://spectrum.ieee.org/qa-irobot-roomba-i7" target="_blank">Home Test Labs</a> inside the iRobot HQ.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="56a753f2b7e0640f199e35246a22843f" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/W8lQU8dKvP4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.irobot.com/en_US/our-story.html">iRobot</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="gjukjrwjpxg"><em>By automating the final “magic 5%” of production—the precise trimming of swim goggles’ silicone gaskets based on individual face scans—UR cobots allow THEMAGIC5 to deliver affordable, custom-fit goggles, enabling the company to scale from a Kickstarter sensation to selling over 400,000 goggles worldwide.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="76ebeda03bf930b9cd576a8e870f8dad" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/GJukJRWjPxg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.universal-robots.com/case-stories/non-stop-robot-precision-for-7-years-cobots-deliver-the-last-magic-5-in-swim-goggle-production/">Universal Robots</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="x16ht1erjhk"><em>Sanctuary AI has once again demonstrated its industry-leading approach to training dexterous manipulation policies for its advanced hydraulic hands. 
In this video, their proprietary hydraulic hand autonomously manipulates a lettered cube, continuously reorienting it to match a specified goal (displayed in the bottom-left corner of the video).</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ad1d77f7ce4f331c7e74b0b779ff6cae" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/X16Ht1ERjHk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.sanctuary.ai/">Sanctuary AI</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="r3toz2pgppy"><em>China’s Yuxing 3-06 commercial experimental satellite, the first of its kind to be equipped with a flexible robotic arm, has recently completed an in-orbit refueling test and verification of key technologies. The test paves the way for Yuxing 3-06, dubbed a “space refueling station,” to refuel other satellites in orbit, manage space debris, and provide other in-orbit services.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="eaf9d2765bb1e0ebff60f038ccba42fd" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/R3TOZ2PgPPY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://mp.weixin.qq.com/s/1c-9aNwuXv_p-VhojMkwwA">Sanyuan Aerospace</a> ] via [ <a href="https://spacenews.com/chinese-startup-tests-flexible-robotic-arm-in-space-for-on-orbit-servicing/">Space News</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="z4poalprrhe"><em>This is a demonstration of natural walking, whole-body teleoperation, and motion tracking with our custom-built humanoid robot. The control policies are trained using large-scale parallel reinforcement learning (RL). By deploying robust policies learned in a physics simulator onto the real hardware, we achieve dynamic and stable whole-body motions.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="703bacdcb0167fb3aa9bfe36e1da07ac" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/z4POaLPRRhE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://robotics.tokyo/">Tokyo Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="5olcwku7l9u"><em>Faced with aging railway infrastructure, a shrinking workforce and rising construction costs, Japan Railway West asked construction innovator Serendix to replace an old wooden building at its Hatsushima railway station using its 3D printing technology. 
An ABB robot enabled the company to assemble the new building in a single night ready for the first train service the next day.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="031eec5b200f86cdad72129d9a002cfc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/5olcWkU7l9U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.abb.com/global/en/news/134689">ABB</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="1k1phiqcfty"><em>Humanoid, SAP, and Martur Fompak team up to test humanoid robots in automotive manufacturing logistics. This joint proof of concept explores how robots can streamline operations, improve efficiency, and shape the future of smart factories.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="cc54aa14687108db3bc231b8cc456fea" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/1K1phiQCftY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://thehumanoid.ai/">Humanoid</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="oqglmefwbt8">This MIT Robotics Seminar is from Dario Floreano at EPFL, on “Avian Inspired Drones.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="7013e7fe97df52eb328681b647c9fddc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/oqglMEFWBt8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://robotics.mit.edu/robotics-seminar/">MIT</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="etk5es0jvm4">This MIT Robotics Seminar is from Ken Goldberg at UC Berkeley: “Good Old-Fashioned Engineering Can Close the 100,000 Year ‘Data Gap’ in Robotics.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="710bc514cbab6092dc5f439cf03127c6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/EtK5es0jVM4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://robotics.mit.edu/robotics-seminar/">MIT</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 03 Apr 2026 16:30:01 +0000</pubDate><guid>https://spectrum.ieee.org/video-humanoid-dancing</guid><category>Humanoid-robots</category><category>Video-friday</category><category>Robot-ai</category><category>Human-robot-interaction</category><category>Teleoperation</category><category>Industrial-robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/gif" url="https://spectrum.ieee.org/media-library/bipedal-teal-robot-practices-side-to-side-dance-move-with-arm-movement.gif?id=65460048&amp;width=980"></media:content></item><item><title>Andrew Ng: Unbiggen 
AI</title><link>https://spectrum.ieee.org/andrew-ng-data-centric-ai</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/andrew-ng-listens-during-the-power-of-data-sooner-than-you-think-global-technology-conference-in-brooklyn-new-york-on-wednes.jpg?id=29206806&width=1245&height=700&coordinates=0%2C0%2C0%2C474"/><br/><br/><p><strong><a href="https://en.wikipedia.org/wiki/Andrew_Ng" rel="noopener noreferrer" target="_blank">Andrew Ng</a> has serious street cred</strong> in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at <a href="https://stanfordmlgroup.github.io/" rel="noopener noreferrer" target="_blank">Stanford University</a>, cofounded <a href="https://research.google/teams/brain/" rel="noopener noreferrer" target="_blank">Google Brain</a> in 2011, and then served for three years as chief scientist for <a href="https://ir.baidu.com/" rel="noopener noreferrer" target="_blank">Baidu</a>, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told <em>IEEE Spectrum</em> in an exclusive Q&A.</p><hr/><p>
	Ng’s current efforts are focused on his company 
	<a href="https://landing.ai/about/" rel="noopener noreferrer" target="_blank">Landing AI</a>, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the <a href="https://www.youtube.com/watch?v=06-AZXmwHjo" target="_blank">data-centric AI movement</a>, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.
</p><p>
	Andrew Ng on...
</p><ul>
<li><a href="#big">What’s next for really big models</a></li>
<li><a href="#career">The career advice he didn’t listen to</a></li>
<li><a href="#defining">Defining the data-centric AI movement</a></li>
<li><a href="#synthetic">Synthetic data</a></li>
<li><a href="#work">Why Landing AI asks its customers to do the work</a></li>
</ul><p>
<strong>The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an <a href="https://spectrum.ieee.org/deep-learning-computational-cost" target="_self">unsustainable trajectory</a>. Do you agree that it can’t go on that way?</strong>
</p><p>
<strong>Andrew Ng: </strong>This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.
</p><p>
<strong>When you say you want a foundation model for computer vision, what do you mean by that?</strong>
</p><p>
<strong>Ng:</strong> This is a term coined by <a href="https://cs.stanford.edu/~pliang/" rel="noopener noreferrer" target="_blank">Percy Liang</a> and <a href="https://crfm.stanford.edu/" rel="noopener noreferrer" target="_blank">some of my friends at Stanford</a> to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, <a href="https://spectrum.ieee.org/open-ais-powerful-text-generating-tool-is-ready-for-business" target="_self">GPT-3</a> is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.
</p><p>
<strong>What needs to happen for someone to build a foundation model for video?</strong>
</p><p>
<strong>Ng:</strong> I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.
</p><p>
	Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.
</p><p>
<a href="#top">Back to top</a>
</p><p>
<strong>It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.</strong>
</p><p>
<strong>Ng: </strong>Over a decade ago, when I proposed starting the <a href="https://research.google/teams/brain/" rel="noopener noreferrer" target="_blank">Google Brain</a> project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.
</p><p class="pull-quote">
	“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”<br/>
	—Andrew Ng, CEO & Founder, Landing AI
</p><p>
	I remember when my students and I published the first 
	<a href="https://nips.cc/" rel="noopener noreferrer" target="_blank">NeurIPS</a> workshop paper advocating using <a href="https://developer.nvidia.com/cuda-zone" rel="noopener noreferrer" target="_blank">CUDA</a>, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.
</p><p>
<strong>I expect they’re both convinced now.</strong>
</p><p>
<strong>Ng:</strong> I think so, yes.
</p><p>
	Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”
</p><p>
<a href="#top">Back to top</a>
</p><p>
<strong>How do you define data-centric AI, and why do you consider it a movement?</strong>
</p><p>
<strong>Ng:</strong> Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.
</p><p>
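	<em>To make that shift concrete, here is a minimal sketch of one data-centric iteration. It is our illustration, not Ng’s code: the model is held fixed while cross-validated confidence flags likely mislabeled examples for human review. The synthetic dataset, the flipped labels, and the flagging heuristic are all illustrative assumptions.</em>
</p><pre><code class="language-python">
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Toy data with 5 percent of labels deliberately flipped, standing in
# for annotation noise in a real dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.default_rng(0)
flipped = rng.choice(len(y), size=25, replace=False)
y[flipped] ^= 1

def flag_suspect_labels(X, y, n=25):
    """Return the n examples whose cross-validated probability for
    their assigned label is lowest: candidates for human re-review."""
    proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=5, method="predict_proba")
    confidence = proba[np.arange(len(y)), y]
    return np.argsort(confidence)[:n]

# The model code never changes between iterations; only the labels do,
# after a person reviews the flagged examples.
suspects = flag_suspect_labels(X, y)
print(f"{np.isin(suspects, flipped).mean():.0%} of flagged items were truly mislabeled")
</code></pre><p>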
	When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take what some individuals have been doing intuitively and make it a systematic engineering discipline.
</p><p>
	The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a 
	<a href="https://neurips.cc/virtual/2021/workshop/21860" rel="noopener noreferrer" target="_blank">data-centric AI workshop at NeurIPS</a>, and I was really delighted at the number of authors and presenters that showed up.
</p><p>
<strong>You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?</strong>
</p><p>
<strong>Ng: </strong>You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.
</p><p>
<strong>When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?</strong>
</p><p>
<strong>Ng: </strong>Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of <a href="https://developers.arcgis.com/python/guide/how-retinanet-works/" rel="noopener noreferrer" target="_blank">RetinaNet</a>. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.
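</p><p>
	<em>A hedged sketch of that recipe, adapting a pretrained network with a small, consistently labeled set; the model choice, frozen backbone, hyperparameters, and random tensors standing in for real images are our assumptions, not Landing AI’s pipeline.</em>
</p><pre><code class="language-python">
import torch
from torch import nn
from torchvision import models

# An ImageNet-pretrained backbone: 50 examples are far too few to
# train from scratch, but enough to adapt a small classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                 # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 2)   # defect vs. no defect

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for 50 carefully chosen, consistently labeled images.
images = torch.randn(50, 3, 224, 224)
labels = torch.randint(0, 2, (50,))

model.train()
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
</code></pre><p>
	<em>As Ng notes, the training loop is the small piece of the puzzle; picking and labeling those 50 images consistently is the bigger one.</em>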
</p><p class="pull-quote">
	“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”<br/>
	—Andrew Ng
</p><p>
	For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
</p><p>
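	<em>A toy version of that kind of tool, ours for illustration: when the same image carries labels from several annotators, rank images by agreement and queue the least consistent for relabeling. The image IDs and label names are hypothetical.</em>
</p><pre><code class="language-python">
from collections import Counter

# Labels assigned to the same image by different annotators.
annotations = {
    "img_001": ["scratch", "scratch", "scratch"],
    "img_002": ["pit_mark", "scratch", "pit_mark"],
    "img_003": ["pit_mark", "dent", "scratch"],
}

def agreement(labels):
    """Fraction of annotators who chose the most common label."""
    return Counter(labels).most_common(1)[0][1] / len(labels)

# Surface the least consistent images first for targeted relabeling.
for image_id in sorted(annotations, key=lambda k: agreement(annotations[k])):
    if agreement(annotations[image_id]) != 1.0:
        print(image_id, annotations[image_id])
</code></pre><p>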
<strong>Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?</strong>
</p><p>
<strong>Ng:</strong> Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, <a href="https://www.cs.princeton.edu/~olgarus/" rel="noopener noreferrer" target="_blank">Olga Russakovsky</a> gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed <a href="https://neurips.cc/virtual/2021/invited-talk/22281" rel="noopener noreferrer" target="_blank">Mary Gray’s presentation,</a> which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like <a href="https://www.microsoft.com/en-us/research/project/datasheets-for-datasets/" rel="noopener noreferrer" target="_blank">Datasheets for Datasets</a> also seem like an important piece of the puzzle.
</p><p>
	One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.
</p><p>
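	<em>The first step in engineering a subset is finding it. A minimal sketch, again ours: compute the metric per slice of the data and surface the slices that lag the overall score. The arrays and slice tags below are made up; in practice the tag might mark lighting conditions, a product line, background noise for a speech system, or a demographic group.</em>
</p><pre><code class="language-python">
import numpy as np

# Predictions, ground truth, and a metadata tag for each example.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1, 1, 1])
slices = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"])

overall = (y_true == y_pred).mean()
for s in np.unique(slices):
    mask = slices == s
    acc = (y_true[mask] == y_pred[mask]).mean()
    if overall > acc:  # an underperforming slice: target data work here
        print(f"slice {s}: accuracy {acc:.2f} vs. overall {overall:.2f}")
</code></pre><p>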
<strong>When you talk about engineering the data, what do you mean exactly?</strong>
</p><p>
<strong>Ng: </strong>In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a <a href="https://jupyter.org/" rel="noopener noreferrer" target="_blank">Jupyter notebook</a> and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.
</p><p>
	For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
</p><p>
<a href="#top">Back to top</a>
</p><p>
<strong>What about using synthetic data, is that often a good solution?</strong>
</p><p>
<strong>Ng: </strong>I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, <a href="https://tensorlab.cms.caltech.edu/users/anima/" rel="noopener noreferrer" target="_blank">Anima Anandkumar</a> gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.
</p><p>
<strong>Do you mean that synthetic data would allow you to try the model on more data sets?</strong>
</p><p>
<strong>Ng: </strong>Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
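</p><p>
	<em>What “generate more data just for the pit-mark category” can look like, in deliberately toy form; a production pipeline might render defects from CAD models or use a generative model instead, and the blob-stamping function here is entirely our invention.</em>
</p><pre><code class="language-python">
import numpy as np

rng = np.random.default_rng(0)

def add_pit_marks(image, n_marks=3):
    """Stamp small dark blobs onto a grayscale casing image to mimic
    pit marks. A toy stand-in for a real defect generator."""
    out = image.copy()
    h, w = out.shape
    for _ in range(n_marks):
        y, x = rng.integers(2, h - 2), rng.integers(2, w - 2)
        out[y - 1:y + 2, x - 1:x + 2] *= 0.3  # darken a 3x3 patch
    return out

# Augment only the weak class; the other defect classes are left alone.
clean_casings = [rng.uniform(0.7, 1.0, size=(64, 64)) for _ in range(10)]
extra_pit_marks = [(add_pit_marks(img), "pit_mark") for img in clean_casings]
</code></pre><p>
	<em>The point is the targeting: error analysis names the weak category, and only that category receives synthetic examples.</em>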
</p><p class="pull-quote">
	“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”<br/>
	—Andrew Ng
</p><p>
	Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
</p><p>
<a href="#top">Back to top</a>
</p><p>
<strong>To make these issues more concrete, can you walk me through an example? When a company approaches <a href="https://landing.ai/" rel="noopener noreferrer" target="_blank">Landing AI</a> and says it has a problem with visual inspection, how do you onboard them and work toward deployment?</strong>
</p><p>
<strong>Ng: </strong>When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the <a href="https://landing.ai/platform/" rel="noopener noreferrer" target="_blank">LandingLens</a> platform. We often advise them on the methodology of data-centric AI and help them label the data.
</p><p>
	One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, when and how to improve the labeling of data so the performance of the model improves. Our training and software supports them all the way through deploying the trained model to an edge device in the factory.
</p><p>
<strong>How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?</strong>
</p><p>
<strong>Ng:</strong> It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
</p><p>
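	<em>A minimal sketch of such a drift flag, ours and with an arbitrary threshold: compare a monitored statistic from recent production data against a training-time reference using a two-sample test.</em>
</p><pre><code class="language-python">
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference: mean brightness of training images. Production: the same
# statistic after, say, the factory lighting changed.
reference = rng.normal(loc=0.55, scale=0.05, size=2000)
production = rng.normal(loc=0.70, scale=0.05, size=500)

stat, p_value = ks_2samp(reference, production)
if p_value < 0.01:  # arbitrary alert threshold
    print(f"Drift flagged (KS statistic {stat:.2f}): review, relabel, retrain.")
</code></pre><p>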
	In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?
</p><p>
<strong>So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.</strong>
</p><p>
<strong>Ng: </strong>Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.
</p><p>
<strong>Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?</strong>
</p><p>
<strong>Ng: </strong>In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.
</p><p>
<a href="#top">Back to top</a>
</p><p><em>This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist</em><em>.”</em></p>]]></description><pubDate>Wed, 09 Feb 2022 15:31:12 +0000</pubDate><guid>https://spectrum.ieee.org/andrew-ng-data-centric-ai</guid><category>Deep-learning</category><category>Artificial-intelligence</category><category>Andrew-ng</category><category>Type-cover</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/andrew-ng-listens-during-the-power-of-data-sooner-than-you-think-global-technology-conference-in-brooklyn-new-york-on-wednes.jpg?id=29206806&amp;width=980"></media:content></item><item><title>How AI Will Change Chip Design</title><link>https://spectrum.ieee.org/ai-chip-design-matlab</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/layered-rendering-of-colorful-semiconductor-wafers-with-a-bright-white-light-sitting-on-one.jpg?id=29285079&width=1245&height=700&coordinates=0%2C156%2C0%2C156"/><br/><br/><p>The end of <a href="https://spectrum.ieee.org/on-beyond-moores-law-4-new-laws-of-computing" target="_self">Moore’s Law</a> is looming. Engineers and designers can do only so much to <a href="https://spectrum.ieee.org/ibm-introduces-the-worlds-first-2nm-node-chip" target="_self">miniaturize transistors</a> and <a href="https://spectrum.ieee.org/cerebras-giant-ai-chip-now-has-a-trillions-more-transistors" target="_self">pack as many of them as possible into chips</a>. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.</p><p>Samsung, for instance, is <a href="https://spectrum.ieee.org/processing-in-dram-accelerates-ai" target="_self">adding AI to its memory chips</a> to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has <a href="https://spectrum.ieee.org/heres-how-googles-tpu-v4-ai-chip-stacked-up-in-training-tests" target="_self">doubled its processing power</a> compared with that of  its previous version.</p><p>But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with <a href="https://www.linkedin.com/in/heather-gorr-phd" rel="noopener noreferrer" target="_blank">Heather Gorr</a>, senior product manager for <a href="https://www.mathworks.com/" rel="noopener noreferrer" target="_blank">MathWorks</a>’ MATLAB platform.</p><p><strong>How is AI currently being used to design the next generation of chips?</strong></p><p><strong>Heather Gorr:</strong> AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. There’s a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-resized-container rm-resized-container-25 rm-float-left" data-rm-resized-container="25%" style="float: left;">
<img alt="Portrait of a woman with blonde-red hair smiling at the camera" class="rm-shortcode rm-resized-image" data-rm-shortcode-id="1f18a02ccaf51f5c766af2ebc4af18e1" data-rm-shortcode-name="rebelmouse-image" id="2dc00" loading="lazy" src="https://spectrum.ieee.org/media-library/portrait-of-a-woman-with-blonde-red-hair-smiling-at-the-camera.jpg?id=29288554&width=980" style="max-width: 100%"/>
<small class="image-media media-caption" placeholder="Add Photo Caption..." style="max-width: 100%;">Heather Gorr</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..." style="max-width: 100%;">MathWorks</small></p><p>Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see  something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.</p><p><strong>What are the benefits of using AI for chip design?</strong></p><p><strong>Gorr:</strong> Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a <a href="https://en.wikipedia.org/wiki/Model_order_reduction" rel="noopener noreferrer" target="_blank">reduced order model</a>, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your <a href="https://www.ibm.com/cloud/learn/monte-carlo-simulation" rel="noopener noreferrer" target="_blank">Monte Carlo simulations</a> using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.</p><p><strong>So it’s like having a digital twin in a sense?</strong></p><p><strong>Gorr:</strong> Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let sweep through all of those different situations and come up with a better design in the end.</p><p><strong>So, it’s going to be more efficient and, as you said, cheaper?</strong></p><p><strong>Gorr:</strong> Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.</p><p><strong>We’ve talked about the benefits. How about the drawbacks?</strong></p><p><strong>Gorr: </strong>The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.</p><p>Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. 
<p>One of the other things to think about is that you need the data to build the models. You have to incorporate data from all sorts of different sensors and from different teams, and that heightens the challenge.</p><p><strong>How can engineers use AI to better prepare and extract insights from hardware or sensor data?</strong></p><p><strong>Gorr: </strong>We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.</p>
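<p>Here is a minimal sketch of that kind of preparation in Python, assuming two invented sensor streams recorded at different rates (the signals, rates, and names are illustrative only):</p><pre><code>import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical sensor streams sampled at different rates.
t_fast = np.arange(0.0, 1.0, 1 / 2000)   # 2 kHz vibration channel
t_slow = np.arange(0.0, 1.0, 1 / 500)    # 500 Hz process channel
fast = np.sin(2 * np.pi * 120 * t_fast) + 0.1 * rng.standard_normal(t_fast.size)
slow = np.cos(2 * np.pi * 8 * t_slow) + 0.1 * rng.standard_normal(t_slow.size)

# Synchronize: resample the slow channel onto the fast timebase by
# linear interpolation, so the streams line up sample for sample.
slow_on_fast = np.interp(t_fast, t_slow, slow)

# Explore the frequency domain: the spectrum exposes the 120 Hz
# component even when the raw time trace looks like noise.
spectrum = np.abs(np.fft.rfft(fast))
freqs = np.fft.rfftfreq(t_fast.size, d=1 / 2000)
print("dominant frequency:", freqs[spectrum.argmax()], "Hz")
</code></pre>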
<p>One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on <a href="https://github.com/" rel="noopener noreferrer" target="_blank">GitHub</a> or <a href="https://www.mathworks.com/matlabcentral/" rel="noopener noreferrer" target="_blank">MATLAB Central</a>, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.</p><p><strong>What should engineers and designers consider when using AI for chip design?</strong></p><p><strong>Gorr:</strong> Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.</p><p><strong>How do you think AI will affect chip designers’ jobs?</strong></p><p><strong>Gorr:</strong> It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.</p><p><strong>How do you envision the future of AI and chip design?</strong></p><p><strong>Gorr:</strong> It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. We’re going to see less of those superprecise predictions and more transparency and sharing of information, and more of that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.</p>]]></description><pubDate>Tue, 08 Feb 2022 14:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/ai-chip-design-matlab</guid><category>Chip-fabrication</category><category>Matlab</category><category>Moores-law</category><category>Chip-design</category><category>Ai</category><category>Digital-twins</category><dc:creator>Rina Diane Caballar</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/layered-rendering-of-colorful-semiconductor-wafers-with-a-bright-white-light-sitting-on-one.jpg?id=29285079&amp;width=980"></media:content></item><item><title>Atomically Thin Materials Significantly Shrink Qubits</title><link>https://spectrum.ieee.org/2d-hbn-qubit</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-golden-square-package-holds-a-small-processor-sitting-on-top-is-a-metal-square-with-mit-etched-into-it.jpg?id=29281587&width=1245&height=700&coordinates=0%2C156%2C0%2C156"/><br/><br/><p>Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges two critical issues stand out: miniaturization and qubit quality.</p><p>IBM has adopted the superconducting qubit road map of <a href="https://spectrum.ieee.org/ibms-envisons-the-road-to-quantum-computing-like-an-apollo-mission" target="_self">reaching a 1,121-qubit processor by 2023</a>, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.</p><p>Now researchers at <a href="https://www.nature.com/articles/s41563-021-01187-w" rel="noopener noreferrer" target="_blank">MIT have been able to both reduce the size of the qubits</a> and done so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.</p><p>“We are addressing both qubit miniaturization and quality,” said <a href="https://equs.mit.edu/william-d-oliver/" rel="noopener noreferrer" target="_blank">William Oliver</a>, the director for the <a href="https://cqe.mit.edu/" target="_blank">Center for Quantum Engineering</a> at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing. They must go hand in hand.”</p><p>The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.</p><p>Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-resized-container rm-resized-container-25 rm-float-left" data-rm-resized-container="25%" style="float: left;">
<img alt="Golden dilution refrigerator hanging vertically" class="rm-shortcode rm-resized-image" data-rm-shortcode-id="694399af8a1c345e51a695ff73909eda" data-rm-shortcode-name="rebelmouse-image" id="6c615" loading="lazy" src="https://spectrum.ieee.org/media-library/golden-dilution-refrigerator-hanging-vertically.jpg?id=29281593&width=980" style="max-width: 100%"/>
<small class="image-media media-caption" placeholder="Add Photo Caption..." style="max-width: 100%;">Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..." style="max-width: 100%;">Nathan Fiske/MIT</small></p><p>In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.</p><p>As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.</p><p>In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.</p><p>“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author <a href="https://equs.mit.edu/joel-wang/" rel="noopener noreferrer" target="_blank">Joel Wang</a>, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics. </p><p>On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.</p><p>While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.</p><p>“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are “sealed” and we don’t see any noticeable degradation over time when exposed to the atmosphere.”</p><p>This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.</p><p>“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.</p><p>Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. 
<p>“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.</p><p>Wang believes this research has shown 2D hBN to be a good insulator candidate for superconducting qubits. He says the groundwork the MIT team has laid will serve as a road map for using other hybrid 2D materials to build superconducting circuits.</p>]]></description><pubDate>Mon, 07 Feb 2022 16:12:05 +0000</pubDate><guid>https://spectrum.ieee.org/2d-hbn-qubit</guid><category>Quantum-computing</category><category>2d-materials</category><category>Ibm</category><category>Qubits</category><category>Hexagonal-boron-nitride</category><category>Superconducting-qubits</category><category>Mit</category><dc:creator>Dexter Johnson</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-golden-square-package-holds-a-small-processor-sitting-on-top-is-a-metal-square-with-mit-etched-into-it.jpg?id=29281587&amp;width=980"></media:content></item></channel></rss>