<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:media="http://search.yahoo.com/mrss/"><channel><title>IEEE Spectrum</title><link>https://spectrum.ieee.org/</link><description>IEEE Spectrum</description><atom:link href="https://spectrum.ieee.org/feeds/feed.rss" rel="self"></atom:link><language>en-us</language><lastBuildDate>Thu, 16 Apr 2026 18:01:48 -0000</lastBuildDate><image><url>https://spectrum.ieee.org/media-library/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNjg4NDUyMC9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTgyNjE0MzQzOX0.N7fHdky-KEYicEarB5Y-YGrry7baoW61oxUszI23GV4/image.png?width=210</url><link>https://spectrum.ieee.org/</link><title>IEEE Spectrum</title></image><item><title>IEEE Entrepreneurship Connects Hardware Startups With Investors</title><link>https://spectrum.ieee.org/ieee-entrepreneurship-hardware-startups-investors</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/groups-of-people-seated-together-at-several-tables-inside-of-a-large-meeting-hall.jpg?id=65559941&width=1200&height=800&coordinates=156%2C0%2C156%2C0"/><br/><br/><p>Roughly 90 percent of <a href="https://bowoftheseus.substack.com/p/what-is-hard-tech" rel="noopener noreferrer" target="_blank">hard tech</a> startups fail due to funding constraints, longer R&D timelines for developing hardware, and the complexity of manufacturing their products, according to a number of studies.</p><p>Generally, these startups require up to 50 percent more investor financing than software ones, according to <a href="https://ehandbook.com/why-is-hardtech-so-effing-hard-a652738c886a" rel="noopener noreferrer" target="_blank">a <em>Medium</em> article</a>. Typically, they need at least US $30 million, according to <a href="https://www.lucid.now/blog/cost-of-capital-saas-vs-hardware-startups/" rel="noopener noreferrer" target="_blank">a <em>Lucid</em> article</a>. That’s double the funding needed by software companies on average.</p><p>To help them connect with investors, <a href="https://entrepreneurship.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Entrepreneurship</a> in 2024 launched its <a href="https://entrepreneurship.ieee.org/venturesummits" rel="noopener noreferrer" target="_blank">Hard Tech Venture Summits</a>. The two-day events connect founders with potential investors and other <a href="https://spectrum.ieee.org/thinking-like-an-entrepreneur" target="_self">entrepreneurs</a>. Attendees include manufacturers, design engineers, and intellectual property lawyers.</p><p>“Even though there are a lot of startup investor conferences, it’s hard to find those focused on hard tech,” says <a href="https://ca.linkedin.com/in/joannewongreddscapital" rel="noopener noreferrer" target="_blank">Joanne Wong</a>, who helped initiate the program and is now the chair.
She is a general partner at <a href="https://reddscapital.com/" rel="noopener noreferrer" target="_blank">Redds Capital</a>, a California-based venture capital firm that invests in global early-stage IT startups.</p><p>The IEEE member is also an entrepreneur. She founded <a href="https://spectrum.ieee.org/cloud-software-manages-biomedical-data" target="_self">SciosHub</a> in 2020. The company’s software-as-a-service and informatics platform automates the data-management process for biomedical research labs.</p><p>“Many investors are focused on AI software—which is good,” she says. “But for hard tech companies, it is still hard to find support.”</p><p>The summit also includes a workshop to help founders navigate manufacturing processes and regulatory compliance. The event is open to IEEE members and others.</p><p>IEEE is a natural fit for the program, Wong says, because hard tech is synonymous with electrical engineering.</p><p>“Some of the domains we’re covering are <a href="https://www.ieee-ras.org/" rel="noopener noreferrer" target="_blank">robotics</a>, <a href="https://eds.ieee.org/" rel="noopener noreferrer" target="_blank">semiconductors</a>, and <a href="https://ieee-aess.org/home" rel="noopener noreferrer" target="_blank">aerospace technology</a>. IEEE has societies for all these fields,” she says. “Because of that, there are many resources within the organizations for startups, whether it be mentors or guides on how to commercialize products.”</p><p>There are several venture summits planned for this year. 
Two are scheduled in collaboration with the <a href="https://ieeesystemscouncil.org/ieee-systems-council-welcome" rel="noopener noreferrer" target="_blank">IEEE Systems Council</a>: this month in <a href="https://entrepreneurship.ieee.org/venturesummitsiliconvalley" rel="noopener noreferrer" target="_blank">Menlo Park, Calif.</a>, and in October in <a href="https://entrepreneurship.ieee.org/venturesummittoronto" rel="noopener noreferrer" target="_blank">Toronto</a>.</p><p>On 10 and 11 June, a third <a href="https://entrepreneurship.ieee.org/venturesummitboston" rel="noopener noreferrer" target="_blank">summit</a> is scheduled to take place in Boston at the <a href="https://mtt.org/" rel="noopener noreferrer" target="_blank">IEEE Microwave Theory and Technology Society</a>’s <a href="https://ims-ieee.org/attend" rel="noopener noreferrer" target="_blank">International Microwave Symposium</a>.</p><p>More events are being planned for next year in Asia, Europe, Latin America, and North America.</p><h2>Networking and a pitch competition</h2><p>Each summit includes keynote speakers, followed by networking roundtables. Each table is composed of people from three to five startups, one or two investors, and a service provider.</p><p>That arrangement helps founders build relationships, which is the summit organizers’ priority, Wong says. Investors at past events have included <a href="https://i3.ventures/" rel="noopener noreferrer" target="_blank">i3 Ventures</a>, <a href="https://monozukuri.vc/" rel="noopener noreferrer" target="_blank">Monozukuri Ventures</a>, and <a href="https://www.tsvcap.com/" rel="noopener noreferrer" target="_blank">TSV Capital</a>.</p><p class="pull-quote">“The connection with the community was fantastic, especially investors and founders in robotics.” <strong>—Mark Boysen, founder of Naware</strong></p><p>Startups present their pitches to investors, who evaluate and rank each business plan and product.
The top 10 startups pitch their business to all the investors.</p><p>On the second day, the startup founders participate in a half-day engineering design–to–manufacturing workshop, at which manufacturing engineers teach them how to navigate the process and meet regulations.</p><p>In an exhibition area, participants can see demonstrations from the startups and connect with service providers.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A woman standing next to a presentation screen while speaking to small seated groups during a professional workshop." class="rm-shortcode" data-rm-shortcode-id="9df606a8e1cf9a9702d0c39942224f08" data-rm-shortcode-name="rebelmouse-image" id="5c118" loading="lazy" src="https://spectrum.ieee.org/media-library/a-woman-standing-next-to-a-presentation-screen-while-speaking-to-small-seated-groups-during-a-professional-workshop.jpg?id=65559964&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">The 2025 event’s half-day engineering design–to–manufacturing workshop was led by Liz Taylor, president of DOER Marine. The company manufactures marine equipment.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Larissa Abi Nakhle/IEEE</small></p><h2>Positive feedback from attendees</h2><p>In a survey of past summit attendees, startup founders said the event connected them not only with investors but also with other entrepreneurs having similar struggles.</p><p>“The connection with the community was fantastic, especially investors and founders in robotics,” said <a href="https://www.linkedin.com/in/boysen1/" target="_blank">Mark Boysen</a>, who founded <a href="https://www.linkedin.com/company/naware/about/" target="_blank">Naware</a>. 
The company, based in Edina, Minn., developed a robot that uses AI to detect and remove weeds from golf courses, parks, and lawns.</p><p>“I loved getting the investors’ perspectives and understanding what they’re looking for,” Boysen said.</p><p><a href="https://www.linkedin.com/in/jeffrey-cook-9501114b/" rel="noopener noreferrer" target="_blank">Jeffrey Cook</a>, who attended a summit in 2024, said he met “a lot of great contacts and saw what the hard tech venture climate is like.”</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="7f6223c19ea1d3522ce4f0fcb46846f1" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/74OJ6CTJ7xE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">Attendees of the Hard Tech Venture Summit spend the first day networking and presenting their pitch to investors.</small> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">IEEE Entrepreneurship</small> </p><p>“Those in the community would benefit from coming to the summit,” said Cook, who founded <a href="https://www.linkedin.com/company/gigantor-technologies-inc/" rel="noopener noreferrer" target="_blank">Gigantor Technologies</a> in Melbourne Beach, Fla. 
It develops hardware systems for AI-powered devices.</p><p>More than 90 percent of attendees at the 2025 event in San Francisco said they would highly recommend the summit to others, according to a survey.</p><p>Investors and service providers also have found the events successful.</p><p><a href="https://www.linkedin.com/in/ji-ke" rel="noopener noreferrer" target="_blank">Ji Ke</a>, a partner and the chief technology officer of deep tech VC firm <a href="https://sosv.com/" rel="noopener noreferrer" target="_blank">SOSV</a>, attended the 2025 summit.</p><p>“I met a lot of young entrepreneurs tackling some big challenges,” he said. “This is one of the best events to meet some very-early-stage companies.”</p><h2>Making important connections in hard tech</h2><p>Startup founders who want to attend a summit must apply. <a href="https://entrepreneurship.ieee.org/venturesummits" rel="noopener noreferrer" target="_blank">Applications for this year’s events are open</a>. Participants must be founders of preseed, seed, or Series A startups.</p><p>Preseed founders are seeking small investments to get their businesses off the ground. Those in the seed stage have already secured funding from their first investor. Series A startups have obtained funding and are developing their product.</p><p>Applicants are reviewed by a committee of investors to ensure the startups would be a good fit. Those who are approved are matched with investors and service providers based on their specialty.</p><p>“The journey for a hard tech startup is very long and arduous,” Wong says. 
“Founders need to meet as many investors as possible and other people who support hard tech systems so that they’re able to reach out to them for advice or help.”</p><p>Those interested in learning more about an upcoming event can send a request to <a href="mailto:entrepreneurship@ieee.org" rel="noopener noreferrer" target="_blank">entrepreneurship@ieee.org</a>.</p>]]></description><pubDate>Thu, 16 Apr 2026 18:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/ieee-entrepreneurship-hardware-startups-investors</guid><category>Ieee-news</category><category>Hard-tech</category><category>Startups</category><category>Ieee-entrepreneurship</category><category>Entrepreneurs</category><category>Careers</category><category>Type-ti</category><dc:creator>Joanna Goodrich</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/groups-of-people-seated-together-at-several-tables-inside-of-a-large-meeting-hall.jpg?id=65559941&amp;width=980"></media:content></item><item><title>Stealth Signals Are Bypassing Iran’s Internet Blackout</title><link>https://spectrum.ieee.org/iran-internet-blackout-satellite-tv</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-photo-collage-of-a-woman-in-a-hijab-protests-in-iran-a-satellite-satellite-dishes-and-the-words-you-are-not-connected-to-t.png?id=65521125&width=1200&height=800&coordinates=0%2C675%2C0%2C675"/><br/><br/><p><strong>On 8 January 2026, </strong>the Iranian government imposed a near-total communications shutdown. It was the country’s first full information blackout: For weeks, the internet was off across all provinces while services including the government-run intranet, VPNs, text messaging, mobile calls, and even landlines were severely throttled. It was an unprecedented lockdown that left more than <a href="https://www.chathamhouse.org/2026/01/irans-internet-shutdown-signals-new-stage-digital-isolation" rel="noopener noreferrer" target="_blank">90 million people</a> cut off not only from the world, but from one another.</p><p>Since then, connectivity has never fully returned. Following <a href="https://en.wikipedia.org/wiki/2026_Iran_war" rel="noopener noreferrer" target="_blank">U.S. and Israeli airstrikes</a> in late February, Iran again imposed near-total restrictions, and people inside the country again saw global information flows dry up.</p><p>The original January shutdown came amid nationwide protests over the deepening economic crisis and political repression, in which millions of people chanted antigovernment slogans in the streets. While Iranian protests have become frequent in recent years, this was one of the most significant uprisings since the Islamic Revolution in 1979. The government responded quickly and brutally. One report put the death toll at <a href="https://www.en-hrana.org/the-crimson-winter-a-50-day-record-of-irans-2025-2026-nationwide-protests/" rel="noopener noreferrer" target="_blank">more than 7,000 confirmed deaths</a> and more than 11,000 under investigation. 
Many sources believe the death toll could exceed 30,000.</p><p>Thirteen days into the January shutdown, we at <a href="https://www.netfreedompioneers.org/" rel="noopener noreferrer" target="_blank">NetFreedom Pioneers</a> (NFP) turned to a system we had built for exactly this kind of moment—one that sends files over ordinary satellite TV signals. During the national information vacuum, our technology, called <a href="https://www.netfreedompioneers.org/toosheh-datacasting-technology/" rel="noopener noreferrer" target="_blank">Toosheh</a>, delivered real-time updates into Iran, offering a lifeline to millions starved of trusted information.</p><h2>How Iran Censors the Internet<br/></h2><p>I joined NetFreedom Pioneers, a nonprofit focused on anticensorship technology, in 2014. Censorship in <a href="https://spectrum.ieee.org/tag/iran" target="_blank">Iran</a> was a defining feature of my youth in the 1990s. After the Islamic Revolution, most Iranians began to lead double lives—one at home, where they could drink, dance, and choose their clothing, and another in public, where everyone had to comply with stifling government laws.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Photo of a helmeted soldier with a machine gun standing in front of an Iranian flag and cell tower." class="rm-shortcode" data-rm-shortcode-id="ef533f84cc5eb097a4cfe78e30b2984b" data-rm-shortcode-name="rebelmouse-image" id="7a368" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-a-helmeted-soldier-with-a-machine-gun-standing-in-front-of-an-iranian-flag-and-cell-tower.jpg?id=65520617&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Iran’s internet infrastructure is more centralized than in other parts of the world, making it easier for the government to restrict the flow of information. 
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Morteza Nikoubazl/NurPhoto/Getty Images</small></p><p>My first experience with secret communications was when I was five and living in the small city of Fasa in southern Iran. My uncle brought home a satellite dish—dangerously illegal at the time—that allowed us to tune into 12 satellite channels. My favorite was Cartoon Network. Then, during my teenage years, this same uncle introduced me to the internet through dial-up modems. I remember using Yahoo Mail with its 4 megabytes of storage, reading news from around the world, and learning about the Chandra X-ray telescope from NASA’s website.</p><p><span>That openness didn’t last. As internet use spread in the early 2000s, the Iranian government began reshaping the network itself. Unlike the highly distributed networks in the United States or Europe, where thousands of providers exchange traffic across many independent routes, Iran’s connection to the global internet is relatively centralized. Most international traffic passes through a small number of gateways controlled by state-linked telecom operators. That architecture gives authorities unusual leverage: By restricting or withdrawing those connections, they can sharply reduce the country’s access to the outside world.</span></p><p>Over the past decade, Iran has expanded this control through what it calls the <a href="https://en.wikipedia.org/wiki/National_Information_Network" target="_blank">National Information Network</a>, a domestically routed system designed to keep data inside the country whenever possible. Many government services, banking systems, and local platforms are hosted on this internal network.
During periods of unrest, access to the global internet can be throttled or cut off while portions of this domestic network continue to function.</p><p>The government began its censorship campaign by redirecting or blocking websites. As internet use grew, it adopted more sophisticated approaches. For example, the <a href="https://en.wikipedia.org/wiki/Telecommunication_Company_of_Iran" target="_blank">Telecommunication Company of Iran</a> uses a technique called <a href="https://www.fortinet.com/resources/cyberglossary/dpi-deep-packet-inspection" target="_blank">deep packet inspection</a> to analyze the content of data packets in real time. This method enables it to identify and block specific types of traffic, such as VPN connections, messaging apps, social media platforms, and banned websites.</p><h2>The Stealth of Satellite Transmissions<br/></h2><p>Toosheh’s communication workaround builds on a history of satellite TV adoption in Middle Eastern and North African countries. By the early 2000s, satellite dishes were common in Iran; today the majority of households in Iran have access to satellite TV despite its official prohibition.</p><p>Unlike subscription services such as DirecTV and Dish Network, “free-to-air” satellite TV broadcasts are unencrypted and can be received by anyone with a dish and receiver—no subscription required. Because the signals are open, users can also capture and store the data they carry, rather than simply watching it live. Tech-savvy people learned that they could use a digital video broadcasting (DVB) card—a piece of hardware that connects to a computer and tunes into satellite frequencies—to transform a personal computer into a satellite receiver.
This way, they could watch and store media locally as well as download data from dedicated channels.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Photo of satellite dishes adorning the side of an apartment building." class="rm-shortcode" data-rm-shortcode-id="a558326e8ca2bd5c645e392fb0166b58" data-rm-shortcode-name="rebelmouse-image" id="577d2" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-satellite-dishes-adorning-the-side-of-an-apartment-building.jpg?id=65520620&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Many Iranian citizens have free-to-air satellite dishes, like the ones on this apartment building in Tehran, and can thus download Toosheh transmissions, giving them a lifeline during internet blackouts.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Morteza Nikoubazl/NurPhoto/Getty Images</small></p><p>Toosheh, a Persian word that translates to “knapsack,” is the brainchild of <a href="https://x.com/mehdiy_fa" target="_blank">Mehdi Yahyanejad</a>, an Iranian-American technologist and entrepreneur. Yahyanejad cofounded NetFreedom Pioneers in 2012. He proposed that the satellite-computer connections enabled by a DVB card could be re-created in software, eliminating the need for specialized hardware. He added a simple digital interface to the software to make it easy for anyone to use. The next breakthrough came when the NFP team developed a new transfer protocol that tricks ordinary satellite receivers into downloading data alongside audio and video content. Thus, Toosheh was born.</p><p>Satellite TV uses a file system called an <a href="https://en.wikipedia.org/wiki/MPEG_transport_stream" target="_blank">MPEG transport stream</a> that allows multiple audio, video, or data layers to be packaged into a single stream file. 
When you tune in to a satellite channel and select an audio option or closed captions, you’re accessing data stored in different parts of this stream. The NFP team’s insight was that, by piggybacking on one of these layers, Toosheh could send an MPEG stream that included documents, videos, and more.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="An illustration of an 8 step process for sending digital files via satellite TV signals." class="rm-shortcode" data-rm-shortcode-id="500fc02c0c38f890606e42dec590ae8f" data-rm-shortcode-name="rebelmouse-image" id="371ea" loading="lazy" src="https://spectrum.ieee.org/media-library/an-illustration-of-an-8-step-process-for-sending-digital-files-via-satellite-tv-signals.png?id=65521138&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">HOW TOOSHEH WORKS: At NetFreedom Pioneers, content curators pull together files—news articles, videos, audio, and software [1]. Toosheh’s encoder software [2] compresses the files into a bundle, in .ts format, creating an MPEG transport stream [3]. From there, it’s uploaded to a server for transmission [4] via a free-to-air TV channel on a Yahsat satellite that’s positioned over the Middle East to provide regional coverage [5]. Satellite receivers [6] directly capture the data streams, which are downloaded to computers, smartphones, and other devices [7], and decoded by Toosheh software [8].</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Chris Philpot</small></p><p>A satellite receiver can’t tell the difference between our data and normal satellite audio and video data since it only “sees” the MPEG streams, not what’s encoded on them. This means the data can be downloaded and read, watched, and saved on local devices such as computers, smartphones, or storage devices.
What’s more, the system is entirely private: No one can detect whether someone has received data through Toosheh; there are no traceable logs of user activity.</p><p>Toosheh doesn’t provide internet access, but rather delivers curated data through satellite technology. The fundamental distinction lies in the way users interact with the system. Unlike traditional internet services, where you type a request into your browser and receive data in response, Toosheh operates more like a combination of radio and television, presenting information in a magazine-like format. Users don’t make requests; instead, they receive 1 to 5 gigabytes of prepackaged, carefully selected data.</p><p class="pull-quote"><span>Access to information is not only about news or politics, but about exposure to possibilities.  </span></p><p>During this year’s internet blackout, we distributed official statements from Iranian opposition leader Crown Prince Reza Pahlavi and the U.S. government. We provided first-aid tutorials for medics and injured protesters. We sent uncensored news reports from BBC Persian, Iran International, IranWire, VOA Farsi, and others. We also shared critical software packages including anticensorship and antisurveillance tools, along with how-to guides to help people securely connect to Starlink satellite terminals, allowing them to stay protected and anonymous as they sent their own communications.</p><h2>How to Combat Signal Interference<br/></h2><p>Because Toosheh relies on one-way satellite broadcasts, it evades the usual tactics governments use to block internet access. However, it remains vulnerable to <a href="https://spectrum.ieee.org/satellite-jamming" target="_blank">satellite signal jamming</a>.</p><p>The Iranian government is notorious for deploying signal jamming, especially in larger cities. 
In 2009, the government <a href="https://www.dw.com/fa-ir/%D9%86%D8%A7%D8%AA%D9%88%D8%A7%D9%86%DB%8C-%D8%AF%D8%B1-%D9%85%D9%82%D8%A7%D8%A8%D9%84-%D8%A7%D9%85%D9%88%D8%A7%D8%AC-%D9%BE%D8%A7%D8%B1%D8%A7%D8%B2%DB%8C%D8%AA-%D8%A7%D8%B2-%D8%AA%D9%87%D8%B1%D8%A7%D9%86/a-5417209" target="_blank">used uplink interference</a>, which attacks the satellite in orbit by beaming strong noise in the frequency of the satellite’s receiver. This makes it impossible for the satellite to distinguish the information it’s supposed to receive. However, because this type of attack temporarily disables the entire satellite, Iran was threatened with international <a href="https://www.dw.com/fa-ir/%D8%AA%D8%B4%D8%AF%DB%8C%D8%AF-%D8%A7%D9%86%D8%AA%D9%82%D8%A7%D8%AF%D9%87%D8%A7-%D8%A8%D9%87-%D8%A7%D8%B1%D8%B3%D8%A7%D9%84-%D9%BE%D8%A7%D8%B1%D8%A7%D8%B2%DB%8C%D8%AA-%D8%A7%D8%B2-%D8%B3%D9%88%DB%8C-%D8%A7%DB%8C%D8%B1%D8%A7%D9%86/a-5382663" target="_blank">sanctions</a> and in 2012 stopped using the method.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A chart displayed on a cellphone shows internet connectivity in Iran dropped from almost 100% to 0% on 9 January 2026." class="rm-shortcode" data-rm-shortcode-id="c5f3ef2e60cfa653b7c461cda6d68e0f" data-rm-shortcode-name="rebelmouse-image" id="c778a" loading="lazy" src="https://spectrum.ieee.org/media-library/a-chart-displayed-on-a-cellphone-shows-internet-connectivity-in-iran-dropped-from-almost-100-to-0-on-9-january-2026.jpg?id=65520652&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">A graph of network connectivity in Iran shows that on 9 January 2026, internet access dropped from nearly 100 percent to 0.
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Samuel Boivin/NurPhoto/Getty Images</small></p><p>The current method, called terrestrial jamming, uses antennas installed at higher elevations than the surrounding buildings to beam strong noise over a specific area in the frequency range of household receivers. This attack keeps some packets from arriving and corrupts others, effectively jamming the transmission. But it’s short-range and requires significant power, so it’s impossible to implement nationwide. There are always people somewhere who can still watch TV, download from Toosheh, or tune into a satellite radio despite the jamming. Even so, we wanted a workaround that would keep our transmissions broadly accessible.</p><h2>From Crisis Response to Public Access<br/></h2><p>Toosheh initially came online in 2015 in Iran and Afghanistan. Its full potential, however, was first realized during the 2019 protests in Iran, which saw the most widespread internet shutdown prior to the blackout this year. <a href="https://www.wired.com/story/iran-news-internet-shutdown/" target="_blank"><em>Wired</em></a> called the 2019 shutdown “the most severe disconnection” tracked by <a href="https://netblocks.org/" target="_blank">NetBlocks</a> in any country in terms of its “technical complexity and breadth.” Our technology helped thousands of people stay informed.
We sent crucial local updates, legal-aid guides, digital security tools, and independent news to satellite receivers all over the country, seeing a sixfold increase in our user base.</p><p>When that wave of protests subsided, the government allowed some communication services to return. People were again able to access the free internet using VPNs and other antifilter software that allowed them to bypass restrictions. Toosheh then became a public access point for news, educational material, and entertainment beyond government filtering.</p><p>Toosheh’s impact is often personal. A traveling teacher in western Iran told NFP that he regularly distributed Toosheh files to students in remote villages. One package included footage of female athletes competing in the Olympic Games, something never broadcast in Iran. For one young girl, it was the first time she realized women could compete professionally in sports. That moment underscores a broader truth: Access to information is not only about news or politics, but about exposure to possibilities.</p><h2>The Cost of Toosheh<br/></h2><p>Unlike internet-based systems, Toosheh’s operational cost remains constant regardless of the number of users. A single TV satellite in geostationary earth orbit, deployed and maintained by an international company such as Eutelsat, can broadcast to an entire continent with no increase in cost to audiences. What’s more, the startup cost for users isn’t high: A satellite dish and receiver in Iran costs less than US $50, which is affordable to many. And it costs nothing for people to use Toosheh’s service and receive its files.</p><p class="pull-quote"><span>We aim not just to build a tool for censorship circumvention, but to redefine access itself. </span></p><p>However, operating the service is costly: NetFreedom Pioneers pays tens of thousands of dollars a month for satellite bandwidth. We had received funding from the U.S. 
State Department, but in August of 2025, that funding ended, forcing us to suspend services in Iran.</p><p>Then the December protests happened, and broadcasting to Iran became an urgent priority. To turn Toosheh back on, we needed roughly $50,000 a month. With the support of a handful of private donors, we were able to meet these costs and sustain operations in Iran for a few months, though our future there and elsewhere is uncertain.</p><h2>Satellites Against Censorship<br/></h2><p>Toosheh’s revival in Iran came alongside NFP’s ongoing support for deployments of Starlink, a satellite internet service that allows users to connect directly to satellites rather than relying on domestic networks, which the government can shut down. Unlike Toosheh’s one-way broadcasts, <a href="https://spectrum.ieee.org/tag/starlink" target="_blank">Starlink</a> provides full two-way internet access, enabling users to send messages, upload videos, and communicate with the outside world.</p><p>In 2022, we started gathering <a href="https://www.gofundme.com/f/urgent-help-deliver-starlink-and-vpn-access-for-freedom" target="_blank">donations</a> to buy Starlink terminals for Iran. We have delivered more than 300 of the <a href="https://www.theguardian.com/world/2026/jan/13/ecosystem-smuggled-tech-iran-last-link-outside-world-internet" target="_blank">roughly 50,000</a> there, enabling citizens to send encrypted updates and videos to us from inside the country. Because the technology is banned by the government, access remains limited and carries risk; Iranian authorities have recently arrested Starlink users and sellers. And unlike Toosheh’s receive-only broadcasts, Starlink terminals transmit signals back to orbit, creating a radio footprint that can potentially be detected.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A photo of a laptop screen says the user is offline." 
class="rm-shortcode" data-rm-shortcode-id="2c0caa05d5589d7d25beeb8342db442e" data-rm-shortcode-name="rebelmouse-image" id="103c7" loading="lazy" src="https://spectrum.ieee.org/media-library/a-photo-of-a-laptop-screen-says-the-user-is-offline.png?id=65521782&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The internet shutdown in Iran continued after the attacks by Israel and the United States began in late February, preventing Iranians from communicating with the outside world and with one another.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Fatemeh Bahrami/Anadolu/Getty Images</small></p><p>Looking ahead, we envision Toosheh becoming a foundational part of global digital resilience. It is uncensored, untraceable, and resistant to government shutdowns. Because Toosheh is downlink only, its value can be hard to explain to those living in the free world, accustomed as they are to open internet access. Yet people living under censorship have few other choices when there’s a digital blackout.</p><p>Currently, NFP is developing new features like intelligent content curation and automatic prioritization of data packages based on geographic or situational needs. And we’re experimenting with local sharing tools that allow users who receive Toosheh broadcasts to redistribute those files via Wi-Fi hotspots or other offline networks, which could extend the system’s reach to disaster zones, conflict areas, and climate-impacted regions where infrastructure may be destroyed.</p><p>We’re also looking at other use cases. Following the Taliban’s return to power in Afghanistan, NetFreedom Pioneers designed a satellite-based system to deliver educational materials. Our goal is to enable private, large-scale distribution of coursework to anyone—including the girls who are banned from Afghanistan’s schools.
The system is technically ready but has yet to secure funding for deployment.</p><p>We aim not just to build a tool for censorship circumvention, but to redefine access itself. Whether in an Iranian city under surveillance, a Guatemalan village without internet, or a refugee camp in East Africa, Toosheh offers a powerful and practical model for delivering vital information without relying on vulnerable or expensive networks.</p><p>Toosheh is a reminder that innovation doesn’t have to mean complexity. Sometimes, the most transformative ideas are the simplest, like delivering data through the sky, quietly and affordably, into the hands of those who need it most.<span class="ieee-end-mark"></span></p>]]></description><pubDate>Wed, 15 Apr 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/iran-internet-blackout-satellite-tv</guid><category>Satellite-communications</category><category>Censorship</category><category>Iran</category><category>Protests</category><category>Democracy</category><category>Internet-shutdowns</category><dc:creator>Evan Alireza Firoozi</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/a-photo-collage-of-a-woman-in-a-hijab-protests-in-iran-a-satellite-satellite-dishes-and-the-words-you-are-not-connected-to-t.png?id=65521125&amp;width=980"></media:content></item><item><title>Crypto Faces Increased Threat From Quantum Attacks</title><link>https://spectrum.ieee.org/quantum-safe-crypto</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/abstract-pixel-art-resembling-a-padlock-and-token.jpg?id=65520763&width=1200&height=800&coordinates=156%2C0%2C156%2C0"/><br/><br/><p>The <a href="https://spectrum.ieee.org/post-quantum-cryptography-standards-nist" target="_self">race</a> to transition online security protocols to ones that can’t be cracked by a quantum computer is already on. The algorithms that are commonly used today to protect data online—<a href="https://en.wikipedia.org/wiki/RSA_cryptosystem" rel="noopener noreferrer" target="_blank">RSA</a> and <a href="https://en.wikipedia.org/wiki/Elliptic-curve_cryptography" rel="noopener noreferrer" target="_blank">elliptic curve cryptography</a>—are uncrackable by supercomputers, but a large enough quantum computer would make quick work of them. There are <a href="https://spectrum.ieee.org/post-quantum-cryptography-2668949802" target="_self">algorithms</a> secure enough to be out of reach for both classical and future quantum machines, called post-quantum cryptography, but transitioning to these is a <a href="https://spectrum.ieee.org/post-quantum-cryptography-2667758178" target="_self">work in progress</a>. </p><p>Late last month, the team at <a href="https://quantumai.google/" rel="noopener noreferrer" target="_blank">Google Quantum AI</a> published a <a href="https://arxiv.org/abs/2603.28846" rel="noopener noreferrer" target="_blank">whitepaper</a> that added significant urgency to this race. In it, the team showed that the size of a quantum computer that would pose a cryptographic threat is approximately 20 times <a href="https://research.google/blog/safeguarding-cryptocurrency-by-disclosing-quantum-vulnerabilities-responsibly/" rel="noopener noreferrer" target="_blank">smaller</a> than previously thought. 
This is still far beyond the reach of the quantum computers that exist today: The largest machines currently consist of approximately 1,000 quantum bits, or qubits, and the whitepaper estimated that about 500 times as many are needed. Nonetheless, this shortens the timeline to switch over to post-quantum algorithms. </p><p>The news had a surprising beneficiary: The obscure cryptocurrency <a href="https://algorand.co/" rel="noopener noreferrer" target="_blank">Algorand</a> <a href="https://www.indexbox.io/blog/algorand-price-surges-44-after-google-research-paper-citation/" rel="noopener noreferrer" target="_blank">jumped</a> 44 percent in price in response. The whitepaper called out Algorand specifically for implementing post-quantum cryptography on its blockchain. We caught up with Algorand’s chief scientific officer and professor of computer science and engineering at the University of Michigan, <a href="https://web.eecs.umich.edu/~cpeikert/" rel="noopener noreferrer" target="_blank">Chris Peikert</a>, to understand how this announcement is impacting cryptography, why cryptocurrencies are feeling the effects, and what the future might hold. Peikert’s early work on a particular type of algorithm known as <a href="https://en.wikipedia.org/wiki/Lattice-based_cryptography" rel="noopener noreferrer" target="_blank">lattice cryptography</a> underlies most post-quantum security today.</p><p><strong>IEEE Spectrum: </strong>What is the significance of this Google Quantum AI whitepaper?</p><p><strong>Peikert:</strong> The upshot of this paper is that it shows that a quantum computer would be able to break some of the cryptography that is most widely used, especially in blockchains and cryptocurrencies, with much, much fewer resources than had previously been established.
Those resources include the time that it would take to do so and the number of qubits (or quantum bits) that it would have to use.</p><p>This cryptography is very central to not just cryptocurrencies, but more broadly to cryptography on the internet. It is also used for secure web connections between web browsers and web servers. Versions of elliptic curve cryptography are used in national security systems and military encryption. It’s very prevalent and pervasive in all modern networks and protocols.</p><p>And not only was this paper improving the algorithms, but there was also a concurrent paper showing that the hardware itself was substantially improved. The claim here was that the number of physical qubits needed to achieve a certain kind of logical qubit was also greatly reduced. These two kinds of improvements are compounding upon each other. It’s a kind of a win-win situation from the quantum computing perspective, but a lose-lose situation for cryptography.</p><p><strong>IEEE Spectrum: </strong>What do Google AI’s findings mean for cryptocurrencies and the broader cybersecurity ecosystem?</p><p><strong>Peikert:</strong> There’s always been this looming threat in the distance of quantum computers breaking a large fraction of the cryptography that’s used throughout the cryptocurrency ecosystem. And I think what this paper did was really the loudest alarm yet that these kinds of quantum attacks might not be as far off as some have suspected, or hoped, in recent years. It’s caused a reevaluation across the industry, and a moving up of the timeline for when quantum computers might be capable of breaking this cryptography.</p><p>When we think about the timelines and when it’s important to have completed these transitions [to post-quantum cryptography], we also need to factor in the unknown improvements that we should expect to see in the coming years. The science of quantum computing will not stay static, and there will be these further breakthroughs. 
We can’t say exactly what they will be or when they will come, but you can bet that they will be coming.</p><p><strong>IEEE Spectrum:</strong> What is your guess on if or when quantum computers will be able to break cryptography in the real world?</p><p><strong>Peikert:</strong> Instead of thinking about a specific date when we expect them to come, we have to think about the probabilities and the risks as time goes on. There have been huge breakthrough developments, including not only this paper, but also <a href="https://research.google/blog/making-quantum-error-correction-work/" target="_blank">some</a> last year. But even with these, I think that the chance of a cryptographic attack by quantum computers being successful in the next three years is extremely low, maybe less than a percent. But then, as you get out to several years, like five, six, or 10 years, one has to seriously consider a probability, maybe 5 percent or 10 percent or more. So it’s still rather small, but significant enough that we have to worry about the risk, because the value that is protected by this kind of cryptography is really enormous. </p><p>The U.S. government has put 2035 as its target for migrating all of the national security systems to post-quantum cryptography. That seems like a prudent date, given the timelines that it takes to upgrade cryptography. It’s a slow process. It has to be done very deliberately and carefully to make sure that you’re not introducing new vulnerabilities, that you’re not making mistakes, that everything still works properly. So, you know, given the outlook for quantum computers on the horizon, it’s really important that we prepare now, or ideally, yesterday, or a few years ago, for that kind of transition.</p><p><strong>IEEE Spectrum: </strong>Are there significant roadblocks you see to industrial adoption of post-quantum cryptography going forward?</p><p><strong>Peikert:</strong> Cryptography is very hard to change. 
We’ve only had one or maybe two major transitions in cryptography since the early 1980s or late 1970s, when the field first was invented. We don’t really have a systematic way of transitioning cryptography. </p><p>An additional challenge is that the performance trade-offs are very different in post-quantum cryptography than they are in the legacy systems. Keys and cipher texts and digital signatures are all significantly larger in post-quantum cryptography, but the computations are actually faster, typically. People have optimized cryptography for speed in the past, and we have very good fast speeds now for post-quantum cryptography, but the sizes of the keys are a challenge. </p><p>Especially in blockchain applications, like cryptocurrencies, space on the blockchain is at a premium. So it calls for a reevaluation in many applications of how we integrate the cryptography into the system, and that work is ongoing. And, the blockchain ecosystem uses a lot of advanced cryptography, exotic things like zero-knowledge proofs. In many cases, we have rudimentary constructions of these fancy cryptography tools from post-quantum-type mathematics, but they’re not nearly as mature and industry-ready as the legacy systems that have been deployed. It continues to be an important technical challenge to develop post-quantum versions of these very fancy cryptographic schemes that are used in cutting-edge applications.</p><p><strong>IEEE Spectrum: </strong>As an academic cryptography researcher, what attracted you to work with a cryptocurrency, and Algorand in particular?</p><p><strong>Peikert:</strong> My former Ph.D. advisor is <a href="https://en.wikipedia.org/wiki/Silvio_Micali" target="_blank">Silvio Micali</a>, the inventor of Algorand. The system is very elegant. It is a very high-performing blockchain system, and it uses very little energy, has fast transaction finalization, and a number of other great features. 
And Silvio appreciated that this quantum threat was real and was coming, and in 2021 the team approached me about helping to improve the Algorand protocol at the basic levels to become more post-quantum secure. That was a very exciting opportunity, because it was a difficult engineering and scientific challenge to integrate post-quantum cryptography into all the different technical and cryptographic mechanisms that were underlying the protocol.</p><p><strong>IEEE Spectrum: </strong>What is the current status of post-quantum cryptography in Algorand, and blockchains in general? </p><p><strong>Peikert:</strong> We’ve identified some of the most pressing issues and worked our way through some of them, but it’s a many-faceted problem overall. We started with the integrity of the chain itself, which is the transaction history that everybody has to agree upon. </p><p>Our first major project was developing a system that would add post-quantum security to the history of the chain. We developed a system called <a href="https://dev.algorand.co/concepts/protocol/state-proofs/" rel="noopener noreferrer" target="_blank">state proofs</a> for that, which is a mixture of ordinary post-quantum cryptography and also some more fancy cryptography: It’s a way of taking a large number of signatures and digesting them down into a much smaller number of signatures, while still being confident that this large number of signatures actually exists and is properly formed. We also followed it with other papers and projects that are about adding post-quantum cryptography and security to other aspects of the blockchain in the Algorand ecosystem. </p><p>It’s not a complete project yet. We don’t claim to be fully post-quantum secure. That’s a very challenging target to hit, and there are aspects that we will continue to work on into the near future.</p><p><strong>IEEE Spectrum: </strong>In your view, will we adopt post-quantum cryptography before the risks actually catch up with us?
</p><p><strong>Peikert:</strong> I tend to be an optimist about these things. I think that it’s a very good thing that more people in decision-making roles are recognizing that this is an important topic, and that these kinds of migrations have to be done. I think that we can’t be complacent about it, and we can’t kick the can down the road much longer. But I do see that the focus is being put on this important problem, so I’m optimistic that most important systems will eventually have either good mitigations or full migrations in place. </p><p>But it’s also a point on the horizon that we don’t know exactly when it will come. So, there is the possibility that there is a huge breakthrough, and we have many fewer years than we might have hoped for, and that we don’t get all the systems upgraded that we would like to have fixed by the time quantum computers arrive.</p>]]></description><pubDate>Wed, 15 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/quantum-safe-crypto</guid><category>Quantum-computing</category><category>Post-quantum-cryptography</category><category>Cryptocurrency</category><category>Lattice-cryptography</category><category>Security-protocols</category><category>Blockchain</category><category>Cryptography</category><dc:creator>Dina Genkina</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/abstract-pixel-art-resembling-a-padlock-and-token.jpg?id=65520763&amp;width=980"></media:content></item><item><title>Sarang Gupta Builds AI Systems With Real-World Impact</title><link>https://spectrum.ieee.org/openai-engineer-sarang-gupta</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-young-adult-indian-man-smiling-with-his-arms-crossed.png?id=65519413&width=1200&height=800&coordinates=0%2C83%2C0%2C84"/><br/><br/><p>Like many engineers, <a href="https://www.linkedin.com/in/sarang-gupta/" rel="noopener noreferrer" target="_blank">Sarang Gupta</a> spent his childhood tinkering with everyday items around the house. From a young age he gravitated to projects that could make a difference in someone’s everyday life.</p><p>When the family’s microwave plug broke, Gupta and his father figured out how to fix it. When a drawer handle started jiggling annoyingly, the youngster made sure it didn’t do so for long.</p><h3>Sarang Gupta</h3><br/><p><strong>Employer</strong></p><p>OpenAI in San Francisco</p><p><strong>Job</strong></p><p>Data science staff member</p><p><strong>Member grade</strong></p><p>Senior member</p><p><strong>Alma maters </strong></p><p>The Hong Kong University of Science and Technology; Columbia</p><p>By age 11, his interest expanded from nuts and bolts to software. He learned <a data-linked-post="2674010559" href="https://spectrum.ieee.org/top-programming-languages-2025" target="_blank">programming languages</a> such as <a href="https://en.wikipedia.org/wiki/BASIC" rel="noopener noreferrer" target="_blank">Basic</a> and <a href="https://en.wikipedia.org/wiki/Logo_(programming_language)" rel="noopener noreferrer" target="_blank">Logo</a> and designed simple programs, including one that helped a local restaurant automate online ordering and billing.</p><p>Gupta, an IEEE senior member, brings his mix of curiosity, hands-on problem-solving, and a desire to make things work better to his role as a member of the data science staff at <a href="https://openai.com/" rel="noopener noreferrer" target="_blank">OpenAI</a> in San Francisco.
He works with the go-to-market (GTM) team to help businesses adopt <a href="https://chatgpt.com/" rel="noopener noreferrer" target="_blank">ChatGPT</a> and other products. He builds data-driven models and systems that support the sales and marketing divisions.</p><p>Gupta says he tries to ensure his work has an impact. When making decisions about his career, he says, he thinks about what AI solutions he can unlock to improve people’s lives.</p><p>“If I were to sum up my overall goal in one sentence,” he says, “it’s that I want AI’s benefits to reach as many people as possible.”</p><h2>Pursuing engineering through a business lens</h2><p>Gupta’s early interest in tinkering and programming led him to choose physics, chemistry, and math as his higher-level subjects at <a href="https://www.cirschool.org/" rel="noopener noreferrer" target="_blank">Chinmaya International Residential School</a>, in Tamil Nadu, India. As part of the high school’s <a href="https://www.ibo.org/" rel="noopener noreferrer" target="_blank">International Baccalaureate</a> chapter, students select three subjects in which to specialize.</p><p>“I was interested in engineering, including the theoretical part of it,” Gupta says. “But I was always more interested in the applications: how to sell that technology or how it ties to the real world.”</p><p>After graduating in 2012, he moved overseas to attend the <a href="https://hkust.edu.hk/" rel="noopener noreferrer" target="_blank">Hong Kong University of Science and Technology</a>. The university offered a <a href="https://techmgmt.hkust.edu.hk/" rel="noopener noreferrer" target="_blank">dual bachelor’s program</a> that allowed him to earn one degree in industrial engineering and another in business management in just four years.</p><p>In his spare time, Gupta built a smartphone app that let students upload their class schedules and find classmates to eat lunch with. The app didn’t take off, he says, but he enjoyed developing it.
He also launched Pulp Ads, a business that printed advertisements for student groups on tissues and paper napkins, which were distributed in the school’s cafeterias. He made some money, he says, but shuttered the business after about a year.</p><p>After graduating from the university in 2016, he decided to work in Hong Kong’s financial hub and joined <a href="https://www.goldmansachs.com/" rel="noopener noreferrer" target="_blank">Goldman Sachs</a> as an analyst in the bank’s operations division.</p><h2>From finance to process optimization at scale</h2><p>After two parties agree on securities transactions, the bank’s operations division ensures that the trade details are recorded correctly, the securities and payments are ready to transfer, and the transaction settles accurately and on time.</p><p>As an analyst, Gupta was tasked with finding bottlenecks in the bank’s workflows and fixing them. He identified an opportunity to automate trade reconciliation, the process in which analysts manually compared data across spreadsheets and systems to make sure a transaction’s details were consistent. The process helped ensure financial transactions were recorded accurately and settled correctly.</p><p>Gupta built internal automation tools that pulled trade data from different systems, ran validation checks, and generated reports highlighting any discrepancies.</p><p>“Instead of analysts manually checking large datasets, the tools automatically flagged only the cases that required investigation,” he says. “This helped the team spend less time on repetitive verification tasks and more time resolving complex issues. It was also my first real exposure to how software and data systems could dramatically improve operational workflows.”</p><p class="pull-quote">“Whether it’s helping a person improve a trait like that or driving efficiencies at a business, AI just has so much potential to help.
I’m excited to be a little part of that.”</p><p>The experience made him realize he wanted to work more deeply in technology and data-driven systems, he says. He decided to return to school in 2018 to study data science and AI, fields that were just beginning to surge into broader awareness.</p><p>He discovered that <a href="https://www.columbia.edu/" rel="noopener noreferrer" target="_blank">Columbia</a> offered a dedicated master’s degree program in data science with a focus on AI. After being accepted in 2019, he moved to New York City.</p><p>Throughout the program, he gravitated to the applied side of machine learning, taking courses in applied deep learning and neural networks.</p><p>One of his major academic highlights, he says, was a project he did in 2019 with the <a href="https://brown.columbia.edu/" rel="noopener noreferrer" target="_blank">Brown Institute</a>, a joint research lab between Columbia and <a href="https://www.stanford.edu/" rel="noopener noreferrer" target="_blank">Stanford</a> focused on using technology to improve journalism. The team worked with <a href="https://www.inquirer.com/" rel="noopener noreferrer" target="_blank"><em>The Philadelphia Inquirer</em></a> to help the newsroom staff better understand their coverage from a geographic and social standpoint. The project highlighted “news deserts”—underserved communities for which the newspaper was not providing much coverage—so the publication could redirect its reporting resources.</p><p>To identify those areas, <a href="https://aclanthology.org/2020.nlpcss-1.17.pdf" rel="noopener noreferrer" target="_blank">Gupta and his team built tools that extracted locations such as</a> street names and neighborhoods from news articles and mapped them to visualize where most of the coverage was concentrated.
The <em>Inquirer</em> implemented the tool in several ways, including a new <a href="https://medium.com/the-lenfest-local-lab/how-we-built-a-tool-to-spot-geographic-clusters-and-gaps-in-local-news-e553abe88287" rel="noopener noreferrer" target="_blank">web page that aggregated stories about COVID-19 by county</a>.</p><p>“Journalism was an interesting problem set for me, because I really like to read the news every day,” Gupta says. “It was an opportunity to work with a real newsroom on a problem that felt really impactful for both the business and the local community.”</p><h2>The GenAI inflection point</h2><p>After earning his master’s degree in 2020, Gupta moved to San Francisco to join <a href="https://asana.com/" rel="noopener noreferrer" target="_blank">Asana</a>, the company that developed the work management platform of the same name. He was drawn to the opportunity to work for a relatively small company where he could have end-to-end ownership of projects. He joined the organization as a product data scientist, focusing on A/B testing for new platform features.</p><p>Two years later, a new opportunity emerged: He was asked to lead the launch of Asana Intelligence, an internal machine learning team building AI-powered features into the company’s products.</p><p>“I felt I didn’t have enough experience to be the founding data scientist,” he says. “But I was also really interested in the space, and spinning up a whole machine learning program was an opportunity I couldn’t turn down.”</p><p>The Asana Intelligence team was given six months to build several machine learning–powered features to help customers work more efficiently.
They included automatic summaries of project updates, insights about potential risks or delays, and recommendations for next steps.</p><p>The team met that goal and launched several other features, including <a href="https://help.asana.com/s/article/smart-status" target="_blank">Smart Status</a>, an AI tool that analyzes a project’s tasks, deadlines, and activity, then generates a status update.</p><p>“When you finally launch the thing you’ve been working on, and you see the usage go up, it’s exhilarating,” he says. “You feel like that’s what you were building toward: users actually seeing and benefiting from what you made.”</p><p>Gupta and his team also translated that first wave of work into reusable frameworks and documentation to make it easier to create machine learning features at Asana. He and his colleagues filed several <a href="https://patents.google.com/patent/US20250355685A1/" rel="noopener noreferrer" target="_blank">U.S. patents</a>.</p><p>Around the time he took on that role, OpenAI launched ChatGPT. The mainstreaming of generative AI and large language models shifted much of his work at Asana from model development to assessing LLMs.</p><p>OpenAI captured the attention of people around the world, including Gupta. In September 2025 he left Asana to join OpenAI’s data science team.</p><p>The transition has been both energizing and humbling, he says. At OpenAI, he works closely with the marketing team to help guide strategic decisions. His work focuses on developing models to understand the efficiency of different marketing channels, to measure what’s driving impact, and to help the company better reach and serve its customers.</p><p>“The pace is very different from my previous work. Things move quickly,” he says. “The industry is extremely competitive, and there’s a strong expectation to deliver fast. It’s been a great learning experience.”</p><p>Gupta says he plans to stay in the AI space.
With technology evolving so rapidly, he says, he sees enormous potential for task automation across industries. AI has already transformed his core software engineering work, he says, and it’s helped him enhance areas that aren’t natural strengths.</p><p>“I’m not a good writer, and AI has been huge in helping me frame my words better and <a href="https://spectrum.ieee.org/engineering-communication" target="_blank">present my work more clearly</a>,” he says. “Whether it’s helping a person improve a trait like that or driving efficiencies at a business, AI just has so much potential to help. I’m excited to be a little part of that.”</p><h2>Exploring IEEE publications and connections</h2><p>Gupta has been an IEEE member since 2024, and he values the organization as both a technical resource and a professional network.</p><p>He regularly turns to IEEE publications and the <a href="https://ieeexplore.ieee.org/Xplore/guesthome.jsp" rel="noopener noreferrer" target="_blank">IEEE Xplore Digital Library</a> to read articles that keep him abreast of the evolution of AI, data science, and the engineering profession.</p><p>IEEE’s <a href="https://cis.ieee.org/activities/membership-activities/ieee-member-directory" rel="noopener noreferrer" target="_blank">member directory</a> tools are another valuable resource that he uses often, he says.</p><p>“It’s been a great way to connect with other engineers in the same or similar fields,” he says. “I love sharing and hearing about what folks are working on. 
It brings me outside of what I’m doing day to day.</p><p>“It inspires me, and it’s something I really enjoy and cherish.”</p>]]></description><pubDate>Tue, 14 Apr 2026 18:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/openai-engineer-sarang-gupta</guid><category>Ieee-member-news</category><category>Openai</category><category>Generative-ai</category><category>Chatgpt</category><category>Careers</category><category>Type-ti</category><dc:creator>Julianne Pepitone</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/a-young-adult-indian-man-smiling-with-his-arms-crossed.png?id=65519413&amp;width=980"></media:content></item><item><title>What It’s Like to Live With an Experimental Brain Implant</title><link>https://spectrum.ieee.org/bci-user-experience</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-close-up-shows-a-man-seated-in-a-wheelchair-attached-to-the-top-of-his-head-are-two-devices-each-with-a-cable-extending-away.jpg?id=65504719&width=1200&height=800&coordinates=0%2C83%2C0%2C84"/><br/><br/><p><strong>Scott Imbrie vividly remembers</strong> the first time he used a robotic arm to shake someone’s hand and felt the robotic limb as if it were his own. “I still get goosebumps when I think about that initial contact,” he says. “It’s just unexplainable.” The moment came courtesy of a brain implant: an array of electrodes that let him control a robotic arm and receive tactile sensations back to the brain.</p><p>Getting there took decades. In 1985, Imbrie woke up in the hospital after a car accident, with a broken neck and a doctor telling him he’d never use his hands or legs again. His response was an expletive, he says—and a decision. “I’m not going to allow someone to tell me what I can and can’t do.” With the determination of a headstrong 22-year-old, Imbrie gradually regained the ability to walk and some limited arm movement. Aware of how unusual his recovery was, the Illinois native wanted to help others in similar situations and began looking for research projects related to spinal cord injuries.
For decades, though, he wasn’t the right fit, until in 2020 he was finally accepted into a <a href="https://news.uchicago.edu/story/uchicago-researchers-re-create-sense-touch-and-motor-control-paralyzed-patient" target="_blank">University of Chicago trial</a>.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Elderly person in orange sweater sits as robotic arm with black hand extends forward" class="rm-shortcode" data-rm-shortcode-id="e63c60845055b0ac0aaa5b32194b121b" data-rm-shortcode-name="rebelmouse-image" id="11ece" loading="lazy" src="https://spectrum.ieee.org/media-library/elderly-person-in-orange-sweater-sits-as-robotic-arm-with-black-hand-extends-forward.jpg?id=65504759&width=980"/></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Two photos. The first shows a man sitting in a chair with a large robotic arm extending in front of him. The second is a close-up of implants on the surface of a brain.  " class="rm-shortcode" data-rm-shortcode-id="908fc96ae84be7cc9033eadb8be951d9" data-rm-shortcode-name="rebelmouse-image" id="5304e" loading="lazy" src="https://spectrum.ieee.org/media-library/two-photos-the-first-shows-a-man-sitting-in-a-chair-with-a-large-robotic-arm-extending-in-front-of-him-the-second-is-a-close-u.jpg?id=65504756&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Scott Imbrie has shaken hands with a robotic arm controlled by a brain implant. The electrodes record neural signals that enable him to move the device and receive tactile feedback. 
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Top: 60 Minutes/CBS News; Bottom: University of Chicago </small></p><p>Imbrie is part of a rarefied group: More people have gone to space than have received advanced brain-computer interfaces (<a href="https://spectrum.ieee.org/tag/bci" target="_self">BCI</a>) like his. But a growing number of companies are now attempting to move the devices out of neuroscience labs and into mainstream medical care, where they could help millions of people with paralysis and other neurological conditions. Some companies even hope that BCIs will eventually become a consumer technology.</p><p>None of that will be possible without people like Imbrie. He’s a member of the <a href="https://bcipioneers.org/" target="_blank">BCI Pioneers Coalition</a>, an advocacy group founded in 2018 by <a href="https://bcipioneers.org/" target="_blank">Ian Burkhart</a>, the first quadriplegic to regain hand movement using a brain implant.</p><p>That life-changing experience convinced Burkhart that BCIs will make the leap from lab to real world only if users help shape the technology by sharing their perspectives on what works, what doesn’t, and how the devices fit into daily life. The coalition aims to ensure that companies, clinicians, and regulators hear directly from trial participants.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Two images. The first is a photo of a man sitting in a wheelchair; attached to the top of his head is a device with a cable attached. The second is a medical image showing the location of electrodes in the brain.  
" class="rm-shortcode" data-rm-shortcode-id="338aabff57ac096c71e5d462f4959535" data-rm-shortcode-name="rebelmouse-image" id="3e41e" loading="lazy" src="https://spectrum.ieee.org/media-library/two-images-the-first-is-a-photo-of-a-man-sitting-in-a-wheelchair-attached-to-the-top-of-his-head-is-a-device-with-a-cable-atta.jpg?id=65504780&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Ian Burkhart founded the BCI Pioneers Coalition to ensure that companies developing brain implants hear directly from the people using them.  </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Left: Andrew Spear/Redux; Right: Ian Burkhart</small></p><p>The group also serves as a peer-support network for trial participants. That’s crucial, because despite the steady drumbeat of miraculous results from BCI trials, receiving a brain implant comes with significant risks. Surgical complications, such as bleeding or infection in the brain, are possible. Even more concerning is the potential psychological toll if the implant fails to work as expected or if life-changing improvements are eventually withdrawn.</p><p>Researchers spell this out upfront, and many are put off, says <a href="https://biologicalsciences.uchicago.edu/faculty/john-downey" target="_blank">John Downey</a>, an assistant professor of neurological surgery at the University of Chicago and the lead on Imbrie’s clinical trial. “I would say, the number of people I talk to about doing it is probably 10 to 20 times the number of people that actually end up doing it,” he says.</p><h2>What Happens in a BCI Trial? </h2><p>BCI pioneers arrive at their unique status via a number of paths, including spinal cord injuries, stroke-induced paralysis, and amyotrophic lateral sclerosis (ALS). 
The implants they receive come from <a href="https://blackrockneurotech.com/" target="_blank">Blackrock Neurotech</a>, <a href="https://neuralink.com/" target="_blank">Neuralink</a>, <a href="https://synchron.com/" target="_blank">Synchron</a>, and other companies, and are being tested for restoring limb function, controlling computers and robotic arms, and even restoring speech.</p><p>Many of the implants record signals from the motor cortex—the part of the brain that controls voluntary movements—to move external devices. Some others target the <a href="https://www.simplypsychology.org/somatosensory-cortex.html" target="_blank">somatosensory cortex</a>, which processes sensory signals from the body, including touch, pain, temperature, and limb position, to re-create tactile sensation.</p><h3>BCI Designs Used by Today’s Pioneers</h3><br/><img alt="Diagram comparing three brain-computer interface implants from Blackrock, Neuralink, Synchron." class="rm-shortcode" data-rm-shortcode-id="75d2979c205ebe19a1ea4e94507973c3" data-rm-shortcode-name="rebelmouse-image" id="4c076" loading="lazy" src="https://spectrum.ieee.org/media-library/diagram-comparing-three-brain-computer-interface-implants-from-blackrock-neuralink-synchron.png?id=65514139&width=980"/><p>Ease of use depends heavily on the application. Restoring function to a user’s own limbs or controlling robotic arms involves the most difficult learning curve. In early sessions, participants watch a virtual arm reach for objects while they imagine or attempt the same movement. 
Researchers record related brain signals and use them to train “decoder” software, which translates neural activity into control signals for a robotic arm or stimulation patterns for the user’s nerves or muscles.</p><p>Paralyzed in a 2010 swimming accident, Burkhart took part in a trial conducted by <a href="https://www.battelle.org/" target="_blank">Battelle Memorial Institute</a> and <a href="https://wexnermedical.osu.edu/" target="_blank">Ohio State University</a> from 2014 to 2021. His implant recorded signals from his motor cortex as he attempted to move his hand, and the system relayed those commands to electrodes in his arm that stimulated the muscles controlling his fingers.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A man seated at a desk has electronics wrapped around his right arm. He’s holding a device shaped like a guitar and looking at a screen showing the fretboard of a guitar. " class="rm-shortcode" data-rm-shortcode-id="2d4609ca465d88f228401cf0e56f91e9" data-rm-shortcode-name="rebelmouse-image" id="6b47b" loading="lazy" src="https://spectrum.ieee.org/media-library/a-man-seated-at-a-desk-has-electronics-wrapped-around-his-right-arm-he-u2019s-holding-a-device-shaped-like-a-guitar-and-looking.jpg?id=65504802&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Ian Burkhart, who is paralyzed from the chest down, received a brain implant that routed neural signals through a computer to his paralyzed muscles, enabling him to play a video game. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Battelle</small></p><p>Getting the system to work seamlessly took time, says Burkhart, and initially required intense concentration.
Eventually, he could shift his focus from each individual finger movement to the overall task, allowing him to swipe a credit card, pour from a bottle, and <a href="https://spectrum.ieee.org/brain-implants-and-wearables-let-paralyzed-people-move-again" target="_self">even play <em>Guitar Hero</em></a>.</p><p>Training a decoder is also not a one-and-done process. Systems must be regularly recalibrated to account for “neural drift”—the gradual shift in a person’s neural activity patterns over time. For complex tasks like robotic arm control, researchers may have to essentially train an entirely new decoder before each session, which can take up to an hour.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A man sits in a wheelchair surrounded by screens and electrical equipment. A device is attached to the top of his head, and a wire extends from it. Two other men stand in the room wearing masks.  " class="rm-shortcode" data-rm-shortcode-id="ec5eab3dfd4996ed87bf71eb84333f3d" data-rm-shortcode-name="rebelmouse-image" id="0cba2" loading="lazy" src="https://spectrum.ieee.org/media-library/a-man-sits-in-a-wheelchair-surrounded-by-screens-and-electrical-equipment-a-device-is-attached-to-the-top-of-his-head-and-a-wi.jpg?id=65504805&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Austin Beggin says that testing a BCI is hard work, but he adds that moments like petting his dog make it all worth it.  
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Daniel Lozada/The New York Times/Redux </small></p><p>Even after the system is ready, using the device can be taxing, says <a href="https://www.tiktok.com/@60minutes/video/7215008411992395054" target="_blank">Austin Beggin</a>, who was paralyzed in a swimming accident in 2015 and now participates in a Case Western Reserve University trial <a href="https://www.nytimes.com/2022/12/13/health/elon-musk-brain-implants-paralysis.html" target="_blank">aimed at restoring hand movement.</a> “The mental work of just trying to do something like shaking hands or feeding yourself is 100-fold versus you guys that don’t even think about it,” he says.</p><p>It’s also a serious time commitment. Beggin travels more than 2 hours from his home in Lima, Ohio, to Cleveland for two weeks every month to take part in experiments. All the equipment is set up in the house he stays in, and he typically works with the researchers for 3 to 4 hours a day. The majority of the experiments are not actually task-focused, he says, and instead are aimed at adjusting the control software or better understanding his neural responses to different stimuli.</p><p>But the BCI users say the hard work is worth it. Beyond the hope of restoring lost function, many feel a strong moral obligation to advance a technology that could help others. Beggin compares the pioneers to the early astronauts who laid the groundwork for the lunar landings. “We’re some of the first astronauts just to get shot up for a couple of hours and come back down to earth,” he says.</p><h2>The Emotional Impact of BCIs </h2><p>Speak to BCI early adopters and a pattern emerges: The biggest benefits are often more emotional than practical. Using a robotic arm to feed oneself or control a computer is clearly useful, but many pioneers say the most meaningful moments are the ones the experiment wasn’t even trying to produce. 
Beggin counts shaking his parents’ hands for the first time since his injury and stroking his pet dachshund as among his favorite moments. “That stuff is absolutely incredible,” he says.</p><p>Neuralink participant <a href="https://x.com/neuralink/status/1983263349715734982" target="_blank">Alex Conley</a>, who broke his neck in a car accident in 2021, uses his implant to control both a robotic arm and computers, enabling him to open doors, feed himself, and handle a smartphone. But he says the biggest boost has come from using computer-aided design software.</p><p>A former mechanic, Conley began using the software within days of receiving his implant to design parts that could be fabricated on a 3D printer. He has designed everything from replacement parts for his uncle’s power tools to bumpers for his brother-in-law’s truck. “I was a very big problem solver before my accident, I was able to fix people’s things,” he says. “This gives me that same little burst of joy.”</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Two photos show former U.S. president Barack Obama with a man seated in a wheelchair that has a robotic arm mounted to it. The first photo shows their whole bodies, the second is a close-up of a fist bump between Obama and the robotic hand. " class="rm-shortcode" data-rm-shortcode-id="df516748294d196a5ece83e680f3f325" data-rm-shortcode-name="rebelmouse-image" id="5acf9" loading="lazy" src="https://spectrum.ieee.org/media-library/two-photos-show-former-u-s-president-barack-obama-with-a-man-seated-in-a-wheelchair-that-has-a-robotic-arm-mounted-to-it-the-f.jpg?id=65504806&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">BCI user Nathan Copeland used a robotic arm to get a fist bump from then-President Barack Obama in 2016. 
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Jim Watson/AFP/Getty Images </small></p><p>The outside world often underestimates those little wins, says <a href="https://blackrockneurotech.com/insights/nathan-copeland-bci-pioneer/" target="_blank">Nathan Copeland</a>, who holds the record for the longest functional brain implant. After breaking his neck in a car accident in 2004, he joined a University of Pittsburgh BCI trial in 2015 and has since used the device to control both computers and a robotic arm.</p><p>After he uploaded a <a href="https://www.reddit.com/r/ffxiv/comments/dn1thj/i_thought_some_of_you_might_like_this_video_of_me/" target="_blank">video to Reddit</a> of himself playing <em><em>Final Fantasy XIV</em></em>, one commenter criticized him for not using his device for more practical tasks. Copeland says people don’t understand that those lighthearted activities also matter. “A lot of tasks that people think are mundane or frivolous are probably the tasks that have the most impact on someone that can’t do them,” he says. 
“Agency and freedom of expression, I think, are the things that impact a person’s life the most.”</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="49f2951c7484b0262253be4677639333" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/WjNHkRH0Dus?rel=0&start=90" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span><small class="image-media media-caption" placeholder="Add Photo Caption...">Nathan Copeland plays <i>Final Fantasy XIV</i> using his brain implant to control the game character.</small></p><h2>When Brain Implants Become Life-Changing </h2><p>This perspective resonates with Neuralink’s first user, <a href="https://newmobility.com/noland-arbaughs-life-as-the-first-neuralink-recipient/" target="_blank">Noland Arbaugh</a>—paralyzed from the neck down after a swimming accident in 2016. After receiving his implant in January 2024, he was able to control a cursor within minutes of the device being switched on. A few days later, the engineers let him play the video game <em>Civilization VI</em>, and the technology’s potential suddenly felt real. “I played it for 8 hours or 12 hours straight,” he says. “It made me feel so independent and so free.”</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A man seated in a wheelchair looks at the screen of a laptop that’s mounted on his wheelchair.  
" class="rm-shortcode" data-rm-shortcode-id="30ce199de3390779d6767954025723e9" data-rm-shortcode-name="rebelmouse-image" id="a9d03" loading="lazy" src="https://spectrum.ieee.org/media-library/a-man-seated-in-a-wheelchair-looks-at-the-screen-of-a-laptop-that-u2019s-mounted-on-his-wheelchair.jpg?id=65504815&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Before receiving his Neuralink implant, Noland Arbaugh used mouth-operated devices to control a computer. He says the BCI is more reliable and enables him to do many more things on his own.  </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Rebecca Noble/The New York Times/Redux </small></p><p>But the technology is also providing more practical benefits. Before his implant, Arbaugh relied on a mouth-held typing stick and a mouth-controlled joystick called a quadstick, which uses sip-or-puff sensors to issue commands. But the fiddliness of this equipment required constant caregiver support. The Neuralink implant has dramatically increased the number of things he can do independently. He says he finds great value in not needing his family “to come in and help me 100 times a day.”</p><p>For <a href="https://www.als.org/blog/advances-brain-computer-interface-technology-help-one-man-find-his-voice" target="_blank">Casey Harrell</a>, the technology has been even more transformative. Diagnosed with ALS in 2020, the climate activist had just welcomed a baby daughter and was in the midst of a major campaign, pressuring a financial firm to divest from companies that had poor environmental records.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Person in a wheelchair outdoors, surrounded by green foliage and soft sunlight." 
class="rm-shortcode" data-rm-shortcode-id="041a1f40b02e5d01a72d117a237634d5" data-rm-shortcode-name="rebelmouse-image" id="45c80" loading="lazy" src="https://spectrum.ieee.org/media-library/person-in-a-wheelchair-outdoors-surrounded-by-green-foliage-and-soft-sunlight.jpg?id=65504832&width=980"/></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Bald head with wired brain-computer interface sensors attached in front of a monitor" class="rm-shortcode" data-rm-shortcode-id="1cb7a1d971cd5ac70f674874ee93d27e" data-rm-shortcode-name="rebelmouse-image" id="b377a" loading="lazy" src="https://spectrum.ieee.org/media-library/bald-head-with-wired-brain-computer-interface-sensors-attached-in-front-of-a-monitor.jpg?id=65504831&width=980"/></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Person using a brain-computer interface to control text on a monitor." class="rm-shortcode" data-rm-shortcode-id="4fbce330d12387f5661b1d6d9badcc55" data-rm-shortcode-name="rebelmouse-image" id="3940e" loading="lazy" src="https://spectrum.ieee.org/media-library/person-using-a-brain-computer-interface-to-control-text-on-a-monitor.jpg?id=65504835&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Casey Harrell was able to communicate again within 30 minutes of his BCI being switched on. The device translates his neural signals quickly enough for him to hold conversations. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Ian Bates/The New York Times/Redux </small></p><p>“Every morning we’d wake up and there’d be a new thing he couldn’t do, a new part of his body that didn’t work,” says his wife, Levana Saxon. 
Most alarming was his rapid loss of speech, which, among other things, left him unable to indicate when he was in pain. Then a relative alerted him to a <a href="https://health.ucdavis.edu/news/headlines/new-brain-computer-interface-allows-man-with-als-to-speak-again/2024/08" target="_blank">clinical trial</a> at the University of California, Davis, using BCIs to restore speech. He immediately signed up.</p><p>The device, implanted in July 2023, records from the brain region that controls muscles involved in talking and translates these signals into instructions for a voice synthesizer. Within 30 minutes of it being switched on, Harrell could communicate again. “I was absolutely overwhelmed with the thought of how this would impact my life and allow me to talk to my family and friends and better interact with my daughter,” he says. “It just was so overwhelming that I began to cry.”</p><p>While earlier assistive technology limited him to short, direct commands, Harrell says the BCI is fast enough that he can hold a proper conversation, and he’s been able to resume work part-time.</p><h2>What’s Holding BCI Technology Back? </h2><p>BCI technology still has limits. Most trial participants using Blackrock Neurotech implants can operate their devices only in the lab because the systems rely on wired connections and racks of computer hardware. Some users, including Copeland and Harrell, have had the equipment installed at home, but they still can’t leave the house with it. “That would be a big unlock if I was able to do so,” says Harrell.</p><p>The academic nature of many trials creates additional constraints. Pressure to publish and secure funding pushes researchers to demonstrate peak performance on narrow tasks rather than build more versatile and reliable systems, says <a href="https://utrecht-bci.nl/mariska-vansteensel/" target="_blank">Mariska Vansteensel</a>, who runs BCI studies at the University Medical Center Utrecht in the Netherlands. 
She says that investigating the technology’s limits or repeating an experiment in new patients is “less rewarded in terms of funding.”</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="c85200fe193b095c24a91c1a07bad088" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/1cqRFU0jx1k?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">In a clinical trial, Scott Imbrie uses a BCI to control a robotic arm, using signals from his motor cortex to make it move a block. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">University of Chicago</small></p><p>One of Imbrie’s biggest frustrations is the rapid turnover in experiments. Just as he begins to get proficient at one task, he’s asked to switch to the next task. Study designs also mean that much of the users’ time is spent on mundane tasks required to fine-tune the system.</p><p>Perhaps the biggest issue is that trials are often time-limited. That’s partly because scar tissue from the body’s immune response to the implant can gradually degrade signal quality. 
But constraints on funding and researcher availability can also make it impossible for users to keep using their BCIs after their trials end, even when the technology is still functional.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="236fdcf6ef676d6d58154c51ea2ccd07" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/60fAjaRfwnU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">Ian Burkhart’s BCI enables him to grasp objects, pour from a bottle, and swipe a credit card.</small></p><p>Burkhart has firsthand experience. His trial was extended, but the implant was eventually removed after he got an infection. He always knew the trial would end, but it was nonetheless challenging. “It was a little bit of a tease where I got to see the capability of the restoration of function,” he says. “Now I’m just back to where I was.”</p><h2>The Push to Commercialize BCIs </h2><p>Progress is being made in transitioning the technology from experimental research devices to fully fledged medical products that could help users in their everyday lives. Most academic BCI research has relied on Blackrock Neurotech’s Utah Arrays, which typically feature 96 needlelike electrodes that penetrate the brain’s surface. The implant is connected to a skull-mounted pedestal that’s wired to external hardware.
But some of the newer devices are sleeker and less invasive.</p><p>Neuralink’s implant houses its electronics and rechargeable battery in a coin-size unit connected to flexible electrode threads inserted into the brain by a <a href="https://www.youtube.com/watch?v=wLJKOUzFOEU" target="_blank">robotic “sewing machine.”</a> The implant is mounted in a hole cut into the skull and charges and transfers data wirelessly. <a href="https://spectrum.ieee.org/synchron-bci" target="_self">Synchron takes a different approach</a>, threading a stent-like implant through blood vessels into the motor cortex. This “<a href="https://synchron.com/platform" target="_blank">stentrode</a>” connects by wire to a unit in the chest that powers the implant and transmits data wirelessly.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Bearded person in red T-shirt using a laptop at a kitchen table" class="rm-shortcode" data-rm-shortcode-id="357a50573a7a53f991fe357924b7fa76" data-rm-shortcode-name="rebelmouse-image" id="62405" loading="lazy" src="https://spectrum.ieee.org/media-library/bearded-person-in-red-t-u2011shirt-using-a-laptop-at-a-kitchen-table.jpg?id=65504912&width=980"/></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Man using a large on-screen keyboard to type messages on a tablet computer" class="rm-shortcode" data-rm-shortcode-id="7dd8055dbf7028c4cd03aedb0b1a55c7" data-rm-shortcode-name="rebelmouse-image" id="c7942" loading="lazy" src="https://spectrum.ieee.org/media-library/man-using-a-large-on-screen-keyboard-to-type-messages-on-a-tablet-computer.jpg?id=65504911&width=980"/> <small class="image-media media-caption" placeholder="Add Photo
Caption...">Rodney Gorham can use his Synchron implant to control not just a computer, but also smart devices in his home like an air conditioner, fan, and smart speaker. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Rodney Decker </small></p><p>Neuralink’s decoder runs on a laptop, while Synchron deploys a smartphone-size signal processing unit as a wireless bridge to the user’s devices, which allows them to use their implants at home and on the move. The companies have also developed adaptive decoders that use machine learning to adjust to neural drift on the fly, reducing the need for recalibration.</p><p>Making these devices truly user-friendly will require technology that can interpret user context, says <a href="https://www.linkedin.com/in/kurt-haggstrom/" target="_blank">Kurt Haggstrom</a>, Synchron’s chief commercial officer—including mood, attention levels, and environmental factors like background noise and location. This approach will require AI that analyzes neural signals alongside other data streams such as audio and visual input.</p><p>Last year, Synchron took a first step by pairing its implant with an <a href="https://spectrum.ieee.org/apple-vision-pro" target="_self">Apple Vision Pro headset</a>. 
When trial participant <a href="https://www.rdworldonline.com/watch-rodney-a-paralyzed-man-control-his-home-with-tech-from-synchron-nvidia-and-apple/" target="_blank">Rodney Gorham</a> looked at devices such as a fan, a smart speaker, and an air conditioner, the headset overlaid a menu that enabled him to adjust the device’s settings using his implant.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="4d29290cc0251118e8c7c0ed46886e43" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/c-_OVgQ5q7k?rel=0&start=72" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">Rodney Gorham uses his Synchron implant to turn on music, feed his dog, and more. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Synchron BCI</small></p><p>Another way to reduce cognitive load is to detect high-order signals of intent in neural data rather than low-level motor commands, says <a href="https://www.linkedin.com/in/florian-solzbacher-aa971015/" target="_blank">Florian Solzbacher</a>, cofounder and chief scientific officer of Blackrock Neurotech. For instance, rather than manually navigating to an email app and typing, the user could simply think about sending an email and the system would then open it with content already prepopulated, he says.</p><p>Durability may prove a thornier problem to solve, UChicago’s Downey says. Current implants last around a decade—well short of a lifelong solution. And with limited real estate in the brain, replacement is only possible once or twice, he says.</p><p>Rapid technological progress also raises difficult decisions about whether to get a BCI implant now or wait for a more advanced device. 
This was a major concern for Gorham’s wife, Caroline. “I was hesitant. I didn’t want him to go on the trial but maybe a future one,” she says. “It was my fear of missing out on future upgrades.”</p><h2>Will Brain Implants Ever Become Consumer Tech? </h2><p>Some executives have raised the prospect of BCIs eventually becoming consumer devices. Neuralink founder <a href="https://spectrum.ieee.org/tag/elon-musk" target="_self">Elon Musk</a> has been particularly vocal, suggesting that the company’s implants could <a href="https://x.com/elonmusk/status/1802517673584341082?" target="_blank">replace smartphones</a>, let people <a href="https://www.theregister.com/2022/01/31/neuralink_job_ad/" target="_blank">save and replay memories</a>, or even achieve <a href="https://www.businessinsider.com/neuralink" target="_blank">“symbiosis” with AI</a>.</p><p>This kind of talk inspires mixed feelings in users. The hype brings visibility and funding, says Beggin, but could divert attention from medical users’ needs. Copeland worries that consumer branding could strip the devices of insurance coverage and that rising demand may make it harder to access qualified surgeons.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="A man, seen in profile, sits in a wheelchair. " class="rm-shortcode" data-rm-shortcode-id="e5928c73c49d9ab511ccc4c1187c5148" data-rm-shortcode-name="rebelmouse-image" id="437c0" loading="lazy" src="https://spectrum.ieee.org/media-library/a-man-seen-in-profile-sits-in-a-wheelchair.jpg?id=65504925&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Noland Arbaugh, the first recipient of Neuralink’s BCI, says that using the implant to control a computer made him feel independent and free. 
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Steve Craft/Guardian/eyevine/Redux </small></p><p>There are also concerns about how data collected by BCI companies will be handled if the devices go mainstream. As a trial participant, Arbaugh says he’s comfortable signing away his data rights to advance the technology, but he thinks stronger legal protections will be needed in the future. “Does that data still belong to Neuralink? Does it belong to each person? And can that data be sold?” he asks.</p><p>Blackrock’s Solzbacher says the company remains focused on the medical applications of the technology. But he also believes it is building a “universal interface to any kind of a computerized system” that may have broader applications in the future. And he says the company owes it to users not to limit them to a bare-bones assistive technology. “Why would somebody who’s got a medical condition want to get less than something that somebody who’s able-bodied would possibly also take?” says Solzbacher.</p><p>The ever-optimistic Imbrie heartily agrees. Medical devices are invariably expensive, he says, but targeting consumer applications could push companies to keep devices simple and affordable while continuing to add features. “I truly believe that making it a consumer-available product will just enhance the product’s capabilities for the medical field,” he says.</p><p>Imbrie is on a mission to refocus the conversation around BCIs on the positives. While concerns about risks are valid, he worries that the alarming language often used to describe brain implants discourages people from volunteering for trials that could help them.</p><p>“I remember laying there in the bed and not being able to move,” he says, “and it was really dehumanizing having to ask someone to do everything for you. 
As humans, we want to be independent.” <span class="ieee-end-mark"></span></p>]]></description><pubDate>Tue, 14 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/bci-user-experience</guid><category>Bci</category><category>Clinical-trials</category><category>Brain-computer-interfaces</category><category>User-experience</category><category>Brain-implants</category><category>Assistive-technology</category><dc:creator>Edd Gent</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-close-up-shows-a-man-seated-in-a-wheelchair-attached-to-the-top-of-his-head-are-two-devices-each-with-a-cable-extending-away.jpg?id=65504719&amp;width=980"></media:content></item><item><title>Squishy Photonic Switches Promise Fast Low-Power Logic</title><link>https://spectrum.ieee.org/soft-photonics</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/illustration-of-a-micropipette-piercing-through-a-hemisphere-shaped-membrane-to-inject-a-droplet-at-its-core.jpg?id=65506297&width=1200&height=800&coordinates=0%2C83%2C0%2C84"/><br/><br/><p><span>Photonic devices, which rely on light instead of electricity, have the potential to be faster and more energy efficient than today’s electronics. They also present a unique opportunity to develop devices using <a href="https://spectrum.ieee.org/soft-robot-actuators-bugs" target="_self">soft materials</a>, such as polymers and gels, which are poor conductors of electricity but are easier to manufacture and more environmentally friendly. The development of these potentially squishy, <a href="https://spectrum.ieee.org/wearable-sensors" target="_self">flexible photonics</a>, however, requires the ability to manipulate light using only light, not electricity.</span></p><p>In soft matter, that’s been done primarily by changing the physical properties of optical materials or by using intense light pulses to change the direction of light. Now, an international team of scientists has developed a new way of controlling light with light using very low light intensities and without changing any of the physical properties of materials. </p><p><a href="https://musevic.fmf.uni-lj.si/" target="_blank"><span>Igor Muševič</span></a>, a professor of physics at the University of Ljubljana who led the project, says that he first got the idea for the device while at a conference in San Francisco, listening to a talk by <a href="https://www.nobelprize.org/prizes/chemistry/2014/hell/facts/" target="_blank">Stefan W. Hell </a>about stimulated emission depletion (STED) microscopy. The imaging technique, for which Hell won a <a href="https://www.nobelprize.org/prizes/chemistry/2014/summary/" target="_blank">Nobel Prize in Chemistry in 2014</a>, uses two lasers to produce an extremely small light beam to scan objects. 
“When I saw this, I said, This is manipulation of light by light, right?” Muševič recalls.</p><p><span>His realization inspired a device into which a laser pulse is fired. Whether this beam makes it out of the device depends on whether a second pulse is fired less than a nanosecond afterward.</span></p><h2>A liquid crystal photonic switch</h2><p><span>The device consists of a spherically shaped bead of liquid crystal, held in shape by its elastic material properties and the forces between its molecules, infused with a fluorescent dye and trapped between four upright cone-shaped polymer structures that guide light in and out of the device. When a laser pulse is sent through one of the four polymer waveguides, the light is quickly transferred into the liquid crystal, exciting the fluorescent dye. In a process known as whispering gallery mode resonance, the photons inside the liquid crystal are reflected back inside each time they hit the liquid’s spherical surface. The result is that light circulates inside the cavity until it is eventually reflected into one of the waveguides, which then emits the photons out in a laser beam. </span></p><p>The team realized that sending a second laser pulse of a different color into the waveguides before the liquid crystal started emitting light from the first laser pulse resulted in stimulated emission of the excited dye molecules. The photons from the second laser pulse, which had to be fired into the waveguides after the first laser pulse, interact with the already-excited dye molecules. The interaction causes the dye to emit photons identical to those in the second pulse while depleting the energy from the first pulse. The second laser beam, called the STED beam, is amplified by the process, while the light from the first pulse is so diminished that it isn’t emitted at all. 
Because the outcome of the first laser pulse could be controlled using the second laser pulse, the team had successfully demonstrated the control of light by light.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="0cb7a5df3d8c2896d2f429edfd746f29" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/mImgOT2zJ0I?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Vandna Sharma, Jaka Zaplotnik, et al.</small> </p><p><span>According to the Ljubljana team, the energy efficiency of the liquid crystal approach is much better than previous soft-matter techniques, which had typically involved using intense light fields to change material properties of the soft matter, such as the index of refraction. The new method reduces the energy needed by more than a factor of a hundred. Because the STED laser pulse circulates repeatedly in the crystal, a single photon can deplete many dye molecules of the energy from the first laser pulse.</span> </p><p><a href="https://ravnik.fmf.uni-lj.si/" target="_blank">Miha Ravnik</a>, a theoretical physicist also at the University of Ljubljana who worked on the project, explains that control of light by light is essential in soft-matter photonic logic gates. “You can very much control when [light] is generated and in which direction,” Ravnik says of the light shined into the polymer waveguides. “And this gives you, then, this capability that you create logical operations with light.”</p><p>Aside from its potential in photonic logical circuits, the team’s approach presents several technical advantages over photonics made from silicon or other hard materials, Muševič says. For example, using soft matter greatly simplifies the manufacturing process. 
The liquid crystal in the team’s device can be inserted in less than a second, but manufacturing a similar structure with hard materials is difficult. Additionally, soft-matter devices can be manufactured at much lower temperatures than silicon and other hard materials. Muševič also points out that soft matter presents an opportunity to experiment with the geometry of the device. With liquid crystals “you can make many different kinds of cavities,” says Muševič. “You have, I would say, a lot of engineering space.”</p><p>Ravnik is excited about the potential of the team’s breakthrough, particularly as a step toward <a href="https://spectrum.ieee.org/generative-optical-ai-nature-ucla" target="_self">photonic computing</a> and even photonic neural networks. But he recognizes that these developments are far down the line. “There’s no way this technology can compete with current neural network implementation at all,” he admits. Still, the possibilities are tantalizing. “The energy losses are predicted to be extremely low, the speeds for calculation extremely high.”</p>]]></description><pubDate>Mon, 13 Apr 2026 12:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/soft-photonics</guid><category>Flexible-circuits</category><category>Photonics</category><category>Optical-switch</category><dc:creator>Velvet Wu</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/illustration-of-a-micropipette-piercing-through-a-hemisphere-shaped-membrane-to-inject-a-droplet-at-its-core.jpg?id=65506297&amp;width=980"></media:content></item><item><title>Working With More Experienced Engineers Can Fast-Track Career Growth</title><link>https://spectrum.ieee.org/using-feedback-engineering</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-illustration-of-stylized-people-wearing-business-casual-clothing.webp?id=65257424&width=1200&height=800&coordinates=0%2C50%2C0%2C50"/><br/><br/><p><em>This article is crossposted from </em>IEEE Spectrum<em>’s careers newsletter. <a href="https://engage.ieee.org/Career-Alert-Sign-Up.html" rel="noopener noreferrer" target="_blank">Sign up now</a> to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company <a href="https://www.parsity.io/" rel="noopener noreferrer" target="_blank">Parsity</a>, and delivered to your inbox for free!</em></p><h2>The Worst Engineer in the Room</h2><p>My salary doubled. My confidence tanked. </p><p>That’s what happened when I joined a five-person startup in San Francisco in my third year as a software engineer. Two of the founders had been recognized in Forbes 30 Under 30. The team was exceptional by any measure.</p><p>On my first day, someone made a joke about Dijkstra’s algorithm. Everyone laughed. I smiled along, then looked it up afterward so I could understand why it was funny. Dijkstra’s algorithm finds the shortest path between two points—the math underlying GPS navigation. It’s a foundational concept in virtually every formal computer science curriculum. I had never encountered it.</p><p>That moment reflected a broader pattern. Conversations about system design and tradeoffs often felt just out of reach. I could follow parts of them, but not enough to contribute meaningfully.</p><p>I was mostly self-taught. Wide coverage, shallow roots. The engineers around me had roots. 
You could feel it in how they reasoned through problems, how they talked about tradeoffs, how they debugged with patience instead of pure panic.</p><h2>The Advice That Sounds Good Until You’re Living It</h2><p>You’ve heard the phrase: “If you’re the smartest person in the room, you’re in the wrong room.”</p><p>It sounds aspirational. What nobody tells you is what it actually feels like to be in that room. It feels like barely following system design conversations. Like nodding along to discussions you can only partially decode. Like shipping solutions through trial and error and hoping nobody looks too closely.</p><p>Being the weakest engineer in the room is genuinely uncomfortable. It surfaces every gap. And if you’re not careful, it pushes you in exactly the wrong direction.</p><p>My instinct was to make myself smaller. On a team of five, every voice mattered. I stopped offering mine. I rushed toward working solutions without real understanding, hoping velocity would compensate for depth.</p><p>I was working harder and, at the same time, I was not improving.</p><p>The turning point came when one of the most senior engineers left. Before departing, he told me it was difficult to work with me because I lacked foundational programming knowledge, listing out the concepts he saw me struggle with.</p><p>For the first time, what had felt like vague inadequacy became something specific.</p><h2>What the Cliché Misses</h2><p>Proximity to stronger engineers is not sufficient on its own. You won’t absorb their skill through osmosis. The engineers who thrive when they’re outmatched are not the ones who wait for confidence to arrive. They treat the discomfort as diagnostic information.</p><p>What can they answer that I can’t? What do they see in a system that I’m missing?</p><p>I defined a clear picture of the engineer I wanted to become and compared it to where I was. I wrote down what I did not know. 
I identified how I would close each gap with books, tutorials, and small projects. I asked for recommendations from the same engineer who gave me the hard feedback.</p><p>I figured out the gaps. Then the bridges. Then I worked through each of them.</p><p>Over time, conversations became clearer. Debugging became more systematic. I started contributing meaningfully rather than just executing tasks.</p><h2>The Other Room Nobody Warns You About</h2><p>There’s a less-obvious version of this same problem: when you’re the strongest engineer in the room. </p><p>It can feel rewarding. Less friction, more validation. But there’s also less growth. When you’re at the ceiling, there’s no external pressure to raise your own floor. The feedback loops that sharpen judgment go quiet. Some engineers spend years there without noticing. They’re good. They’re comfortable. They stop getting better.</p><p>Both rooms carry risk. One threatens your confidence. The other threatens your trajectory.</p><p>Being the weakest engineer in a strong room is an advantage, but only if you treat it like one. It gives you a clear benchmark. But the room doesn’t do the work for you. You have to name the gaps, build a plan, and follow through.</p><p>And if you ever find yourself in the other room, where you’re clearly the strongest, pay attention to how long you’ve been there.</p><p>Both rooms are trying to tell you something.</p><p>—Brian</p><h2><a href="https://spectrum.ieee.org/us-engineering-phd-enrollment-drop" target="_self">Are U.S. Engineering Ph.D. Programs Losing Students?</a></h2><p>Not every engineer has a doctorate, but Ph.D. engineers are an essential part of the workforce, researching and designing tomorrow’s high-tech products and systems. In the United States, early signs are emerging that Ph.D. programs in electrical engineering and related fields may be shrinking. Political and economic uncertainty means some universities are now seeing smaller applicant pools and graduate cohorts. 
</p><p><a href="https://spectrum.ieee.org/us-engineering-phd-enrollment-drop" target="_blank">Read more here. </a></p><h2><a href="https://spectrum.ieee.org/ai-community-engagement" target="_self">What Happens When You Host an AI Cafe</a></h2><p>Last November, three professors at Auburn University in Alabama hosted a gathering at a coffee shop to confront students’ concerns about AI. The event, which they call an “AI Café,” was meant to create an environment “where scholars engage their communities in genuine dialogue about AI. Not to lecture about technical capabilities, but to listen, learn, and co-create a vision for AI that serves the public interest.” In a guest article, they share what they learned at the event and tips for starting your own AI Café. </p><p><a href="https://spectrum.ieee.org/ai-community-engagement" target="_blank">Read more here. </a></p><h2><a href="https://newsletter.pragmaticengineer.com/p/what-is-inference-engineering" rel="noopener noreferrer" target="_blank">What Is Inference Engineering?</a></h2><p>Inference, the process of running a trained AI model on new data, is increasingly <a href="https://spectrum.ieee.org/nvidia-groq-3" target="_self">becoming a focus</a> in the world of AI engineering. The growth of open LLMs means that more engineers can now tweak the models to perform better at inference. Given this trend, a recent issue of the Substack “The Pragmatic Engineer” does a deep dive on inference engineering—what it is, when it’s needed, and how to do it.</p><p><a href="https://newsletter.pragmaticengineer.com/p/what-is-inference-engineering" target="_blank">Read more here. 
</a></p>]]></description><pubDate>Fri, 10 Apr 2026 18:49:00 +0000</pubDate><guid>https://spectrum.ieee.org/using-feedback-engineering</guid><category>Careers</category><category>Careers-newsletter</category><dc:creator>Brian Jenney</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-illustration-of-stylized-people-wearing-business-casual-clothing.webp?id=65257424&amp;width=980"></media:content></item><item><title>Remembering Gus Gaynor: A Devoted IEEE Volunteer</title><link>https://spectrum.ieee.org/remembering-gus-gaynor</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/black-and-white-photograph-of-a-white-high-school-boy-lowering-a-radio-systems-needle-onto-a-vinyl-record.jpg?id=65492955&width=1200&height=800&coordinates=0%2C83%2C0%2C84"/><br/><br/><p><a href="https://life.ieee.org/an-amazing-career-gerard-gus-gaynor/" rel="noopener noreferrer" target="_blank">Gerard “Gus” Gaynor</a>, a long-serving IEEE volunteer and former engineering director at <a href="https://www.3m.com/" rel="noopener noreferrer" target="_blank">3M</a>, died on 9 March. The IEEE Life Fellow was 104.</p><p>Readers of <a href="https://spectrum.ieee.org/the-institute/" target="_blank"><em>The Institute</em></a> might remember Gus from his 2022 profile: “<a href="https://spectrum.ieee.org/gus-gaynor-profile" target="_self">From Fixing Farm Equipment to Becoming a Director at 3M</a>.” Just last year, he and I coauthored two articles. One discusses <a href="https://spectrum.ieee.org/influence-your-career" target="_blank">how to leverage relationships to boost your career growth</a>. The other weighs the <a href="https://spectrum.ieee.org/management-versus-technical-track" target="_blank">pros and cons of pursuing a technical or managerial career path</a>. He was 103 years old then. How many IEEE members can claim a centenarian coauthor?</p><p>I first met Gus in 2009 at the <a href="https://technical-community-spotlight.ieee.org/what-is-the-ieee-technical-activities-board-tab/" rel="noopener noreferrer" target="_blank">IEEE Technical Activities Board</a> (TAB) meeting in San Juan, Puerto Rico. We sat together on the airplane on our way back to Minneapolis, our hometown. At home I told many of my friends about the remarkable person—who was 87 years young at the time—with whom I chatted during our six-hour flight.</p><p>A decade later, he and I met for lunch in Minneapolis. 
He drove himself to the restaurant, just asking for a hand to navigate the snowy sidewalk.</p><h2>A dedicated IEEE volunteer</h2><p>Gus’s involvement with IEEE predates the organization. He joined the <a href="https://ethw.org/IRE_History_1912-1963#History_of_the_Institute_of_Radio_Engineers_1912-1963" rel="noopener noreferrer" target="_blank">Institute of Radio Engineers</a>, a predecessor society, as a student member in 1942. Twenty years later he became an active IEEE volunteer.</p><p>He served on the TAB’s finance committee and the <a href="https://pspb.ieee.org/" rel="noopener noreferrer" target="_blank">Publications Services and Products Board</a>. He was president of the IEEE Engineering Management Society (now the <a href="https://www.ieee-tems.org/" rel="noopener noreferrer" target="_blank">Technology and Engineering Management Society</a> ), and he was the <a href="https://www.ieee-tems.org/publications-of-the-technology-management-council/" rel="noopener noreferrer" target="_blank">Technology Management Council</a>’s first president. He was the founding editor of <a href="https://ieeeusa.org/" rel="noopener noreferrer" target="_blank">IEEE-USA</a>’s online magazine <a href="https://ieeeusa.org/product/the-best-of-todays-engineer-on-innovation/" rel="noopener noreferrer" target="_blank"><em><em>Today’s Engineer</em></em></a>, which reported on government legislation and issues affecting U.S. members’ careers. The magazine is now available as the e-newsletter <a href="https://insight.ieeeusa.org/about/" rel="noopener noreferrer" target="_blank"><em>IEEE-USA InSight</em></a>.</p><p>He authored several books on technology management and other topics, published by IEEE-USA and IEEE-Wiley.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="An elderly white man smiling in a dress shirt against a background of bookshelves." 
class="rm-shortcode" data-rm-shortcode-id="438ad571c4c9c78266d24b251480a736" data-rm-shortcode-name="rebelmouse-image" id="d6fab" loading="lazy" src="https://spectrum.ieee.org/media-library/an-elderly-white-man-smiling-in-a-dress-shirt-against-a-background-of-bookshelves.jpg?id=65492995&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">IEEE Life Fellow Gerard “Gus” Gaynor died on 9 March.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">The Gaynor Family</small></p><p>Most recently, after the formation of TEMS in 2015, he became an active member of its executive committee. He served two terms as vice president of publications.</p><p>At 100 years old, he led the launch of a new publication, <a href="https://www.ieee-tems.org/ieee-tems-leadership-briefs/" rel="noopener noreferrer" target="_blank"><em><em>TEMS Leadership Briefs</em></em></a>, a novel short-format open-access publication aimed at technology leaders.</p><p>Gus, who is a former member of <em>The Institute</em>’s editorial advisory board, also worked with <a href="https://spectrum.ieee.org/u/kathy-pretz" target="_self">Kathy Pretz</a>, <em>The Institute’s</em> editor in chief, to start an ongoing series of TEMS-sponsored career-interest articles. He coauthored several of them.</p><p>Throughout his 64 years as an IEEE volunteer, he received several honors. They include IEEE EMS’s Engineering Manager of the Year Award, the IEEE TEMS Career Achievement Award, and the IEEE-USA <a href="https://ieeeusa.org/volunteers/awards-recognition/professionalism/mcclure/" target="_blank">McClure Citation of Honor</a>. 
In 2014 he was inducted into the <a href="https://www.ieee.org/about/tab-hall-of-honor" rel="noopener noreferrer" target="_blank">IEEE Technical Activities Board Hall of Honor</a>.</p><h2>A 25-year career at 3M</h2><p>Gus received a degree in electrical engineering in 1950 from the <a href="https://umich.edu/" rel="noopener noreferrer" target="_blank">University of Michigan</a> in Ann Arbor. He worked for several companies including <a href="https://en.wikipedia.org/wiki/Automatic_Electric" rel="noopener noreferrer" target="_blank">Automatic Electric</a> (now part of <a href="https://www.nokia.com/" rel="noopener noreferrer" target="_blank">Nokia</a>) and Johnson Farebox (now part of <a href="https://genfare.com/" rel="noopener noreferrer" target="_blank">Genfare</a>), before joining 3M in 1962.</p><p>During his successful 25-year career at 3M, he served as chief engineer for a division in Italy, established the innovation department, and led the design and installation of the company’s first computerized manufacturing facilities. He retired as director of engineering in 1987.</p><p>Last year, IEEE Life Fellow <a href="https://www.linkedin.com/in/michael-condry-79931a" rel="noopener noreferrer" target="_blank">Michael Condry</a>, a former TEMS president, organized a Zoom call with Gus and other leaders of the society to celebrate Gus’s 104th birthday. Gus looked well and was his usual upbeat self, telling everyone: “I’m good. Everything’s well. I can’t complain.”</p><p>Gus was married to <a href="https://www.washburn-mcreavy.com/m/obituaries/Shirley-Gaynor/" rel="noopener noreferrer" target="_blank">Shirley Margaret Karrels Gaynor</a>, who passed away in 2018. 
He lives on in the hearts and minds of his seven children, seven grandchildren, two great-grandchildren, and innumerable friends and IEEE colleagues.</p>]]></description><pubDate>Thu, 09 Apr 2026 18:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/remembering-gus-gaynor</guid><category>Ieee-member-news</category><category>In-memoriam</category><category>Tribute</category><category>Ieee-technology-and-engineering</category><category>Careers</category><category>Type-ti</category><dc:creator>Tariq Samad</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/black-and-white-photograph-of-a-white-high-school-boy-lowering-a-radio-systems-needle-onto-a-vinyl-record.jpg?id=65492955&amp;width=980"></media:content></item><item><title>GoZTASP: A Zero-Trust Platform for Governing Autonomous Systems at Mission Scale</title><link>https://content.knowledgehub.wiley.com/goztasp-a-zero-trust-platform-for-governing-autonomous-systems-at-mission-scale/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/technology-innovation-institute-logo-with-stylized-tii-and-curved-line.png?id=65498963&width=980"/><br/><br/><p>ZTASP is a mission-scale assurance and governance platform designed for autonomous systems operating in real-world environments. It integrates heterogeneous systems—including drones, robots, sensors, and human operators—into a unified zero-trust architecture. Through Secure Runtime Assurance (SRTA) and Secure Spatio-Temporal Reasoning (SSTR), ZTASP continuously verifies system integrity, enforces safety constraints, and enables resilient operation even under degraded conditions.</p><p>ZTASP has progressed beyond conceptual design, with operational validation at Technology Readiness Level (TRL) 7 in mission-critical environments. Core components, including Saluki secure flight controllers, have reached TRL 8 and are deployed in customer systems. While ZTASP was initially developed for high-consequence mission environments, the same assurance challenges are increasingly present across domains such as healthcare, transportation, and critical infrastructure.</p><p><span><a href="https://content.knowledgehub.wiley.com/goztasp-a-zero-trust-platform-for-governing-autonomous-systems-at-mission-scale/" target="_blank">Download this free whitepaper now!</a></span></p>]]></description><pubDate>Thu, 09 Apr 2026 15:06:39 +0000</pubDate><guid>https://content.knowledgehub.wiley.com/goztasp-a-zero-trust-platform-for-governing-autonomous-systems-at-mission-scale/</guid><category>Autonomous-systems</category><category>Drones</category><category>Sensors</category><category>Transportation</category><category>Type-whitepaper</category><dc:creator>Technology Innovation Institute</dc:creator><media:content medium="image" type="image/png" url="https://assets.rbl.ms/65498963/origin.png"></media:content></item><item><title>Chip Can Project Video the Size of a Grain of 
Sand</title><link>https://spectrum.ieee.org/mems-photonics</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-array-of-tiny-metallic-cantilevers-curving-away-from-the-surface-of-a-photonic-chip.jpg?id=65493217&width=1200&height=800&coordinates=156%2C0%2C156%2C0"/><br/><br/><p><span>By many estimates, quantum computers will need <a href="https://spectrum.ieee.org/neutral-atom-quantum-computing" target="_blank">millions of qubits </a>to realize their potential applications in cybersecurity, drug development, and other industries. The problem is, anyone who wants to simultaneously control millions of a certain kind of qubit runs into the challenge of controlling millions of laser beams. </span> </p><p><span>That’s exactly the challenge faced by scientists working on the <a href="https://www.mitre.org/resources/quantum-moonshot" target="_blank">MITRE Quantum Moonshot project</a>, which brought together scientists from MITRE, MIT, the University of Colorado at Boulder, and Sandia National Laboratories. The solution they developed came in the form of an image projection technology that they realized could also be the fix for a host of other challenges in augmented reality, biomedical imaging, and elsewhere. The device is a 1-square-millimeter photonic chip capable of projecting the Mona Lisa onto an area smaller than the size of two human egg <a href="https://spectrum.ieee.org/embryo-electrode-array" target="_blank">cells</a>. </span> </p><p><span>“When we started, we certainly never would have anticipated that we would be making a technology that might revolutionize imaging,” says Matt Eichenfield, one of the leaders of the Quantum Moonshot project, a collaborative research effort focused on developing a scalable, diamond-based quantum computer, and a professor of quantum engineering at the University of Colorado at Boulder. Each second, their chip can project 68.6 million individual spots of light, called scannable pixels to differentiate them from physical pixels. 
That’s more than 50 times the capability of previous technology, such as <a href="https://spectrum.ieee.org/mems-lidar" target="_blank">micro-electromechanical systems (MEMS) micromirror arrays</a>.</span></p><p> <span>“We have now made a scannable pixel that is at the absolute limit of what diffraction allows,” says <a href="https://www.linkedin.com/in/y-henry-wen-2b41979/" target="_blank">Henry Wen</a>, a visiting researcher at MIT and a photonics engineer at <a href="https://www.quera.com/" target="_blank">QuEra Computing</a>.</span></p><p>The chip’s distinguishing feature is an array of tiny microscale cantilevers, which curve away from the plane of the chip in response to voltage and act as miniature “ski jumps” for light. Light is channeled along the length of each cantilever via a waveguide and exits at its tip. The cantilevers contain a thin layer of aluminum nitride, a piezoelectric that expands or contracts under voltage, thus moving the micromachine up and down and enabling the array to scan beams of light over a two-dimensional area.</p><p>Despite the magnitude of the team’s achievement, Eichenfield says that the process of engineering the cantilevers was “pretty smooth.” Each cantilever is composed of a stack of several submicrometer layers of material and curls approximately 90 degrees out of the plane at rest. To achieve such a high curvature, the team took advantage of differences in the contraction and expansion of individual layers caused by physical stresses in the material resulting from the fabrication process. The materials are first deposited flat onto the chip. Then, a layer in the chip below the cantilever is removed, allowing the material stresses to take effect, releasing the cantilever from the chip and allowing it to curl out. 
The top layer of each cantilever also features a series of silicon dioxide bars running perpendicular to the waveguide, which keep the cantilever from curling along its width while also improving its lengthwise curvature.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="5525c992b93704c6dfdada2cd2c1d9c2" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/A4-ZqQTZauw?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">A micro-cantilever wiggles and waggles to project light in the right place.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Matt Saha, Y. Henry Wen, et al.</small></p><p>What was more of a challenge than engineering the chip itself was figuring out the details of actually making the chip project images and videos. Working out the process of synchronizing and timing the cantilevers’ motion and light beams to generate the right colors at the right time was a substantial effort, according to <a href="https://www.linkedin.com/in/agreenspon/" target="_blank">Andy Greenspon</a>, a researcher at MITRE who also worked on the project. Now, the team has successfully projected a variety of videos from a single cantilever, including clips from the movie <em><em><a href="https://www.youtube.com/watch?v=GPG3zSgm_Qo&list=PLnvfBuirq7alZgA0yGBnNObE5CeJTpUW4" target="_blank">A Charlie Brown Christmas</a></em></em>. </p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="A warped projection of the Mona Lisa." 
class="rm-shortcode" data-rm-shortcode-id="a4e5294e1a010872e545dbc18fb0e208" data-rm-shortcode-name="rebelmouse-image" id="a1039" loading="lazy" src="https://spectrum.ieee.org/media-library/a-warped-projection-of-the-mona-lisa.jpg?id=65493253&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The chip projected a roughly 125-micrometer image of the Mona Lisa.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..."><a href="https://www.nature.com/articles/s41586-025-10038-6" target="_blank">Matt Saha, Y. Henry Wen, et al.</a></small></p><p>Because the chip can project so many more spots in any given time interval than any previous beam scanners, it could also be used to control many more qubits in quantum computers. The Quantum Moonshot program’s mission is to build a quantum computer that can be scaled to millions of qubits. So clearly, it needs a scalable way of controlling each one, explains Wen. Instead of using one laser per qubit, the team realized that not every qubit needed to be controlled at every given moment. The chip’s ability to move light beams over a two-dimensional area would allow them to control all of the qubits with many fewer lasers. </p><p>Another process that Wen thinks the chip could improve is scanning objects for <a href="https://spectrum.ieee.org/3d-printed-linear-motor" target="_blank">3D printing</a>. Today, that typically involves using a single laser to scan over the entire surface of an object. The new chip, however, could potentially employ thousands of laser beams. “I think now you can take a process that would have taken hours and maybe bring it down to minutes,” says Wen. </p><p>Wen is also excited to explore the potential of different cantilever shapes. By changing the orientations of the bars perpendicular to the waveguide, the team has been able to make the cantilevers curl into helixes. 
Wen says that such unusual shapes could be useful in making a <a href="https://spectrum.ieee.org/neurobot-living-robot-nervous-system" target="_blank">lab-on-a-chip for cell biology</a> or <a href="https://spectrum.ieee.org/lab-on-a-chip-grippers" target="_blank">drug development</a>. “A lot of this stuff is imaging, scanning a laser across something, either to image it or to stimulate some response. And so we could have one of these ski jumps curl not just up, but actually curl back around, and then move around and scan over a sample,” Wen explains. “If you can imagine a structure that will be useful for you, we should try it.”</p>]]></description><pubDate>Thu, 09 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/mems-photonics</guid><category>Microarray</category><category>Digital-micromirror-device</category><category>Mems</category><category>Quantum-computers</category><category>Nitrogen-vacancy-defects-diamond</category><dc:creator>Velvet Wu</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-array-of-tiny-metallic-cantilevers-curving-away-from-the-surface-of-a-photonic-chip.jpg?id=65493217&amp;width=980"></media:content></item><item><title>Temple University Student Highlights IEEE Membership Perks</title><link>https://spectrum.ieee.org/temple-university-student-membership-perks</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-young-white-man-smiling-and-crossing-his-arms-in-a-workshop.jpg?id=65485944&width=1200&height=800&coordinates=62%2C0%2C63%2C0"/><br/><br/><p><a href="https://www.linkedin.com/in/kyle-mcginley112/" rel="noopener noreferrer" target="_blank">Kyle McGinley</a> graduated from high school in 2018 and, like many teenagers, was unsure what career he wanted to pursue. Recuperating from a sports injury led him to consider becoming a physical therapist for athletes. But he was skilled at repairing cars and fixing things around the house, so he thought about becoming an engineer, like his father.</p><p>McGinley, who lives in Sellersville, Pa., took some classes at <a href="https://www.mc3.edu/" rel="noopener noreferrer" target="_blank">Montgomery County Community College</a> in Blue Bell, while also working. During his years at the college, he took a variety of courses and was drawn to electrical engineering and computing, he says. He left to pursue a bachelor’s degree in electrical and computer engineering in Philadelphia at <a href="https://engineering.temple.edu/" rel="noopener noreferrer" target="_blank">Temple University</a>, where he is currently a junior.</p><h3>Kyle McGinley</h3><br/><p><strong>MEMBER GRADE</strong></p><p>Student member</p><p><strong>UNIVERSITY</strong></p><p>Temple, in Philadelphia</p><p><strong>MAJOR</strong></p><p>Electrical and computer engineering</p><p>The 26-year-old is also a teaching assistant and a research assistant at Temple. His research focuses on applying artificial intelligence to electrical hardware and robotics. 
He helped build an AI-integrated <a href="https://spectrum.ieee.org/honda-p2-robot-ieee-milestone" target="_self">android</a> companion to assist in-home caregivers.</p><p>Temple recognized McGinley’s efforts last year with its <a href="https://engineering.temple.edu/students/our-students/scholarships#:~:text=The%20College%20of%20Engineering%20at%20Temple%20University,credit%20hours%20in%20engineering%20or%20engineering%20technology" rel="noopener noreferrer" target="_blank">Butz scholarship</a>, which is awarded annually to an electrical and computer engineering undergraduate with an interest in software development, AI development systems, health education software, or a similar field.</p><p>An IEEE <a href="https://students.ieee.org/membership-benefits/" rel="noopener noreferrer" target="_blank">student member</a>, he is active within the university’s student branch.</p><p>“Dr. Brian Butz, the late professor emeritus, dedicated his research to artificial intelligence,” McGinley says. “The scholarship he and his wife Susan established helps allow students to pursue research in AI. Their generous donation has helped fund my research.” </p><h2>Building a robot aide</h2><p>McGinley is a teaching assistant for his digital circuit design course. In a class of 35 students, it can be a struggle for some to digest the professor’s words, he says.</p><p>“My job is to answer students’ questions if they are having problems following the professor’s lecture or are confused about any of the topics,” he says. 
“In the lab, I help students debug code or with hardware issues they have on the FPGA [field-programmable gate array] boards.”</p><p>He also conducts research for the university’s <a href="https://cfl-temple.github.io/" rel="noopener noreferrer" target="_blank">Computer Fusion Lab</a> under the supervision of <a href="https://engineering.temple.edu/directory/li-bai-lbai" rel="noopener noreferrer" target="_blank">IEEE Senior Member Li Bai</a>, a professor of electrical and computer engineering. McGinley writes software programs at the lab.</p><p class="pull-quote">“In school, they don’t teach you how to communicate with people. They only teach you how to remember stuff. Working well with people is one of the most underrated skills that a lot of students don’t understand is important.” </p><p>One such assignment was working with the <a href="https://cph.temple.edu/" target="_blank">Temple School of Social Work at the Barnett College of Public Health</a> to build a robot companion integrated with AI to assist individuals with <a href="https://spectrum.ieee.org/parkinsons-disease-pen" target="_self">Parkinson’s disease</a> and their caregivers.</p><p>“I realized the need for this with my grandmother, when she was taking care of my grandfather,” he says. “It was a lot for her, trying to remember everything.”</p><p>Using the latest software and hardware, he and three classmates rebuilt an older lab robot. They installed an operating system and used <a href="https://spectrum.ieee.org/top-programming-languages-2025" target="_self">Python and C++</a> for its control, perception, and behavior, he says. The students also incorporated Google’s <a href="https://gemini.google.com/" rel="noopener noreferrer" target="_blank">Gemini AI</a> to help with routine tasks such as scheduling medication reminders and setting alarms for upcoming doctor visits.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="A small humanoid robot standing on a kitchen counter." 
class="rm-shortcode" data-rm-shortcode-id="004f8c672a90c8b1cd738b7bc9d7f84a" data-rm-shortcode-name="rebelmouse-image" id="09e49" loading="lazy" src="https://spectrum.ieee.org/media-library/a-small-humanoid-robot-standing-on-a-kitchen-counter.jpg?id=65486403&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Kyle McGinley helped build an AI-integrated android to assist individuals with Parkinson’s disease and their caregivers.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Temple University of Public Health</small></p><p>The AI-integrated android was intended to assist, not replace, the caregivers by handling the mental load of remembering tasks, he says.</p><p>“This was one of the cool things that drew me to working in the robotics field,” he says. “Something where AI could be used to help caregivers do simple tasks.</p><p>“My career ambition after I graduate is to gain real-world experience in the engineering industry to learn skills outside of academia. Long term, I want to do project management or work in a technical lead role, with the primary goal of creating impactful projects that I can be proud of.”</p><h2>The benefits of a student branch</h2><p>McGinley joined <a href="https://www.instagram.com/temple_ieee/" target="_blank">Temple’s IEEE student branch</a> last year after one of his professors offered extra credit to students who did so. After attending meetings and participating in a few workshops, he found he really liked the club, he says, adding that he made new friends and enjoyed the camaraderie with other engineering students.</p><p>After the student branch’s board members got to know McGinley better, they asked him to become the club’s historian and manage its social media account. 
He also helps with event planning, creating and posting fliers, taking pictures, and shooting videos of the gatherings.</p><p>The branch has benefited from McGinley’s involvement, but he says it’s a two-way street.</p><p>“The biggest things I’ve learned are being held accountable and being reliable,” he says. “I am responsible for other people knowing what’s going on.”</p><p>Being an active volunteer has improved his communication skills, he says.</p><p>“Learning to clearly communicate with other people to make sure everyone is on the same page is important,” he says. “In school, they don’t teach you how to communicate with people. They only teach you how to remember stuff. Working well with people is one of the most underrated skills that a lot of students don’t understand is important.”</p><p>He encourages students to join their <a href="https://students.ieee.org/student-branches/" target="_blank">university’s IEEE branch</a>.</p><p>“I know it can be scary because you might not know anyone, but it honestly can’t hurt you; it could actually benefit you,” he says. “Being active is going to help you with a lot of skills that you need.</p><p>“You’ll definitely get opportunities that you would have never known about, like a scholarship or working in the research lab. I would have never gotten these opportunities if I hadn’t shown up. 
Joining IEEE and being active is the best thing you can do for your career.”</p><p><em>This article was updated on 9 April 2026.</em></p>]]></description><pubDate>Tue, 07 Apr 2026 18:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/temple-university-student-membership-perks</guid><category>Robotics</category><category>Ieee-member-news</category><category>Artificial-intelligence</category><category>Careers</category><category>Student-members</category><category>Temple-university</category><category>Type-ti</category><dc:creator>Kathy Pretz</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-young-white-man-smiling-and-crossing-his-arms-in-a-workshop.jpg?id=65485944&amp;width=980"></media:content></item><item><title>Decentralized Training Can Help Solve AI’s Energy Woes</title><link>https://spectrum.ieee.org/decentralized-ai-training-2676670858</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/illustration-of-several-data-servers-interconnected-across-long-distances.jpg?id=65477795&width=1200&height=800&coordinates=156%2C0%2C156%2C0"/><br/><br/><p> <a href="https://spectrum.ieee.org/topic/artificial-intelligence/" target="_self">Artificial intelligence</a> harbors an enormous <a href="https://spectrum.ieee.org/topic/energy/" target="_self">energy</a> appetite. Such constant cravings are evident in the <a href="https://spectrum.ieee.org/ai-index-2025" target="_self">hefty carbon footprint</a> of the <a href="https://spectrum.ieee.org/tag/data-centers" target="_self">data centers</a> behind the AI boom and the steady increase over time of <a href="https://spectrum.ieee.org/tag/carbon-emissions" target="_self">carbon emissions</a> from training frontier <a href="https://spectrum.ieee.org/tag/ai-models" target="_self">AI models</a>.</p><p>No wonder big tech companies are warming up to <a href="https://spectrum.ieee.org/tag/nuclear-energy" target="_self">nuclear energy</a>, envisioning a future fueled by reliable, carbon-free sources. But while <a href="https://spectrum.ieee.org/nuclear-powered-data-center" target="_self">nuclear-powered data centers</a> might still be years away, some in the research and industry spheres are taking action right now to curb AI’s growing energy demands. They’re tackling training as one of the most energy-intensive phases in a model’s life cycle, focusing their efforts on decentralization.</p><p>Decentralization allocates model training across a network of independent nodes rather than relying on one platform or provider. It allows compute to go where the energy is—be it a dormant server sitting in a research lab or a computer in a <a href="https://spectrum.ieee.org/tag/solar-power" target="_self">solar-powered</a> home. 
Instead of constructing more data centers that require <a href="https://spectrum.ieee.org/tag/power-grid" target="_self">electric grids</a> to scale up their infrastructure and capacity, decentralization harnesses energy from existing sources, avoiding adding more power into the mix.</p><h2>Hardware in harmony</h2><p>Training AI models is a huge data center sport, synchronized across clusters of closely connected <a href="https://spectrum.ieee.org/tag/gpus" target="_self">GPUs</a>. But as <a href="https://spectrum.ieee.org/mlperf-trends" target="_self">hardware improvements struggle to keep up</a> with the swift rise in the size of <a href="https://spectrum.ieee.org/tag/large-language-models" target="_self">large language models</a>, even massive single data centers are no longer cutting it.</p><p>Tech firms are turning to the pooled power of multiple data centers—no matter their location. <a href="https://spectrum.ieee.org/tag/nvidia" target="_self">Nvidia</a>, for instance, launched the <a href="https://developer.nvidia.com/blog/how-to-connect-distributed-data-centers-into-large-ai-factories-with-scale-across-networking/" target="_blank">Spectrum-XGS Ethernet for scale-across networking</a>, which “can deliver the performance needed for large-scale single job AI training and inference across geographically separated data centers.” Similarly, <a href="https://spectrum.ieee.org/tag/cisco" target="_self">Cisco</a> introduced its <a href="https://blogs.cisco.com/sp/the-new-benchmark-for-distributed-ai-networking" target="_blank">8223 router</a> designed to “connect geographically dispersed AI clusters.”</p><p>Other companies are harvesting idle compute in <a href="https://spectrum.ieee.org/tag/servers" target="_self">servers</a>, sparking the emergence of a <a href="https://spectrum.ieee.org/gpu-as-a-service" target="_self">GPU-as-a-Service</a> business model. 
Take <a href="https://akash.network/" rel="noopener noreferrer" target="_blank">Akash Network</a>, a peer-to-peer <a href="https://spectrum.ieee.org/tag/cloud-computing" target="_self">cloud computing</a> marketplace that bills itself as the “Airbnb for data centers.” Those with unused or underused GPUs in offices and smaller data centers register as providers, while those in need of computing power join as tenants, choosing among providers and renting their GPUs.</p><p>“If you look at [AI] training today, it’s very dependent on the latest and greatest GPUs,” says Akash cofounder and CEO <a href="https://www.linkedin.com/in/gosuri" rel="noopener noreferrer" target="_blank">Greg Osuri</a>. “The world is transitioning, fortunately, from only relying on large, high-density GPUs to now considering smaller GPUs.”</p><h2>Software in sync</h2><p>In addition to orchestrating the <a href="https://spectrum.ieee.org/tag/hardware" target="_self">hardware</a>, decentralized AI training also requires algorithmic changes on the <a href="https://spectrum.ieee.org/tag/software" target="_self">software</a> side. This is where <a href="https://cloud.google.com/discover/what-is-federated-learning" rel="noopener noreferrer" target="_blank">federated learning</a>, a form of distributed <a href="https://spectrum.ieee.org/tag/machine-learning" target="_self">machine learning</a>, comes in.</p><p>It starts with an initial version of a global AI model housed in a trusted entity such as a central server. 
The server distributes the model to participating organizations, which train it locally on their data and share only the model weights with the trusted entity, explains <a href="https://www.csail.mit.edu/person/lalana-kagal" rel="noopener noreferrer" target="_blank">Lalana Kagal</a>, a principal research scientist at <a href="https://www.csail.mit.edu/" rel="noopener noreferrer" target="_blank">MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL)</a> who leads the <a href="https://www.csail.mit.edu/research/decentralized-information-group-dig" rel="noopener noreferrer" target="_blank">Decentralized Information Group</a>. The trusted entity then aggregates the weights, often by averaging them, integrates them into the global model, and sends the updated model back to the participants. This collaborative training cycle repeats until the model is considered fully trained.</p><p>But there are drawbacks to distributing both data and computation. The constant back-and-forth exchanges of model weights, for instance, result in high communication costs. Fault tolerance is another issue.</p><p>“A big thing about AI is that every training step is not fault-tolerant,” Osuri says. “That means if one node goes down, you have to restore the whole batch again.”</p><p>To overcome these hurdles, researchers at <a href="https://deepmind.google/" rel="noopener noreferrer" target="_blank">Google DeepMind</a> developed <a href="https://arxiv.org/abs/2311.08105" rel="noopener noreferrer" target="_blank">DiLoCo</a>, a distributed low-communication optimization <a href="https://spectrum.ieee.org/tag/algorithms" target="_self">algorithm</a>. 
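<p>The cycle described above, local training followed by weight averaging at a trusted server, is federated averaging in its simplest form. A minimal NumPy sketch (the two-client setup, tiny linear model, and synthetic data are illustrative assumptions, not any cited system):</p>

```python
# Minimal sketch of federated averaging: each participant trains a copy
# of the global model locally and shares only weights, which the server
# averages back into the global model. The linear model and synthetic
# data below are illustrative stand-ins for a real AI model.
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.1, steps=20):
    """Plain gradient descent on mean squared error, run locally."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Two participants holding private data drawn from the same true model.
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

w_global = np.zeros(2)
for _ in range(10):  # ten collaborative training rounds
    # Each client trains the current global model on its own data...
    local_weights = [local_train(w_global, X, y) for X, y in clients]
    # ...and the server aggregates by averaging the returned weights.
    w_global = np.mean(local_weights, axis=0)

print(w_global)  # approaches w_true
```

<p>Only the weight vectors cross the network; the raw training data never leaves a client.</p>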
DiLoCo forms what <a href="https://spectrum.ieee.org/tag/google-deepmind" target="_self">Google DeepMind</a> research scientist <a href="https://arthurdouillard.com/" rel="noopener noreferrer" target="_blank">Arthur Douillard</a> calls “islands of compute,” where each island consists of a group of <a href="https://spectrum.ieee.org/tag/chips" target="_self">chips</a>. Different islands can use different chip types, but chips within an island must all be of the same type. Islands are decoupled from each other, and knowledge is synchronized between them only occasionally. This decoupling means islands can perform training steps independently without communicating as often, and chips can fail without interrupting the remaining healthy chips. However, the team’s experiments found diminishing performance beyond eight islands.</p><p>An improved version, dubbed <a href="https://arxiv.org/abs/2501.18512" rel="noopener noreferrer" target="_blank">Streaming DiLoCo</a>, further reduces the bandwidth requirement by synchronizing knowledge “in a streaming fashion across several steps and without stopping for communicating,” says Douillard. The mechanism is akin to watching a video even if it hasn’t been fully downloaded yet. “In Streaming DiLoCo, as you do computational work, the knowledge is being synchronized gradually in the background,” he adds.</p><p>AI development platform <a href="https://www.primeintellect.ai/" rel="noopener noreferrer" target="_blank">Prime Intellect</a> implemented a variant of the DiLoCo algorithm as a vital component of its 10-billion-parameter <a href="https://www.primeintellect.ai/blog/intellect-1-release" rel="noopener noreferrer" target="_blank">INTELLECT-1</a> model trained across five countries spanning three continents. 
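<p>The island pattern can be caricatured in a few lines: each island takes many cheap local steps on its own data and communicates only at rare outer synchronizations. The toy scalar objective below stands in for a real model, and real DiLoCo uses an outer optimizer with momentum rather than the plain averaging shown here:</p>

```python
# Toy illustration of DiLoCo-style training: islands of compute run many
# inner steps independently and synchronize only every H steps, cutting
# communication by roughly a factor of H versus per-step synchronization.
import random

random.seed(1)

def grad(w, x):
    # Gradient of the toy per-example loss (w - x)**2 / 2.
    return w - x

islands = [{"w": 0.0, "shard": [random.gauss(5.0, 1.0) for _ in range(100)]}
           for _ in range(4)]

H, lr = 50, 0.05  # inner steps between syncs; inner learning rate
syncs = 0
for outer in range(10):
    for isl in islands:     # would run in parallel across real islands
        for _ in range(H):  # H local steps with zero communication
            x = random.choice(isl["shard"])
            isl["w"] -= lr * grad(isl["w"], x)
    # Rare outer synchronization: average the islands' parameters.
    w_avg = sum(isl["w"] for isl in islands) / len(islands)
    for isl in islands:
        isl["w"] = w_avg
    syncs += 1

print(round(w_avg, 2), "after", syncs, "outer syncs")
```

<p>With H set to 50, the islands exchange parameters 10 times instead of 500, which is exactly the communication saving DiLoCo is after.</p>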
Upping the ante, <a href="https://0g.ai/" rel="noopener noreferrer" target="_blank">0G Labs</a>, makers of a decentralized AI <a href="https://spectrum.ieee.org/tag/operating-system" target="_self">operating system</a>, <a href="https://0g.ai/blog/worlds-first-distributed-100b-parameter-ai" rel="noopener noreferrer" target="_blank">adapted DiLoCo to train a 107-billion-parameter foundation model</a> across a network of segregated clusters with limited bandwidth. Meanwhile, the popular <a href="https://spectrum.ieee.org/tag/open-source" target="_self">open-source</a> <a href="https://spectrum.ieee.org/tag/deep-learning" target="_self">deep learning</a> framework <a href="https://pytorch.org/projects/pytorch/" rel="noopener noreferrer" target="_blank">PyTorch</a> included DiLoCo in its <a href="https://meta-pytorch.org/torchft/" rel="noopener noreferrer" target="_blank">repository of fault-tolerance techniques</a>.</p><p>“A lot of engineering has been done by the community to take our DiLoCo paper and integrate it in a system learning over consumer-grade internet,” Douillard says. “I’m very excited to see my research being useful.”</p><h2>A more energy-efficient way to train AI</h2><p>With hardware and software enhancements in place, decentralized AI training is primed to help solve AI’s energy problem. This approach offers the option of training models “in a cheaper, more resource-efficient, more energy-efficient way,” says MIT CSAIL’s Kagal.</p><p>Douillard admits that “training methods like DiLoCo are arguably more complex,” but adds that “they provide an interesting trade-off of system efficiency.” For instance, you can now use data centers in far-apart locations without needing to build ultrafast bandwidth in between. 
Douillard adds that fault tolerance is baked in because “the blast radius of a chip failing is limited to its island of compute.”</p><p>Even better, companies can take advantage of existing underutilized processing capacity rather than continuously building new energy-hungry data centers. Betting big on such an opportunity, Akash created its <a href="https://www.youtube.com/watch?v=zAj41xSNPeI" rel="noopener noreferrer" target="_blank">Starcluster program</a>. One of the program’s aims involves tapping into solar-powered homes and employing the desktops and laptops within them to train AI models. “We want to convert your home into a fully functional data center,” Osuri says.</p><p>Osuri acknowledges that participating in Starcluster will not be trivial. Beyond solar panels and devices equipped with consumer-grade GPUs, participants would also need to invest in <a href="https://spectrum.ieee.org/tag/batteries" target="_self">batteries</a> for backup power and redundant internet to prevent downtime. The Starcluster program is figuring out ways to package all these aspects together and make it easier for homeowners, including collaborating with industry partners to subsidize battery costs.</p><p>Back-end work is already underway to enable <a href="https://akash.network/roadmap/aep-60/" rel="noopener noreferrer" target="_blank">homes to participate as providers in the Akash Network</a>, and the team hopes to reach its target by 2027. The Starcluster program also envisions expanding into other solar-powered locations, such as schools and local community sites.</p><p>Decentralized AI training holds much promise to steer AI toward a more environmentally sustainable future. 
For Osuri, such potential lies in moving AI “to where the energy is instead of moving the energy to where AI is.”</p>]]></description><pubDate>Tue, 07 Apr 2026 14:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/decentralized-ai-training-2676670858</guid><category>Training</category><category>Ai-energy</category><category>Data-center</category><category>Large-language-models</category><dc:creator>Rina Diane Caballar</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/illustration-of-several-data-servers-interconnected-across-long-distances.jpg?id=65477795&amp;width=980"></media:content></item><item><title>Why AI Systems Fail Quietly</title><link>https://spectrum.ieee.org/ai-reliability</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-series-of-135-green-dots-slowly-transition-from-bright-green-to-black.png?id=65461614&width=1200&height=800&coordinates=73%2C0%2C74%2C0"/><br/><br/><p>In late-stage testing of a distributed AI platform, engineers sometimes encounter a perplexing situation: Every monitoring dashboard reads “healthy,” yet users report that the system’s decisions are slowly becoming wrong.</p><p>Engineers are trained to recognize <a href="https://spectrum.ieee.org/amp/it-management-software-failures-2674305315" target="_blank">failure</a> in familiar ways: a service crashes, a sensor stops responding, a constraint violation triggers a shutdown. Something breaks, and the system tells you. But a growing class of software failures looks very different. The system keeps running, logs appear normal, and monitoring dashboards stay green. Yet the system’s behavior quietly drifts away from what it was designed to do.</p><p>This pattern is becoming more common as autonomy spreads across software systems. Quiet failure is emerging as one of the defining engineering challenges of autonomous systems because correctness now depends on coordination, timing, and feedback across entire systems.</p><h2>When Systems Fail Without Breaking</h2><p>Consider a hypothetical enterprise AI assistant designed to summarize regulatory updates for financial analysts. The system retrieves documents from internal repositories, synthesizes them using a language model, and distributes summaries across internal channels.</p><p>Technically, everything works. The system retrieves valid documents, generates coherent summaries, and delivers them without issue.</p><p>But over time, something slips. Maybe an updated document repository isn’t added to the retrieval pipeline. The assistant keeps producing summaries that are coherent and internally consistent, but they’re increasingly based on obsolete information. 
Nothing crashes, no alerts fire, every component behaves as designed. The problem is that the overall result is wrong.</p><p>From the outside, the system looks operational. From the perspective of the organization relying on it, the system is quietly failing.</p><h2>The Limits of Traditional Observability</h2><p>One reason quiet failures are difficult to detect is that traditional systems measure the wrong signals. Operational dashboards track uptime, latency, and error rates, the core elements of modern <a href="https://www.ibm.com/think/topics/observability" target="_blank">observability</a>. These metrics are well-suited for transactional applications where requests are processed independently, and correctness can often be verified immediately.</p><p>Autonomous systems behave differently. Many AI-driven systems operate through continuous reasoning loops, where each decision influences subsequent actions. Correctness emerges not from a single computation but from sequences of interactions across components and over time. A retrieval system may return contextually inappropriate but technically valid information. A <a href="https://spectrum.ieee.org/ai-agent-benchmarks" target="_blank">planning agent</a> may generate steps that are locally reasonable but globally unsafe. A distributed decision system may execute correct actions in the wrong order.</p><p>None of these conditions necessarily produces errors. From the perspective of conventional observability, the system appears healthy. From the perspective of its intended purpose, it may already be failing.</p><h2>Why Autonomy Changes Failure</h2><p>The deeper issue is architectural. Traditional software systems were built around discrete operations: a request arrives, the system processes it, and the result is returned. Control is episodic and externally initiated by a user, scheduler, or external trigger.</p><p>Autonomous systems change that structure. 
Instead of responding to individual requests, they observe, reason, and act continuously. AI agents maintain context across interactions. Infrastructure systems adjust resources in real time. Automated workflows trigger additional actions without human input.</p><p>In these systems, correctness depends less on whether any single component works and more on coordination across time.</p><p>Distributed-systems engineers have long wrestled with issues of coordination. But this is coordination of a new kind. It’s no longer about things like keeping data consistent across services. It’s about ensuring that a stream of decisions—made by models, reasoning engines, planning algorithms, and tools, all operating with partial context—adds up to the right outcome.</p><p>A modern AI system may evaluate thousands of signals, generate candidate actions, and execute them across a distributed infrastructure. Each action changes the environment in which the next decision is made. Under these conditions, small <a href="https://spectrum.ieee.org/ai-mistakes-schneier" target="_blank">mistakes</a> can compound. A step that is locally reasonable can still push the system further off course.</p><p>Engineers are beginning to confront what might be called behavioral reliability: whether an autonomous system’s actions remain aligned with its intended purpose over time.</p><h2>The Missing Layer: Behavioral Control</h2><p>When organizations encounter quiet failures, the initial instinct is to improve monitoring: deeper logs, better tracing, more analytics. Observability is essential, but it only shows that the behavior has already diverged—it doesn’t correct it.</p><p>Quiet failures require something different: the ability to shape system behavior while it is still unfolding. 
In other words, autonomous systems increasingly need control architectures, not just monitoring.</p><p>Engineers in industrial domains have long relied on <a href="https://en.wikipedia.org/wiki/Supervisory_control" target="_blank">supervisory control systems</a>. These are software layers that continuously evaluate a system’s status and intervene when behavior drifts outside safe bounds. Aircraft flight-control systems, power-grid operations, and large manufacturing plants all rely on such supervisory loops. Software systems historically avoided them because most applications didn’t need them. Autonomous systems increasingly do.</p><p>Behavioral monitoring in AI systems focuses on whether actions remain aligned with intended purpose, not just whether components are functioning. Instead of relying only on metrics such as latency or error rates, engineers look for signs of behavior drift: <a href="https://en.wikipedia.org/wiki/Concept_drift" target="_blank">shifts in outputs</a>, inconsistent handling of similar inputs, or changes in how multistep tasks are carried out. An AI assistant that begins citing outdated sources, or an automated system that takes corrective actions more often than expected, may signal that the system is no longer using the right information to make decisions. In practice, this means tracking outcomes and patterns of behavior over time.</p><p>Supervisory control builds on these signals by intervening while the system is running. A supervisory layer checks whether ongoing actions remain within acceptable bounds and can respond by delaying or blocking actions, limiting the system to safer operating modes, or routing decisions for review. In more advanced setups, it can adjust behavior in real time—for example, by restricting data access, tightening constraints on outputs, or requiring extra confirmation for high-impact actions.</p><p>Together, these approaches turn reliability into an active process. 
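<p>A supervisory layer of this kind can be a thin wrapper around the system’s action stream: actions within bounds execute, actions based on stale inputs are routed for review, and high-impact actions are blocked. The thresholds and action fields below are hypothetical examples, not any real product’s API:</p>

```python
# Minimal sketch of a supervisory-control layer: every proposed action is
# checked against behavioral bounds before it executes. The bounds and the
# Action fields (cost, source_age_days) are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    cost: float            # impact of the action, in arbitrary units
    source_age_days: int   # age of the data the decision was based on

class Supervisor:
    def __init__(self, max_cost=100.0, max_source_age_days=30):
        self.max_cost = max_cost
        self.max_source_age_days = max_source_age_days
        self.held_for_review = []

    def submit(self, action, execute):
        # Stale inputs are a quiet-failure signal: route for human review
        # rather than silently executing on obsolete information.
        if action.source_age_days > self.max_source_age_days:
            self.held_for_review.append(action)
            return "held: stale inputs"
        # High-impact actions are blocked outright.
        if action.cost > self.max_cost:
            return "blocked: exceeds cost bound"
        return execute(action)

sup = Supervisor()
run = lambda a: f"executed {a.name}"
print(sup.submit(Action("send_summary", 1.0, 2), run))   # executed send_summary
print(sup.submit(Action("send_summary", 1.0, 90), run))  # held: stale inputs
```

<p>The essential design choice is that the check happens before execution, while the behavior is still unfolding, rather than in a dashboard after the fact.</p>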
Systems don’t just run, they are continuously checked and steered. Quiet failures may still occur, but they can be detected earlier and corrected while the system is operating.</p><h2>A Shift in Engineering Thinking</h2><p>Preventing quiet failures requires a shift in how engineers think about reliability: from ensuring components work correctly to ensuring system behavior stays aligned over time. Rather than assuming that correct behavior will emerge automatically from component design, engineers must increasingly treat behavior as something that needs active supervision.</p><p>As AI systems become more autonomous, this shift will likely spread across many domains of computing, including cloud infrastructure, robotics, and large-scale decision systems. The hardest engineering challenge may no longer be building systems that work, but ensuring that they continue to do the right thing over time.</p>]]></description><pubDate>Tue, 07 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/ai-reliability</guid><category>Software-failure</category><category>Software-reliability</category><category>Software-engineering</category><category>Cloud-computing</category><category>Autonomous-systems</category><dc:creator>Varun Raj</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/a-series-of-135-green-dots-slowly-transition-from-bright-green-to-black.png?id=65461614&amp;width=980"></media:content></item><item><title>Over-the-Air Computation Uses Radio Interference to Crunch Data</title><link>https://spectrum.ieee.org/wireless-network-over-air-computation</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/abstract-wavy-lines-and-geometric-circles-forming-a-colorful-fluid-layered-pattern.png?id=65476058&width=1200&height=800&coordinates=0%2C666%2C0%2C667"/><br/><br/><p><strong>Picture a highway with</strong> networked autonomous cars driving along it. On a serene, cloudless day, these cars need only exchange thimblefuls of data with one another. Now picture the same stretch in a sudden snow squall: The cars rapidly need to share vast amounts of essential new data about slippery roads, emergency braking, and changing conditions.</p><p>These two very different scenarios involve vehicle networks with very different computational loads. Eavesdropping on network traffic using a ham radio, you wouldn’t hear much static on the line on a clear, calm day. On the other hand, sudden whiteout conditions on a wintry day would sound like a cacophony of sensor readings and network chatter.</p><p>Normally this cacophony would mean two simultaneous problems: congested communications and a rising demand for computing power to handle all the data. But what if the network itself could expand its processing capabilities with every rising decibel of chatter and with every sensor’s chirp?</p><p>Traditional wireless networks treat communication as separate from computation. First you move data, then you process it. However, an emerging paradigm called over-the-air computation (OAC) could fundamentally change the game. First <a href="https://bobaknazer.github.io/files/bn_mg_allerton05.pdf" target="_blank">proposed in 2005</a> and recently <a href="https://ieeexplore.ieee.org/abstract/document/11358822" target="_blank">developed and prototyped</a> by a <a href="https://arxiv.org/abs/2311.06829" target="_blank">number of teams</a> around the world, <a href="https://ieeexplore.ieee.org/document/11119744" target="_blank">including ours</a>, OAC combines communication and computation into a single framework. 
This means that an OAC sensor network—whether shared among <a href="https://spectrum.ieee.org/tag/autonomous-vehicles" target="_self">autonomous vehicles</a>, <a href="https://spectrum.ieee.org/tag/internet-of-things" target="_self">Internet-of-Things</a> sensors, <a href="https://spectrum.ieee.org/tag/smart-home" target="_self">smart-home</a> devices, or <a href="https://spectrum.ieee.org/tag/smart-cities" target="_self">smart-city</a> infrastructure—can carry some of the network’s computing burden as conditions demand.</p><p>The idea takes advantage of a basic physical fact of electromagnetic radiation: When multiple devices transmit simultaneously, their wireless signals naturally combine in the air. Normally, such cross talk is seen as interference, which radios are designed to suppress—especially digital radios with their error-correcting schemes and inherent resistance to low-level noise.</p><p><span>But if we carefully design the transmissions, cross talk can enable a wireless network to directly perform some calculations, such as a sum or an average. </span><a href="https://ieeexplore.ieee.org/document/9663107" target="_blank">Some prototypes today</a><span> do this with </span><a href="https://arxiv.org/abs/2212.06596" target="_blank">analog-style signaling</a><span> on otherwise digital radios—so that the superimposed waveforms represent numbers that have been added before digital signal processing takes place.</span></p><p>Researchers are also beginning to explore <a href="https://arxiv.org/abs/2405.15969" target="_blank">digital, over-the-air computation schemes</a>, which embed the same ideas <a href="https://dl.acm.org/doi/abs/10.1109/TWC.2025.3540455" target="_blank">into digital formats</a>, ultimately allowing the prototype schemes to coexist with today’s digital radio protocols. 
These various over-the-air computation techniques can help networks scale gracefully, enabling new classes of real-time, data-intensive services while making more efficient use of wireless spectrum.</p><p>OAC, in other words, turns signal interference from a problem into a feature, one that can help wireless systems support massive growth.</p><h2>Reimagining radio interference as infrastructure</h2><p>For decades, engineers designed radio communications protocols with <a href="https://en.wikipedia.org/wiki/Channel_access_method" target="_blank">one overriding goal</a>: to isolate each signal and recover each message cleanly. Today’s networks face a different set of pressures. They must coordinate large groups of devices on shared tasks—such as AI model training or combining disparate sensor readings, also known as <a href="https://spectrum.ieee.org/tag/sensor-fusion" target="_self">sensor fusion</a>—while exchanging as little raw data as possible, to improve both efficiency and privacy. For these reasons, a new approach to transmitting and receiving data may be worth considering, one that doesn’t rely on collecting and storing every individual device’s contributions.</p><p>By turning interference into computation, OAC transforms the wireless medium from a contested battlefield into a collaborative workspace. This paradigm shift has far-reaching consequences: Signals no longer compete for isolation; they cooperate to achieve shared outcomes. OAC cuts through layers of digital processing, reduces latency, and lowers energy consumption.</p><p>Even very simple operations, such as addition, can be the building blocks of surprisingly powerful computations. Many complex processes can be broken down into combinations of simpler pieces, much like how a rich sound can be re-created by combining a few basic tones. 
By carefully shaping what devices transmit and how the result is interpreted at the receiver, the wireless channel running OAC can carry out other calculations beyond addition. In practice, this means that with the right design, wireless signals can compute a number of key functions that modern algorithms rely on.</p><h3>THE PROBLEM (TRADITIONAL APPROACH) </h3><br/><img alt="Diagram of cars at mixed speeds with complex dashed feedback loops between them" class="rm-shortcode" data-rm-shortcode-id="bfb6f90a49f60c28d337ca50c3da7bb5" data-rm-shortcode-name="rebelmouse-image" id="774d5" loading="lazy" src="https://spectrum.ieee.org/media-library/diagram-of-cars-at-mixed-speeds-with-complex-dashed-feedback-loops-between-them.png?id=65476280&width=980"/><br/><p>For instance, many key tasks in modern networks don’t require the logging and storage of every individual network transmission. Rather, the goal is to infer properties about aggregate patterns of network traffic—<a href="https://ieeexplore.ieee.org/document/4118472" target="_blank">reaching agreement or identifying what matters most</a> about the traffic. <a href="https://lamport.azurewebsites.net/pubs/paxos-simple.pdf" target="_blank">Consensus algorithms</a> rely on majority voting to <a href="https://openreview.net/pdf?id=BJxhijAcY7" target="_blank">ensure reliable decisions,</a> even when some devices fail. 
Artificial intelligence systems depend on <a href="https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf" target="_blank">matrix reduction and simplification operations</a> such as “<a href="https://en.wikipedia.org/wiki/Pooling_layer#Max_pooling" target="_blank">max pooling</a>” (keeping only peak values) to <a href="https://pages.ucsd.edu/~ztu/publication/pami_gpooling.pdf" target="_blank">extract the most useful signals</a> from noisy data.</p><p>In smart cities and smart grids, what <a href="https://www.tandfonline.com/doi/full/10.1080/01621459.2020.1736081" target="_blank">matters most</a> is often not individual readings but the <a href="https://www.sciencedirect.com/science/article/abs/pii/S1364032123006159?via%3Dihub" rel="noopener noreferrer" target="_blank">distribution</a>. How many devices report each traffic condition? What is the range of demand across neighborhoods? These are histogram questions—summaries of the device counts per category.</p><p>With type-based multiple access (TBMA), an over-the-air computation <a href="https://ieeexplore.ieee.org/document/1576988" rel="noopener noreferrer" target="_blank">method we use</a>, devices reporting a given condition transmit together over a shared channel. Their signals add up, and the receiver sees only the total signal strength per category. In a single transmission, the entire histogram emerges without ever identifying individual devices. And the more devices there are, the better the estimate. The result is greater spectrum efficiency, with lower latency and scalable, privacy-friendly operations—all from letting the wireless medium do the aggregating and counting.</p><p>It’s easy to imagine how analog values transmitted over the air could be summed via superposition. The amplitudes from different signals add together, so the values those amplitudes represent also simply add together. 
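The counting mechanism TBMA relies on can be simulated in a few lines of Python. This is a toy, idealized sketch: perfect synchronization, no noise, and one orthogonal tone per reporting category are all assumptions of the illustration, and `tbma_histogram` is a hypothetical name, not code from the prototypes.

```python
import cmath

def tbma_histogram(readings, num_bins, num_samples=64):
    # Each device transmits a unit-amplitude tone whose frequency
    # index is its sensor reading; on the shared channel the
    # waveforms from all devices simply add (superposition).
    channel = [0j] * num_samples
    for r in readings:
        for n in range(num_samples):
            channel[n] += cmath.exp(2j * cmath.pi * r * n / num_samples)

    # The receiver's filter bank (here, a plain DFT) recovers the
    # total signal strength per category: the device count per bin.
    counts = []
    for k in range(num_bins):
        bin_total = sum(channel[n] * cmath.exp(-2j * cmath.pi * k * n / num_samples)
                        for n in range(num_samples))
        counts.append(round(abs(bin_total) / num_samples))
    return counts

# Six devices report readings; one superposed transmission yields the histogram.
print(tbma_histogram([4, 7, 4, 2, 7, 7], num_bins=10))
# → [0, 0, 1, 0, 2, 0, 0, 3, 0, 0]
```

Because the tones are orthogonal, the receiver separates the superposed energy exactly; real deployments must contend with noise, fading, and timing error.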
The more challenging question concerns preserving that additive magic, but with <em>digital </em>signals.</p><p>Here’s how OAC does it. Consider, for instance, one TBMA approach for a network of sensors that gives each possible sensor reading its own dedicated frequency channel. Every sensor on the network that reads “4” transmits on frequency four; every sensor that reads “7” transmits on frequency seven. When multiple devices share the same reading, their amplitudes combine. The stronger the combined signal at a given frequency, the more devices there are reporting that particular value.</p><p>A <a href="https://en.wikipedia.org/wiki/Orthogonal_frequency-division_multiplexing" rel="noopener noreferrer" target="_blank">receiver equipped with a bank of filters tuned to each frequency</a> reads out a count of votes for every possible sensor value. In a single, simultaneous transmission, the whole network has reported its state.</p><p>It might seem paradoxical—digital computation riding atop what appears to be an analog physical effect. But this is also true of all “digital” radio. A Wi-Fi transmitter does not launch ones and zeroes into the air; it modulates electromagnetic waves whose amplitudes and phases encode digital data. The “digital” label ultimately refers to the information layer, not the physics. What makes OAC digital, in the same sense, is that the values being computed—each sensor reading, each frequency-bin count—are discrete and quantized from the start. And because they are discrete, the same <a href="https://arxiv.org/abs/0908.2119" rel="noopener noreferrer" target="_blank">error-correction machinery</a> that has made digital communications robust for decades can be applied here too.</p><p>Synchronization is where OAC’s demands diverge most sharply from digital wireless conventions. 
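To see why, consider a toy two-transmitter example (illustrative numbers only, not a model of any particular system): two unit-amplitude carriers reinforce to amplitude 2 when perfectly aligned, but cancel when one arrives half a cycle late. At gigahertz carrier frequencies, half a cycle lasts only a fraction of a nanosecond.

```python
import cmath

def superposed_amplitude(phase_error_rad):
    # Two unit-amplitude carriers; the second arrives with a phase error.
    return abs(cmath.exp(0j) + cmath.exp(1j * phase_error_rad))

print(superposed_amplitude(0.0))           # perfect sync: constructive, 2.0
print(superposed_amplitude(cmath.pi / 2))  # quarter-cycle error: ~1.414
print(superposed_amplitude(cmath.pi))      # half-cycle error: destructive, ~0.0
```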
Many OAC variants today require something akin to a shared clock at nanosecond precision: Every signal’s phase must be synchronized, or the superposition runs the risk of collapsing into destructive interference. While TBMA relaxes this burden a bit—devices need only share a time window—real engineering challenges remain before over-the-air computation is ready for the mobile world.</p><h2>How will over-the-air computation work in the field?</h2><p>Over-the-air computation has in recent years moved from theory to initial proofs-of-concept and network test runs. Our research teams in South Carolina and Spain have built working prototypes that deliver repeatable results—with no cables and no external timing sources such as GPS-locked references. All synchronization is handled within the radios themselves.</p><p>Our team at the University of South Carolina (led by Sahin) started with off-the-shelf <a href="https://spectrum.ieee.org/hardware-for-your-software-radio" target="_self">software-defined radios</a>—Analog Devices’ <a href="https://www.analog.com/en/resources/evaluation-hardware-and-software/evaluation-boards-kits/adalm-pluto.html#eb-overview" rel="noopener noreferrer" target="_blank">Adalm-Pluto</a>. We modified the <a href="https://spectrum.ieee.org/painless-fpga-programming" target="_self">field-programmable gate array</a> hardware inside each radio so it could respond to a trigger signal transmitted from another radio. This simple hack enabled simultaneous transmission, a core requirement for OAC. Our setup used five radios acting as edge devices and one acting as a base station. The task involved training a neural network to perform image recognition over the air. 
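The aggregation behind that experiment can be sketched abstractly. In federated-style training, the base station needs only the average of the devices' local updates, which is exactly the kind of sum that superposition provides. Below is a minimal, idealized sketch; the function name and the noise-free channel are assumptions for illustration, not the prototype's actual code.

```python
def over_the_air_average(local_updates):
    # Each device transmits its update vector as analog amplitudes in the
    # same instant; the channel's superposition produces the element-wise
    # sum, and the base station only has to scale by the device count.
    n = len(local_updates)
    summed = [sum(vals) for vals in zip(*local_updates)]
    return [s / n for s in summed]

# Three devices' local gradient vectors are averaged in one transmission,
# without any raw data (or any individual update) crossing the network.
print(over_the_air_average([[0.2, -1.0], [0.4, 0.0], [0.0, 1.0]]))
# ≈ [0.2, 0.0]
```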
Our system, whose results we <a href="https://ieeexplore.ieee.org/document/10008778" rel="noopener noreferrer" target="_blank">first reported in 2022</a>, achieved 95 percent accuracy in image recognition without ever moving raw data across the network.</p><h3>THE OVER-THE-AIR COMPUTATION (OAC) APPROACH</h3><br/><img alt="Illustration of cars adjusting speed with colored dashed lines indicating traffic signal control." class="rm-shortcode" data-rm-shortcode-id="05f47093d9693ac5b148c8e62fbb1374" data-rm-shortcode-name="rebelmouse-image" id="eb61f" loading="lazy" src="https://spectrum.ieee.org/media-library/illustration-of-cars-adjusting-speed-with-colored-dashed-lines-indicating-traffic-signal-control.png?id=65487320&width=980"/><br/><p>We also <a href="https://mentor.ieee.org/802.11/dcn/22/11-22-1483-01-aiml-wireless-for-ml-over-the-air-computation.pptx" target="_blank">demonstrated our initial OAC setup</a> at a March 2025 <a href="https://1.ieee802.org/march-2025-plenary-session-in-atlanta-ga-usa/" target="_blank">IEEE 802.11 working group meeting,</a> where an <a href="https://www.ieee802.org/11/Reports/aiml_update.htm" target="_blank">IEEE committee was studying AI and machine learning capabilities</a> for future Wi-Fi standards. As we showed, OAC’s road ahead doesn’t necessarily require reinventing wireless technology. Rather, it can build on and repurpose existing protocols already in Wi-Fi and 5G.</p><p>However, before OAC can become a routine feature of commercial wireless systems, networks must provide finer-tuned coordination of timing and signal power levels. Mobility is a difficult problem, too. When mobile devices move around, phase synchronization degrades quickly, and computational accuracy can suffer. Present-day OAC tests work in controlled lab environments. 
But making them robust in dynamic, real-world settings—vehicles on highways, sensors scattered across cities—remains a new frontier for this emerging technology.</p><p>Both of our teams are now scaling up our prototypes and demonstrations. We are together aiming to understand how over-the-air computation performs as the number of devices increases beyond lab-bench scales. Turning prototypes and test-beds into production systems for autonomous vehicles and smart cities will require anticipating tomorrow’s mobility and synchronization problems—and no doubt a range of other challenges down the road.</p><h2>Where OAC goes from here</h2><p>To realize the technological ambitions of over-the-air computation, nanosecond timing and exquisite RF signal design will be crucial. Fortunately, recent engineering advances have made substantial progress in both of these fields.</p><p>Because OAC demands waveform superposition, it benefits from tight coordination in time, frequency, phase, and amplitude among RF transmitters. Such requirements build naturally on decades of work in wireless communication systems designed for shared access. Modern networks <a href="https://www.mdpi.com/2673-4001/5/1/4" target="_blank">already synchronize large numbers of devices</a> using <a href="https://ieeexplore.ieee.org/document/10637136" rel="noopener noreferrer" target="_blank">high-precision timing </a>and <a href="https://peerj.com/articles/cs-2687/" rel="noopener noreferrer" target="_blank">uplink coordination</a>.</p><p>OAC uses the same synchronization techniques already in cellular and Wi-Fi systems. But to actually run over-the-air computations, more precision still will be needed. 
<a href="https://ieeexplore.ieee.org/document/4657149" rel="noopener noreferrer" target="_blank">Power control</a>, <a href="https://ieeexplore.ieee.org/document/5118192" rel="noopener noreferrer" target="_blank">gain adjustment</a>, and <a href="https://link.springer.com/article/10.1186/s13638-016-0670-9" rel="noopener noreferrer" target="_blank">timing calibration</a> are <a href="https://ieeexplore.ieee.org/document/11016910" rel="noopener noreferrer" target="_blank">standard tools</a> today. We expect that engineers will further refine these existing methods to meet OAC’s more stringent accuracy demands.</p><h3>THE OAC RESULT </h3><br/><img alt="OAC result bar chart: slow 1 (blue), medium 3 (green), fast 1 (red)." class="rm-shortcode" data-rm-shortcode-id="3042c6dc72ca2f66e275f68504ac4f6a" data-rm-shortcode-name="rebelmouse-image" id="b72bb" loading="lazy" src="https://spectrum.ieee.org/media-library/oac-result-bar-chart-slow-1-blue-medium-3-green-fast-1-red.png?id=65476295&width=980"/><p><span>In some cases, in fact, imperfect timing standards may be all that’s needed. Designs and emerging standards in 5G and 6G wireless systems today use </span><a href="https://ieeexplore.ieee.org/abstract/document/9834918" target="_blank">clever encoding that tolerates imperfect synchronization</a><span>. We anticipate that minor timing errors, </span><a href="https://en.wikipedia.org/wiki/Frequency_drift" target="_blank">frequency drift</a><span>, and signal overlap can in some cases be accommodated within an OAC protocol. Instead of fighting messiness, over-the-air computation may sometimes simply be able to roll with it.</span></p><p>Another challenge ahead concerns shifting processing to the transmitter. Instead of the receiver trying to clean up overlapping signals, a better and more efficient approach would involve each transmitter fixing its own signal before sending. 
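The intuition behind this can be captured in a toy sketch (idealized: each transmitter knows its own channel gain exactly, and the function name is hypothetical). If the channel will multiply a signal by gain h, the transmitter pre-multiplies by 1/h, so what superposes at the receiver is the intended value.

```python
def precompensated_air_sum(values, channel_gains):
    # Each transmitter divides out its own known channel gain before
    # sending; the channel then multiplies by that gain in flight,
    # so the superposed signal at the receiver is the plain sum.
    transmitted = [x / h for x, h in zip(values, channel_gains)]
    received = sum(h * t for t, h in zip(transmitted, channel_gains))
    return received

# Despite unequal channels, the air computes the sum 1 + 2 + 3.
print(precompensated_air_sum([1.0, 2.0, 3.0], [0.5, 2.0, 1.5]))
# → 6.0
```

In practice, transmit-power limits cap how much a weak channel can be inverted, which is one reason power control and gain adjustment matter so much here.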
Such “pre-compensation” techniques are <a href="https://ieeexplore.ieee.org/document/4350229" target="_blank">already used in MIMO technology</a> (<a href="https://arxiv.org/abs/1902.07678" target="_blank">multi-antenna systems</a> in modern <a href="https://standards.ieee.org/beyond-standards/the-evolution-of-wi-fi-technology-and-standards/" target="_blank">Wi-Fi</a> and cellular networks). OAC would just be repurposing techniques that have already been developed for 5G and 6G technologies.</p><p>Materials science can also help OAC in the years ahead. New generations of <a href="https://spectrum.ieee.org/metamaterials-could-solve-one-of-6gs-big-problems" target="_self">reconfigurable intelligent surfaces</a> shape signals via tiny adjustable elements in the antenna. The surfaces catch radio signals and reshape them as they bounce around. Reconfigurable surfaces can <a href="https://ieeexplore.ieee.org/document/9140329/" target="_blank">strengthen useful signals, eliminate interference, and synchronize wavefront arrivals</a> that would otherwise be out of sync. OAC stands to benefit from these and other emerging capabilities that intelligent surfaces will provide.</p><p>At the system level, OAC will represent a fundamental shift in wireless network system design. Wireless engineers have <a href="https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_avoidance" target="_blank">traditionally tried to avoid</a> designing devices that transmit at the same time. But over-the-air systems will flip the old, familiar design standards on their head.</p><p>One might object that OAC stands to upend decades of existing wireless signal standards that have always presumed data pipes to be data pipes only—not microcomputers as well. Yet we do not anticipate much difficulty merging OAC with existing wireless standards. 
In a sense, in fact, the <a href="https://www.ieee802.org/11/" target="_blank">IEEE 802.11</a> and <a href="https://www.3gpp.org/" target="_blank">3GPP</a> (3rd Generation Partnership Project) standards bodies have already shown the way.</p><p>A network can set aside certain brief time windows or narrow slices of bandwidth for over‑the‑air computation, and use the rest for ordinary data. From the radio’s point of view, OAC just becomes another operating mode that is turned on when needed and left off the rest of the time.</p><p>Over the past decade, both the IEEE and 3GPP have <a href="https://ieeexplore.ieee.org/document/6515173" target="_blank">integrated once-experimental technologies</a> into their wireless standards—for example, <a href="https://ieeexplore.ieee.org/document/6732923" target="_blank">millimeter-wave mobile communications</a>, <a href="https://link.springer.com/article/10.1155/2011/496763" target="_blank">multiuser MIMO</a>, <a href="https://ieeexplore.ieee.org/document/8458146" target="_blank">beamforming</a>, and <a href="https://ieeexplore.ieee.org/document/7926923" target="_blank">network slicing</a>—by defining each new technological advance as an optional feature. OAC, we suggest, can also operate alongside conventional wireless data traffic as an optional service. Because OAC places high demands on timing and accuracy, networks will need the ability to enable or disable over‑the‑air computation on a per‑application basis.</p><p>With continued progress, OAC will evolve from lab prototype to standardized wireless capability through the 2020s and into the decade ahead. In the process, the wireless medium will transform from a passive data carrier into an active computational partner—providing essential infrastructure for the real-time intelligent systems that future wireless technologies will demand.</p><p>So on that snowy highway sometime in the 2030s, vehicles and sensors won’t wait for permission to think together. 
Using the emerging over-the-air computation protocols that we’re helping to pioneer, simultaneous computation will be the new default. The networks will work as one.<span class="ieee-end-mark"></span></p>]]></description><pubDate>Tue, 07 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/wireless-network-over-air-computation</guid><category>Wireless-networks</category><category>Network-infrastructure</category><category>Autonomous-vehicles</category><category>Smart-cities</category><category>Interference</category><category>Computational-resources</category><dc:creator>Ana I. Pérez-Neira</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/abstract-wavy-lines-and-geometric-circles-forming-a-colorful-fluid-layered-pattern.png?id=65476058&amp;width=980"></media:content></item><item><title>AI Is Insatiable</title><link>https://spectrum.ieee.org/high-bandwidth-memory-shortage</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/robot-hand-catching-falling-computer-chips-from-an-open-snack-bag-in-pop-art-style.png?id=65425799&width=1200&height=800&coordinates=0%2C180%2C0%2C181"/><br/><br/><p>While browsing our website a few weeks ago, I stumbled upon “<a href="https://spectrum.ieee.org/dram-shortage" target="_self">How and When the Memory Chip Shortage Will End</a>” by Senior Editor Samuel K. Moore. His analysis focuses on the current DRAM shortage caused by AI hyperscalers’ ravenous appetite for memory, a major constraint on the speed at which large language models run. Moore provides a clear explanation of the shortage, particularly for high bandwidth memory (HBM).</p><p>As we and the rest of the tech media have documented, AI is a resource hog. AI <a href="https://spectrum.ieee.org/data-center-sustainability-metrics" target="_self">electricity consumption</a> could account for up to 12 percent of all U.S. power by 2028. <a href="https://spectrum.ieee.org/ai-energy-use" target="_self">Generative AI queries</a> consumed 15 terawatt-hours in 2025 and are projected to consume 347 TWh by 2030. <a href="https://spectrum.ieee.org/data-centers-pollution" target="_self">Water consumption for cooling AI data centers</a> is predicted to double or even quadruple by 2028 compared to 2023.</p><p>But Moore’s reporting shines a light on an obscure corner of the AI boom. <a href="https://spectrum.ieee.org/processing-in-dram-accelerates-ai" target="_self">HBM</a> is a particular type of memory product tailor-made to serve AI processors. Makers of those processors, notably Nvidia and AMD, are demanding more and more memory for each of their chips, driven by the needs and wants of firms like Google, Microsoft, OpenAI, and Anthropic, which are underwriting an unprecedented buildout of data centers. 
And some of these facilities are colossal: You can read about the engineering challenges of building Meta’s mind-boggling 5-gigawatt Hyperion site in Louisiana, in “<a href="https://spectrum.ieee.org/5gw-data-center" target="_blank">What Will It Take to Build the World’s Largest Data Center?</a>”</p><p>We realized that Moore’s HBM story was both important and unique, and so we decided to include it in this issue, with some updates since the original published on 10 February. We paired it with a recent story by Contributing Editor Matthew S. Smith exploring how the memory-chip shortage is driving up the price of low-cost computers like the <a href="https://www.raspberrypi.com/" rel="noopener noreferrer" target="_blank">Raspberry Pi</a>. The result is “<a href="https://spectrum.ieee.org/dram-shortage" target="_blank">AI Is a Memory Hog</a>.”</p><p>The big question now is, When will the shortage end? Price pressure caused by AI hyperscaler demand on all kinds of consumer electronics is being masked by stubborn inflation combined with a perpetually shifting tariff regime, at least here in the United States. So I asked Moore what indicators he’s looking for that would signal an easing of the memory shortage.</p><p>“On the supply side, I’d say that if any of the big three HBM companies—<a href="https://www.micron.com/" rel="noopener noreferrer" target="_blank">Micron</a>, <a href="https://semiconductor.samsung.com/dram/" rel="noopener noreferrer" target="_blank">Samsung</a>, and <a href="https://www.skhynix.com/" rel="noopener noreferrer" target="_blank">SK Hynix</a>—say that they are adjusting the schedule of the arrival of new production, that’d be an important signal,” Moore told me. “On the demand side, it will be interesting to see how tech companies adapt up and down the supply chain. Data centers might steer toward hardware that sacrifices some performance for less memory. 
Startups developing all sorts of products might pivot toward creative redesigns that use less memory. Constraints like shortages can lead to interesting technology solutions, so I’m looking forward to covering those.”</p><p><span>To be sure you don’t miss any of Moore’s analysis of this topic and to stay current on the entire spectrum of technology development, <a href="https://spectrum.ieee.org/newsletters/" target="_blank">sign up for our weekly newsletter, Tech Alert.</a></span></p>]]></description><pubDate>Mon, 06 Apr 2026 14:22:58 +0000</pubDate><guid>https://spectrum.ieee.org/high-bandwidth-memory-shortage</guid><category>Semiconductors</category><category>Dram</category><category>Memory</category><category>Chips</category><category>Ai</category><category>Data-centers</category><dc:creator>Harry Goldstein</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/robot-hand-catching-falling-computer-chips-from-an-open-snack-bag-in-pop-art-style.png?id=65425799&amp;width=980"></media:content></item><item><title>What Happened When We Set Up a Robotics Lab in a Mall</title><link>https://spectrum.ieee.org/boston-dynamics-spot-interaction</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/families-watch-a-colorful-robotic-dog-demo-at-a-robotics-and-ai-institute-exhibit.jpg?id=65453180&width=1200&height=800&coordinates=62%2C0%2C63%2C0"/><br/><br/><p>Building the next generation of robots for successful integration into our homes, offices, and factories is more than just solving the hardware and software problems—we also need to understand how they will be perceived and how they can work effectively with people in those spaces.</p> <p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <a href="https://rai-inst.com/" target="_blank"><img alt="Robotics and AI Institute logo with text about post originally appearing there" class="rm-shortcode" data-rm-shortcode-id="09961581414b810cff45f77932185cb3" data-rm-shortcode-name="rebelmouse-image" id="89ff0" loading="lazy" src="https://spectrum.ieee.org/media-library/robotics-and-ai-institute-logo-with-text-about-post-originally-appearing-there.png?id=65453513&width=980"/></a> </p><p>In summer 2025, <a href="https://spectrum.ieee.org/boston-dynamics-ai-institute-hyundai" target="_blank">RAI Institute</a> set up a free pop-up robot experience in the CambridgeSide mall, designed to let people experience state-of-the-art robotics firsthand. While news stories about robots and AI are common, with some being overly critical and some overly optimistic, most people have not encountered robots in the flesh (or metal), as it were. 
With no direct experience, their opinions are largely shaped by pop culture and social media, both of which are more focused on sensational stories than on accurate information about how the robots might be used effectively and where the technology still falls short. Our goal with the pop-up was twofold: first, to give people an opportunity to see robots that they would otherwise not have a chance to experience; and second, to better understand how the public feels about interacting with these robots.</p><h2>Designing a Robot Experience for the General Public</h2><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Three experimental robotic prototypes displayed behind barriers in a bright gallery." class="rm-shortcode" data-rm-shortcode-id="a1fe59976ca74226f29b65137649c4d4" data-rm-shortcode-name="rebelmouse-image" id="c9163" loading="lazy" src="https://spectrum.ieee.org/media-library/three-experimental-robotic-prototypes-displayed-behind-barriers-in-a-bright-gallery.webp?id=65453673&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Some earlier versions of legged robots, built by the RAI Institute’s Executive Director, Marc Raibert</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">RAI Institute</small></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Red robot dog and electric bike displayed in glass cases inside a modern mall." 
class="rm-shortcode" data-rm-shortcode-id="f0c655444535aac7e11e20510c8bbbae" data-rm-shortcode-name="rebelmouse-image" id="6b96a" loading="lazy" src="https://spectrum.ieee.org/media-library/red-robot-dog-and-electric-bike-displayed-in-glass-cases-inside-a-modern-mall.webp?id=65453671&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The ANYmal by ANYrobotics (left) and a previous model of the RAI Institute’s UMV (right)</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">RAI Institute</small></p><p><span>The pop-up space had two areas: a museum area where people could see historical and modern robots, including some <a href="https://spectrum.ieee.org/marc-raibert-boston-dynamics-instutute" target="_blank">RAI Institute</a> builds like the </span><a href="https://rai-inst.com/resources/blog/designing-wheeled-robotic-systems/" target="_blank">UMV</a>,<span> and an interactive experience called “Drive-a-Spot.” This area was a driving arena where anyone who came by could take the controls of a Spot quadruped, one of the more recognizable, commercially available robots today.</span></p><p>The guest robot drivers used a custom controller built on an adaptive video game controller that was designed so that anyone of any age could use it. It featured basic controls: move forward, back, left, right, adjust height, sit, stand, and tilt. The buttons were large so that tiny or elderly hands could use the controller, and the people who drove Spot ranged in age from 2 to over 90.<br/></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Adaptive gaming controller with large programmable buttons on a black table." 
class="rm-shortcode" data-rm-shortcode-id="d191483045e332282c7d73dac0962f80" data-rm-shortcode-name="rebelmouse-image" id="2545f" loading="lazy" src="https://spectrum.ieee.org/media-library/adaptive-gaming-controller-with-large-programmable-buttons-on-a-black-table.jpg?id=65453210&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The guest robot drivers used a custom controller built on an adaptive video game controller that was designed so that anyone of any age could use it.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">RAI Institute</small></p><p>The demo area was designed to be a bit challenging for the Spot robot to maneuver in—it contained tight passages, low obstacles to step over, a barrier to crouch under, and taller objects the robot had to avoid. Much to the surprise of many of our guests, Spot can autonomously adjust itself to traverse and avoid those obstacles while being steered with the joystick.<br/></p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="1c2dcee3b7a437fc3f967b9095f81e91" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/dPjUkJGC5Xg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">RAI Institute</small> </p><p><span>The driving arena’s theme rotated every few weeks across four scenarios: a factory, a home, a hospital, and an outdoor/disaster environment. 
These were chosen to contrast settings where robots are broadly accepted (industrial, emergency response) with settings where public ambivalence is well documented (domestic, healthcare).</span></p><p>The visitors who chose to drive the Spot robot could also participate in a short survey before and after their driving experience. The survey focused on two core dimensions:</p><ul><li><strong>Comfort: How comfortable would you feel if you encountered a robot in a factory, home, hospital, office, or outdoor/disaster scenario?</strong></li><li><strong>Suitability: How well would this robot work in each of those contexts?</strong> </li></ul><p>The survey also recorded emotional reactions immediately after driving, likelihood to recommend the experience, and open-ended responses about what they found memorable or surprising. The researchers were careful to separate the environment participants drove through from the scenarios they were asked to evaluate in the survey. This distinction is important for interpreting the results given below.</p><h2>Did Interacting With the Robot Change People’s Feelings about Robots?</h2><p><span>Of the approximately 10,000 guests who visited the Robot Lab, 10 percent drove Spot and opted in to our surveys. Of those surveyed, more than 65 percent had seen images or videos of Spot robots online, but most had never seen one of the robots in person.</span></p><h3>Increased Comfort Through Experience</h3><p>Across all five contexts presented in the survey (factory, home, hospital, office, and outdoor/disaster scenarios), comfort scores increased significantly after the driving session. 
The effects were small to moderate in magnitude, but they were consistent and statistically robust after correcting for multiple comparisons across all participants, from children to older adults.</p><p>The largest gain appeared in the outdoor/disaster context, which started with low comfort despite high perceived suitability. People already thought Spot would be useful in search-and-rescue scenarios; they just weren’t comfortable with it performing in that scenario. This discomfort may stem from media portrayals of quadruped robots in military contexts. A few minutes of hands-on control appears to partially dissolve that apprehension.</p><p>Participants who drove through the factory-themed arena showed no significant increase in comfort, but this scenario already had the highest rating of any context at baseline, leaving little room for improvement.</p><p>Regardless of previous experience, most people were neutral about having a Spot robot in their home before driving. After controlling the robot, however, they showed a statistically significant increase in comfort with having a Spot in their home and also felt that a Spot robot was more suitable for work in any environment, not just the one they had driven it in.</p><h3>Better Understanding of Where Robots Can Fit Into Daily Life</h3><p>Perceived suitability for Spot to operate in each context also increased. However, the pattern in the data is different. The largest gains weren’t in the high-baseline industrial and outdoor contexts. They were in home, office, and hospital—the very environments where people started out most skeptical.</p><p>Participants who drove the Spot robot in a home-themed environment didn’t just consider homes more suitable for robots; they also rated hospitals and offices as more suitable. This result suggests that hands-on control alters something more fundamental than just context-specific familiarity. 
It may change a person’s underlying understanding of a robot’s capabilities and, consequently, where they believe robots are appropriate.</p><h3>Results by Demographic</h3><p>The hands-on experience seems to be similarly effective across genders, although it does not completely eliminate existing disparities. For example, men reported higher baseline comfort than women across all five contexts. However, all genders improved at similar rates after interaction. The gap didn’t significantly widen or close in most contexts, though it did narrow in factory and office settings.</p><p>Age effects were more context dependent. Children (aged 8–17) rated factory environments as less comfortable and less suitable before the study. However, this could be because most children do not have experience with factory settings or industrial environments. After interaction, this gap largely persisted. By contrast, children showed stronger gains in office comfort than older adults and entered the study rating home contexts more favorably than adults did.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Stacked bar chart of survey participants by age group and gender categories." class="rm-shortcode" data-rm-shortcode-id="91a6e3f855ba0152f034182d4710df9d" data-rm-shortcode-name="rebelmouse-image" id="313e6" loading="lazy" src="https://spectrum.ieee.org/media-library/stacked-bar-chart-of-survey-participants-by-age-group-and-gender-categories.jpg?id=65453246&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Participants ranged from age 8 to over age 75.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">RAI Institute</small></p><p><span>Participants who had previously driven Spot (mainly robotics professionals) began with higher comfort across the board. But after the hands-on session, people with no prior exposure caught up to experienced drivers. 
This level of familiarity would be difficult to replicate with images and videos alone.</span></p><h3>Post-Interaction Results</h3><p>Post-interaction emotional data was overwhelmingly positive. “Excitement” was reported by 74 percent of participants, “happiness” by 50 percent, and only 12 percent reported “nervousness.” Over 55 percent rated the experience as “brilliant,” and 62 percent said they were very likely to recommend it to a friend.</p><p>The open-ended responses added a lot more color. The most commonly mentioned moments were locomotion and terrain adaptation (22 percent), including the way Spot navigated steps, tight spaces, and uneven ground, along with expressive tilt movements (22 percent), which people found surprisingly doglike or dancelike. A smaller set of responses (3 percent) described anthropomorphic reactions: worrying about “hurting” the robot or finding its behavior “silly” in a way that prompted a genuine emotional response.</p><p>When asked what tasks they’d want a robot to perform, responses shifted meaningfully. Before driving, answers clustered around domestic assistance and heavy or hazardous labor. After driving, domestic help remained prominent, but entertainment and play jumped from 7.5 percent to 19.4 percent. Companionship also appeared at 5 percent. References to hazardous or industrial tasks declined as people who had operated the robot began imagining it as a companion and playmate, not just a labor-replacement tool.</p><h2>Key Takeaways from the Robot Lab</h2><p>In the not-so-distant future, robots will become more common in public and private spaces. But whether that integration into daily life will be accepted by the general public remains to be seen. The standard approach to building acceptance has been passive exposure such as videos, exhibits, and articles. 
This study suggests that giving people agency and letting them actually operate a robot is a qualitatively different intervention.</p><p>Short, well-designed, hands-on encounters can raise comfort in precisely the social domains where ambivalence is highest and where future robotics deployment will likely take place. This hands-on experience shouldn’t be limited to tech conferences and museums, as it may offer more than just entertainment.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Children control a robot car at a tech booth with staff and jungle-themed backdrop" class="rm-shortcode" data-rm-shortcode-id="561f653ae87e1468c7ac31ac92d0fe00" data-rm-shortcode-name="rebelmouse-image" id="a32d5" loading="lazy" src="https://spectrum.ieee.org/media-library/children-control-a-robot-car-at-a-tech-booth-with-staff-and-jungle-themed-backdrop.jpg?id=65453264&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Fun for all ages!</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">RAI Institute</small></p><p><span>We consider the pop-up a success, but as with all experiments, we also learned a lot along the way. In addition to the increased comfort with robots, we found that the visitors to our space really enjoyed talking to the robotics experts who staffed the location. For many people, the opportunity to talk to a roboticist was as unique as the opportunity to drive a robot, and in the future, we are excited to continue to share our technical work as well as the experiences of our humans, in addition to our humanoids.</span></p><p>Does building a space where folks can experience robots firsthand have the potential to create meaningful, long-term attitude shifts? That remains an open question. 
But the effect’s direction and consistency across different situations, ages, and genders are hard to ignore.</p><div class="horizontal-rule"></div><p><a href="https://rai-inst.com/wp-content/uploads/2026/03/HRI26-Pop-Up_Encounters_with_Spot.pdf" target="_blank">Pop-Up Encounters With Spot: Shaping Public Perceptions of Robots Through Hands-On Experience</a>, by Hae Won Park, Georgia Van de Zande, Xiajie Zhang, Dawn Wendell, and Jessica Hodgins from the RAI Institute and the MIT Media Lab, was presented last month at the <a href="https://humanrobotinteraction.org/2026/" target="_blank">2026 ACM/IEEE International Conference on Human-Robot Interaction</a> in Edinburgh, Scotland.</p>]]></description><pubDate>Sun, 05 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/boston-dynamics-spot-interaction</guid><category>Boston-dynamics</category><category>Legged-robots</category><category>Spot-robot</category><dc:creator>Dawn Wendell</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/families-watch-a-colorful-robotic-dog-demo-at-a-robotics-and-ai-institute-exhibit.jpg?id=65453180&amp;width=980"></media:content></item><item><title>Video Friday: Digit Learns to Dance—Virtually Overnight</title><link>https://spectrum.ieee.org/video-humanoid-dancing</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/bipedal-teal-robot-practices-side-to-side-dance-move-with-arm-movement.gif?id=65460048&width=1200&height=800&coordinates=11%2C0%2C11%2C0"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://2026.ieee-icra.org/">ICRA 2026</a>: 1–5 June 2026, VIENNA</h5><h5><a href="https://roboticsconference.org/">RSS 2026</a>: 13–17 July 2026, SYDNEY</h5><h5><a href="https://mrs.fel.cvut.cz/summer-school-2026/">Summer School on Multi-Robot Systems</a>: 29 July–4 August 2026, PRAGUE</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="pc-n6aciusu"><em>Getting Digit to dance takes more than putting on some fancy shoes—our AI Team can teach Digit new whole-body control capabilities overnight. 
Using raw motion data from mocap, animation, and teleop methods, Digit gets new skills through sim-to-real reinforcement training.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="4477bcbaf1f5072afe88c2c0015eebd1" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Pc-n6ACIuSU?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.agilityrobotics.com/">Agility</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="sy2xyrmv44y"><em>We’ve created GEN-1, our latest milestone in scaling robot learning. We believe it to be the first general-purpose AI model that crosses a new performance threshold: mastery of simple physical tasks. It improves average success rates to 99% on tasks where previous models achieve 64%, completes tasks roughly 3x faster than state of the art, and requires only 1 hour of robot data for each of these results. 
GEN-1 unlocks commercial viability across a broad range of applications—and while it cannot solve all tasks today, it is a significant step towards our mission of creating generalist intelligence for the physical world.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bbbeecb0e15f3b78f50b3ebf230ecf33" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/SY2xyrmV44Y?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://generalistai.com/blog/apr-02-2026-GEN-1">Generalist</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="pn_bj5-qyw8"><em>Unitree open-sources UnifoLM-WBT-Dataset—high-quality real-world humanoid robot <a data-linked-post="2650273084" href="https://spectrum.ieee.org/mit-humanoid-robot-teleoperation-dynamic-tasks" target="_blank">whole-body teleoperation</a> (WBT) dataset for open environments. Publicly available since March 5, 2026, the dataset will continue to receive high-frequency rolling updates. 
It aims to establish the most comprehensive real-world humanoid robot dataset in terms of scenario coverage, task complexity, and manipulation diversity.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bd19da6e3dfeb2ede20007b534d1b9a6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/pN_bj5-QyW8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://huggingface.co/collections/unitreerobotics/unifolm-wbt-dataset">Hugging Face</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="79mr-_-a9js"><em>Autonomous mobile robots operating in human-shared indoor environments often require paths that reflect human spatial intentions, such as avoiding interference with pedestrian flow or maintaining comfortable clearance. This paper presents MRReP, a Mixed Reality-based interface that enables users to draw a Hand-drawn Reference Path (HRP) directly on the physical floor using hand gestures.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="783457e452248043a5ec6e2898ae5289" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/79mR-_-a9js?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://mertcookimg.github.io/mrrep/">MRReP</a> ]</p><p>Thanks, Masato!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="97qialc5hnm"><em>Eye contact, even momentarily between strangers, plays a pivotal role in fostering human connection, promoting happiness, and enhancing belonging. 
Through autonomous navigation and adaptive mirror control, Mirrorbot facilitates serendipitous, nonverbal interactions by dynamically transitioning reflections from self-focused to mutual recognition, sparking eye contact, shared awareness, and playful engagement.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="232f93e3a45a2e11d81366bb7ed95286" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/97qIaLC5hNM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://arl.human.cornell.edu/research-MirrorBot.html">ARL</a> ] via [ <a href="https://news.cornell.edu/stories/2026/04/mirrorbot-fostering-human-connection">Cornell University</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="jya06ffonyg"><em>Experience PAL Robotics’ new teleoperation system for TIAGo Pro, the AI-ready mobile manipulator designed for advanced research. This real-time VR teleoperation setup allows precise control of TIAGo Pro’s dual arms in Cartesian space, ideal for remote manipulation, AI data collection, and robot learning.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="86699af54f2bfd064590b0cd59aa3f8c" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/jya06FFONyg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://pal-robotics.com/robot/tiago-pro/">PAL Robotics</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="t52sq8gk5ks">Utter brilliance from Robust AI. 
No notes.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="71e7d47e220a5b61b914c1491f1df3dc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/T52SQ8Gk5Ks?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.robust.ai/">Robust AI</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="w8lqu8dkvp4"><em>Come along with our Senior Test Engineer, Nick L., as he takes us on a tour of the <a data-linked-post="2650277831" href="https://spectrum.ieee.org/qa-irobot-roomba-i7" target="_blank">Home Test Labs</a> inside the iRobot HQ.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="56a753f2b7e0640f199e35246a22843f" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/W8lQU8dKvP4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.irobot.com/en_US/our-story.html">iRobot</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="gjukjrwjpxg"><em>By automating the final “magic 5%” of production—the precise trimming of swim goggles’ silicone gaskets based on individual face scans—UR cobots allow THEMAGIC5 to deliver affordable, custom-fit goggles, enabling the company to scale from a Kickstarter sensation to selling over 400,000 goggles worldwide.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="76ebeda03bf930b9cd576a8e870f8dad" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" 
src="https://www.youtube.com/embed/GJukJRWjPxg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.universal-robots.com/case-stories/non-stop-robot-precision-for-7-years-cobots-deliver-the-last-magic-5-in-swim-goggle-production/">Universal Robots</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="x16ht1erjhk"><em>Sanctuary AI has once again demonstrated its industry-leading approach to training dexterous manipulation policies for its advanced hydraulic hands. In this video, their proprietary hydraulic hand autonomously manipulates a lettered cube, continuously reorienting it to match a specified goal (displayed in the bottom-left corner of the video).</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ad1d77f7ce4f331c7e74b0b779ff6cae" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/X16Ht1ERjHk?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.sanctuary.ai/">Sanctuary AI</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="r3toz2pgppy"><em>China’s Yuxing 3-06 commercial experimental satellite, the first of its kind to be equipped with a flexible robotic arm, has recently completed an in-orbit refueling test and verification of key technologies. 
The test paves the way for Yuxing 3-06, dubbed a “space refueling station,” to refuel other satellites in orbit, manage space debris, and provide other in-orbit services.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="eaf9d2765bb1e0ebff60f038ccba42fd" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/R3TOZ2PgPPY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://mp.weixin.qq.com/s/1c-9aNwuXv_p-VhojMkwwA">Sanyuan Aerospace</a> ] via [ <a href="https://spacenews.com/chinese-startup-tests-flexible-robotic-arm-in-space-for-on-orbit-servicing/">Space News</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="z4poalprrhe"><em>This is a demonstration of natural walking, whole-body teleoperation, and motion tracking with our custom-built humanoid robot. The control policies are trained using large-scale parallel reinforcement learning (RL). 
By deploying robust policies learned in a physics simulator onto the real hardware, we achieve dynamic and stable whole-body motions.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="703bacdcb0167fb3aa9bfe36e1da07ac" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/z4POaLPRRhE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://robotics.tokyo/">Tokyo Robotics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="5olcwku7l9u"><em>Faced with aging railway infrastructure, a shrinking workforce and rising construction costs, Japan Railway West asked construction innovator Serendix to replace an old wooden building at its Hatsushima railway station using its 3D printing technology. An ABB robot enabled the company to assemble the new building in a single night ready for the first train service the next day.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="031eec5b200f86cdad72129d9a002cfc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/5olcWkU7l9U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.abb.com/global/en/news/134689">ABB</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="1k1phiqcfty"><em>Humanoid, SAP, and Martur Fompak team up to test humanoid robots in automotive manufacturing logistics. 
This joint proof of concept explores how robots can streamline operations, improve efficiency, and shape the future of smart factories.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="cc54aa14687108db3bc231b8cc456fea" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/1K1phiQCftY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://thehumanoid.ai/">Humanoid</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="oqglmefwbt8">This MIT Robotics Seminar is from Dario Floreano at EPFL, on “Avian Inspired Drones.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="7013e7fe97df52eb328681b647c9fddc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/oqglMEFWBt8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://robotics.mit.edu/robotics-seminar/">MIT</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="etk5es0jvm4">This MIT Robotics Seminar is from Ken Goldberg at UC Berkeley: “Good Old-Fashioned Engineering Can Close the 100,000 Year ‘Data Gap’ in Robotics.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="710bc514cbab6092dc5f439cf03127c6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/EtK5es0jVM4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://robotics.mit.edu/robotics-seminar/">MIT</a> ]</p><div 
class="horizontal-rule"></div>]]></description><pubDate>Fri, 03 Apr 2026 16:30:01 +0000</pubDate><guid>https://spectrum.ieee.org/video-humanoid-dancing</guid><category>Humanoid-robots</category><category>Video-friday</category><category>Robot-ai</category><category>Human-robot-interaction</category><category>Teleoperation</category><category>Industrial-robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/gif" url="https://spectrum.ieee.org/media-library/bipedal-teal-robot-practices-side-to-side-dance-move-with-arm-movement.gif?id=65460048&amp;width=980"></media:content></item><item><title>ENIAC’s Architects Wove Stories Through Computing</title><link>https://spectrum.ieee.org/eniac-80th-anniversary-weaving</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/close-up-black-and-white-1940-s-image-of-a-woman-holding-a-metallic-brick-like-controller-with-large-knobs.jpg?id=65453792&width=1200&height=800&coordinates=0%2C83%2C0%2C84"/><br/><br/><p><em><em>This year marks the </em></em><a href="https://spectrum.ieee.org/eniac-80-ieee-milestone" target="_self"><em><em>80th anniversary of ENIAC</em></em></a><em><em>, the first general-purpose digital computer. The computer was built during World War II to speed up ballistics calculations, but its contributions to computing extend well beyond military applications.</em></em></p><div class="rm-embed embed-media"><iframe height="110px" id="noa-web-audio-player" src="https://embed-player.newsoveraudio.com/v4?key=q5m19e&id=https://spectrum.ieee.org/eniac-80th-anniversary-weaving&bgColor=F5F5F5&color=1b1b1c&playColor=1b1b1c&progressBgColor=F5F5F5&progressBorderColor=bdbbbb&titleColor=1b1b1c&timeColor=1b1b1c&speedColor=1b1b1c&noaLinkColor=556B7D&noaLinkHighlightColor=FF4B00&feedbackButton=true" style="border: none" width="100%"></iframe></div><p><em><em>Two of ENIAC’s key architects—John W. Mauchly, its co-inventor, and Kathleen “Kay” McNulty, one of the <a href="https://spectrum.ieee.org/eniac-woman-programmers" target="_blank">six original programmers</a>—married a few years after its completion and raised seven children together. Mauchly and McNulty’s grandchild Naomi Most </em></em><a href="https://youtu.be/XYEVmqGhVxo?si=fseDLKFz1W8meWR6&t=4515" rel="noopener noreferrer" target="_blank"><em><em>delivered a talk</em></em></a><em><em> as part of a celebration in honor of ENIAC’s anniversary on 15 February, which was held online and in-person at the American Helicopter Museum in West Chester, Pa. 
The following is adapted from that presentation.</em></em></p><p class="ieee-inbody-related">RELATED: <a href="https://spectrum.ieee.org/eniac-80-ieee-milestone" target="_blank">ENIAC, the First General-Purpose Digital Computer, Turns 80</a></p><p>There was a library at my grandparents’ farmhouse that felt like it went on forever. September light through the windows, beech leaves rustling outside on the stone porch, the sounds of cousins and aunts and uncles somewhere in the house. And in the corner of that library, an IBM personal computer.</p><p>When I spent summers there as a child, I didn’t yet know that the computer was closely tied to my family’s story.</p><p>My grandparents are known for their contributions to creating the Electronic Numerical Integrator and Computer, or ENIAC. But both were interested in more than just crunching numbers: My grandfather wanted to predict the weather. My grandmother wanted to be a good storyteller. </p><p>In Irish, the first language my grandmother Kathleen “Kay” McNulty ever spoke, a word existed to describe both of these impulses: <em><em>ríomh</em></em>.</p><p>I began to learn the Irish language myself five years ago, and I was struck by how certain words and phrases had multiple meanings. According to renowned Irish cultural historian Manchán Magan—from whom I took lessons—the word <em><em>ríomh</em></em> has at different times been used to mean to compute, but also <a href="https://www.making.ie/stories/irish-words-weaving" rel="noopener noreferrer" target="_blank">to weave, to narrate, or to compose a poem</a>. That one word can tell the story of ENIAC, a machine with wires woven like thread that was built to compute, make predictions, and search for a signal in the noise. 
</p><h2>John Mauchly’s Weather-Prediction Ambitions</h2><p>Before working on ENIAC, John Mauchly <a href="https://fi.edu/en/news/case-files-john-w-mauchly-and-j-presper-eckert" rel="noopener noreferrer" target="_blank">spent years collecting rainfall data</a> across the United States. His favorite pastime was meteorology, and he wanted to find patterns in storm systems to predict the weather.</p><p>The Army, however, funded ENIAC to make simpler predictions: calculating ballistic trajectory tables. Start there, co-inventors J. Presper Eckert and Mauchly realized, and perhaps the weather would soon be computable.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Black and white 1960s image of two white men in suits looking at a wall of computer controls." class="rm-shortcode" data-rm-shortcode-id="7872d50df109149c936e400909defc38" data-rm-shortcode-name="rebelmouse-image" id="75108" loading="lazy" src="https://spectrum.ieee.org/media-library/black-and-white-1960s-image-of-two-white-men-in-suits-looking-at-a-wall-of-computer-controls.jpg?id=65428294&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Co-inventors John Mauchly [left] and J. Presper Eckert look at a portion of ENIAC on 25 November 1966. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Hulton Archive/Getty Images</small></p><p>Weather is a system unfolding through time, and a model of a storm is a story about how that system might unfold. There’s an old Irish saying related to this idea: <a href="https://daltai.com/is-maith-an-scealai-an-aimsir/" target="_blank"><em><em>Is maith an scéalaí an aimsir</em></em></a><em><em>.</em></em> Literally, “weather is a good storyteller.” But <em><em>aimsir</em></em> also means time. 
So the usual translation of this phrase into English becomes “time will tell.”</p><p>Mauchly wanted to <em><em>ríomh an aimsire</em></em>—to weave the weather into pattern, to compute the storm, to narrate the chaos. He realized that complex systems don’t reveal their full purpose at conception. They reveal it through <em><em>aimsir</em></em>—through weather, through time, through use.</p><h2>ENIAC’s First Programmers Were Weavers</h2><p>Kathleen “Kay” McNulty was born on 12 February 1921, in Creeslough, Ireland, on the night <a href="https://en.wikipedia.org/wiki/James_McNulty_(Irish_activist)" target="_blank">her father</a>—an IRA training officer—was arrested and imprisoned in Derry Gaol.</p><p>Family oral history holds that her people were weavers. She spoke only Irish until her family reached Philadelphia when she was 4 years old, entering American school the following year knowing virtually no English. She graduated in 1942 from Chestnut Hill College with a mathematics degree, was recruited to compute artillery firing tables by hand for the U.S. Army, and was then selected—along with <a href="https://spectrum.ieee.org/the-women-behind-eniac" target="_blank">five other women</a>—to program ENIAC.</p><p>They had no manual. They had only blueprints.</p><p>McNulty and her colleagues learned ENIAC and its quirks the way you learn a loom: by touch, by memory, by routing threads of electricity into patterns. They developed embodied knowledge the designers could only approximate. They could narrow a malfunction to a specific failed vacuum tube before any technician could locate it.</p><p>McNulty and Mauchly are also credited with conceiving the subroutine, the sequence of instructions that can be repeatedly recalled to perform a task, now essential in all programming. The subroutine was not in ENIAC’s blueprints, nor in the funding proposal. 
The concept emerged as highly determined people extended their imagination into the machine’s affordances.</p><p>The engineers designed the loom. Weavers discovered its true capabilities.</p><p>In 1950, four years after ENIAC was switched on, Mauchly’s dream was realized when the machine was used in the <a href="https://www.guinnessworldrecords.com/world-records/775520-first-computer-assisted-weather-forecast" target="_blank">world’s first computer-assisted weather forecast</a>. That was made possible after Klara von Neumann and Nick Metropolis reassembled and upgraded the ENIAC with a small amount of digital program memory. The programmers who transformed the math into operational code for the ENIAC were Norma Gilbarg, Ellen-Kristine Eliassen, and Margaret Smagorinsky. Their names are not as well-known as they should be.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Black and white 1940s image of three women operating a differential analyser in a basement." class="rm-shortcode" data-rm-shortcode-id="298168a77d38fd343eeb7d4bbfc219a7" data-rm-shortcode-name="rebelmouse-image" id="aacec" loading="lazy" src="https://spectrum.ieee.org/media-library/black-and-white-1940s-image-of-three-women-operating-a-differential-analyser-in-a-basement.jpg?id=65453828&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Before programming ENIAC, Kay McNulty [left] was recruited by the U.S. Army to compute artillery firing tables. Here, she and two other women, Alyse Snyder [center] and Sis Stump, operate a mechanical analog computer designed to solve differential equations in the basement of the University of Pennsylvania’s Moore School of Electrical Engineering.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">University of Pennsylvania</small></p><h2>Kay McNulty, Family Storyteller</h2><p>Kay married John Mauchly in 1948, describing him as “the greatest delight of my life. 
He was so intelligent and had so many ideas.... He was not only lovable, he was loving.” She spent the rest of her life ensuring he, Eckert, and the ENIAC programmers would be recognized.</p><p>When she died in 2006, I came to her funeral in shock, not fully knowing what I’d lost. As she drifted away, it was said, she had been reciting her prayers in Irish. Word of this traveled quickly to Creeslough, in County Donegal, and awaited me when I visited to honor her memory with the <a href="https://www.youtube.com/watch?v=zbkk2RJMW9g" target="_blank">dedication of a plaque</a> right there in the center of town.</p><p>In <a href="https://mathshistory.st-andrews.ac.uk/Extras/Mauchly_Antonelli_story" target="_blank">her own memoir</a>, she wrote: “If I am remembered at all, I would like to be remembered as my family storyteller.”</p><p>In Irish, the word for computer is <em><em>ríomhaire</em></em>. One who ríomhs. One who weaves, computes, and tells. My grandfather wanted to tell the story of the weather through computing. My grandmother wanted to be remembered as a storyteller. The language of her childhood already had a word that contained both of those ambitions.</p><h2>Computers as Narrative Engines</h2><p>When it was built, ENIAC looked like the back room of a textile production house. Panels. Switchboards. A room full of wires. Thread.</p><p>Thread does not tell you what it will become. We tend to think of computing as calculation—discrete and deterministic. But a model is a structured story about how something behaves.</p><p>Weather models, ballistic tables, economic forecasts, neural networks: These are all narrative engines, systems that take raw inputs and produce accounts of how the world might unfold. In complex systems, when parts are woven together through use, new structures arise that no one specified in advance.</p><p>Like ENIAC, the machines we are building now—the large models, the autonomous systems—are not merely calculators. 
They are looms.</p><p>Their most important properties will not be specified in advance. They will emerge through use, through the people who learn how to weave with them.</p><p>Through imagination.</p><p>Through <em><em>aimsir</em></em>.</p>]]></description><pubDate>Fri, 03 Apr 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/eniac-80th-anniversary-weaving</guid><category>Eniac</category><category>Weather-prediction</category><category>Computer-history</category><category>Ireland</category><dc:creator>Naomi Most</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/close-up-black-and-white-1940-s-image-of-a-woman-holding-a-metallic-brick-like-controller-with-large-knobs.jpg?id=65453792&amp;width=980"></media:content></item><item><title>Young Professional’s AI Tool Spots Mental Health Conditions</title><link>https://spectrum.ieee.org/abhishek-appaji-ai-diagnostic-tool</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/an-adult-indian-man-using-a-machine-to-capture-images-of-an-adult-womans-retina.jpg?id=65452299&width=1200&height=800&coordinates=156%2C0%2C156%2C0"/><br/><br/><p><a href="https://www.abhishekappaji.com/" rel="noopener noreferrer" target="_blank">Abhishek Appaji</a> has committed his career to bringing lifesaving technology to underresourced communities. The IEEE senior member weaves together artificial intelligence, biomedical engineering, deep learning, and neuroscience to make doctors’ jobs easier and to improve patient outcomes.</p><p>“The intersection of these fields is where the most impactful breakthroughs in diagnostic precision occur,” says Appaji, an associate professor of medical electronics engineering at the <a href="https://www.bmsce.ac.in/" target="_blank">B.M.S. College of Engineering</a>, in Bengaluru, India.</p><h3>Abhishek Appaji</h3><br/><p><strong>Employer </strong></p><p><strong></strong>B.M.S. College of Engineering, in Bengaluru, India</p><p><strong>Job title</strong></p><p><strong></strong>Associate professor of medical electronics engineering</p><p><strong>Member grade </strong></p><p><strong></strong>IEEE senior member</p><p><strong>Alma maters </strong></p><p><strong></strong>B.M.S. 
College of Engineering; University Visvesvaraya College of Engineering, in Bengaluru; Maastricht University, in the Netherlands</p><p>Many of his inventions have been deployed in remote areas of India, providing physicians with quality diagnostic tools, including an AI-powered machine that can scan retinas to detect medical conditions and a smart bed that continuously monitors a patient’s vital signs.</p><p>An active volunteer with the <a href="https://yp.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Young Professionals</a> <a href="https://yp.ieeebangalore.org/" rel="noopener noreferrer" target="_blank">Bangalore Section</a>, he has launched professional networking events, technology workshops, a mentorship program, and other initiatives.</p><p>For his “contributions to accessible AI-driven health care solutions and leadership in empowering young professionals,” Appaji is the recipient of this year’s <a href="https://corporate-awards.ieee.org/award/ieee-theodore-w-hissey-outstanding-young-professional-award/" rel="noopener noreferrer" target="_blank">IEEE Theodore W. Hissey Outstanding Young Professional Award</a>. The honor is sponsored by the <a href="https://ieeephotonics.org/" rel="noopener noreferrer" target="_blank">IEEE Photonics</a> and <a href="https://ieee-pes.org/" rel="noopener noreferrer" target="_blank">Power & Energy</a> societies as well as IEEE Young Professionals. The award is scheduled to be presented this month during the <a href="https://corporate-awards.ieee.org/event/laureate-forum-honors-ceremony-gala/" rel="noopener noreferrer" target="_blank">IEEE Honors Ceremony</a> in New York City.</p><p>“This award represents a significant milestone in my career,” Appaji says. 
“It validates my core belief that our success as engineers is not solely measured by research outcomes or publications but by the tangible impact we have on lives through accessible technology and the quality of the next generation of leaders we empower.”</p><h2>Developing a blood glucose measurement device</h2><p>After earning a bachelor’s degree in engineering from B.M.S. in 2010, he joined the school as a lecturer in its medical electronics engineering department. At the same time, he pursued a master’s degree in bioinformatics at the <a href="https://uvce.ac.in/" rel="noopener noreferrer" target="_blank">University Visvesvaraya College of Engineering</a>, also in Bengaluru. He graduated in 2013 and continued to teach at B.M.S.C.E.</p><p>Four years later, Appaji signed up for the <a href="https://openlearning.mit.edu/courses-programs/mit-bootcamps" rel="noopener noreferrer" target="_blank">MIT Global Entrepreneurship Bootcamp</a>, a two-week intensive hybrid program that includes webinars, online courses, and a five-day stay at MIT. It’s designed to give teams of aspiring entrepreneurs, innovators, and early-stage founders the structured mindset, tools, and frameworks they need to succeed.</p><p>Appaji says he discovered the program while researching opportunities in innovation.</p><p>“I had the technical expertise, but I needed a structured framework to transition my research from the laboratory to the market,” he says.</p><p>During the MIT boot camp, he and a team of four other participants were tasked with approaching a complex health care challenge. They developed a noninvasive blood glucose measurement device to manage gestational diabetes—a condition that causes high blood sugar and insulin resistance during pregnancy. 
When the program ended, Appaji and two of his Australia-based teammates continued their collaboration by founding <a href="https://au.linkedin.com/company/glucotekinc" rel="noopener noreferrer" target="_blank">Glucotek</a> in Brisbane, Australia.</p><p>Inspired to continue his research in health care technology, Appaji pursued a doctorate in mental health and neurosciences at <a href="https://www.maastrichtuniversity.nl/" rel="noopener noreferrer" target="_blank">Maastricht University</a>, in the Netherlands.</p><p>His <a href="https://cris.maastrichtuniversity.nl/en/publications/retinal-vascular-features-as-a-biomarker-for-psychiatric-disorder/" rel="noopener noreferrer" target="_blank">thesis</a> focused on computational methods to identify retinal vascular patterns.</p><p class="pull-quote">“The patterns we analyze—including the curvature of the vessels, the angles at which they branch out, and their dimensions—reveal the health of the microvascular system,” he says. “With conditions like schizophrenia and bipolar disorder, microvascular changes mirror neurovascular changes in the brain.”</p><p><span>“My journey has shown me that IEEE is much more than a professional society; it is a global platform that allows me to collaborate with a diverse network of experts to solve local humanitarian challenges.”</span></p><p>Examining and measuring the retinal vascular system offers physicians a noninvasive way to examine neural changes, which can be biomarkers for psychiatric illnesses, he says.</p><p>To bring his idea to life, he collaborated with an ophthalmologist, a psychiatrist, and colleagues from his engineering school to develop a screening device. They also created and trained the AI models that analyze retinal images.</p><p>Ideas from his thesis led to the creation of the Smart Eye Kiosk, an AI-powered tool that scans the network of small blood vessels that supply the inner retina. The tool monitors stress levels and mental health. 
It also screens for basic eye diseases such as diabetic retinopathy, the damage to retinal blood vessels caused by high blood sugar.</p><p>Retinal images also can reveal physiological changes in the brain associated with psychiatric disorders such as schizophrenia and bipolar disorder, Appaji says. The kiosk uses AI models to analyze measurements of the vasculature network, such as vessel thickness, which can be biomarkers for psychiatric conditions. Since mental illnesses can be linked to genetics, relatives of patients with schizophrenia and bipolar disorder were also invited to participate in a study funded by the <a href="https://dst.gov.in/cognitive-science-research-initiative-csri" target="_blank">Cognitive Science Research Initiative</a> of India’s Department of Science & Technology. The clinical data from this study can pave the way for earlier, more accurate diagnoses.</p><p>“The biological basis for this is fascinating,” Appaji says. “The retina is the only place in the human body where the central nervous system and the vascular system can be visualized directly and noninvasively. Anatomically, the retina is an extension of the posterior part of the brain. Therefore, physiological changes in the brain are often reflected in the eyes.”</p><p>This kiosk was developed in collaboration with <a href="https://www.ttsh.com.sg/" target="_blank">Tan Tock Seng Hospital</a> and <a href="https://www.ntu.edu.sg/" target="_blank">Nanyang Technological University</a>, with funding from the <a href="https://www.chi.sg/platformprogrammes/ourfundingprogrammes/ntfhip/" rel="noopener noreferrer" target="_blank">Ng Teng Fong Healthcare Innovation Program</a>.</p><p>He earned his Ph.D. in 2020 from Maastricht, and he received the Best Thesis Award from the university’s <a href="https://www.maastrichtuniversity.nl/research/mental-health-and-neuroscience-research-institute" rel="noopener noreferrer" target="_blank">Mental Health and Neuroscience Research Institute</a>. 
Appaji credits his time at the school for his multidisciplinary approach to developing medical devices.</p><p>“Having the perspectives of mentors from diverse fields was essential to help me move my research beyond theory into a data-driven diagnostic tool,” he says.</p><p>He was then named institutional coordinator of R&D at B.M.S. and later was promoted to be its head.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="An adult Indian man looking at a rectangular device in his hand, labeled “dozee”." class="rm-shortcode" data-rm-shortcode-id="bc22f80982f03961c7b5f5fd684014f2" data-rm-shortcode-name="rebelmouse-image" id="40db1" loading="lazy" src="https://spectrum.ieee.org/media-library/an-adult-indian-man-looking-at-a-rectangular-device-in-his-hand-labeled-u201cdozee-u201d.jpg?id=65452303&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Abhishek Appaji working on a smart bed sensor that continuously monitors a patient’s vital signs without the use of wires or wearable sensors.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Abhishek Appaji</small></p><h2>A wireless smart bed to monitor vital signs</h2><p>Appaji continues to develop technologies for patients who need them most. “I feel a deep need to bridge this gap and ensure innovations have a tangible impact on society,” he says. In addition to the Smart Eye Kiosk, he improved the performance of the sensors of the smart beds that continuously monitor a patient’s vital signs without the use of wires or wearable sensors. The beds help hospital staff check on their patients in a noninvasive way.</p><p>The project was done in collaboration with health AI company <a href="https://www.dozeehealth.ai/" target="_blank">Dozee (Turtle Shell Technologies)</a> in Bengaluru. 
The system measures mechanical microvibrations produced by the body in response to the ejection of blood into the aorta, which occurs with each heartbeat. A thin, industrial-grade sensor sheet is placed underneath the mattress. Additional funding is being provided by India’s <a href="https://dst.gov.in" rel="noopener noreferrer" target="_blank">Department of Science and Technology</a>.</p><p>“These sensors are incredibly sensitive,” Appaji says. “They pick up minute mechanical tremors through the mattress material.”</p><p>The sensors detect the force of the patient’s heartbeat and the expansion and contraction of their chest during respiration. The vibrations are converted into electrical signals and analyzed using deep learning algorithms developed by Appaji and his team at the university in collaboration with Dozee.</p><p>The technology is used in more than 200 hospitals throughout India and in thousands of households, he says.</p><h2>Mentoring budding entrepreneurs </h2><p>Appaji is also executive director of the <a href="https://bigfoundation.org.in/" rel="noopener noreferrer" target="_blank">BMSreenivasiah Innovators Guild Foundation</a>, dedicated to nurturing entrepreneurial talent among students and faculty across the BMS group of Institutions. A not-for-profit company promoted by the BMS Education Trust, BIG Foundation provides a structured ecosystem for innovation, incubation, and startup growth.</p><p>There, Appaji mentors budding entrepreneurs, offering advice on business plans, product pitches, marketing strategies, and licensing. 
Participants are students and faculty members.</p><p>The foundation has incubated more than 10 ventures, according to Appaji.</p><p>“The majority are centered on health care applications,” he says, “and have successfully secured backing from investors and seed funds.”</p><h2>Taking IEEE’s mission to heart</h2><p>Appaji was introduced to IEEE as an undergraduate when one of his professors encouraged him to volunteer for a conference sponsored by the <a href="https://www.embs.org/" rel="noopener noreferrer" target="_blank">IEEE Engineering in Medicine and Biology Society</a>. He transcribed the seminars for session chairs, assisted with managing the talks, and helped answer attendees’ questions.</p><p>“That experience was transformative,” he recalls. “I was amazed to find myself in the same room with the speakers and scientists who had authored the very textbooks I was studying.</p><p>“It was then that I realized IEEE is far more than just technology and volunteering; it is a global platform for high-level networking with world-class scientists and technologists.”</p><p>Appaji has served in several IEEE leadership positions, including 2018–2019 chair of the Young Professionals Bangalore Section. 
He is now treasurer of the <a href="https://ieee-edusociety.org/home" rel="noopener noreferrer" target="_blank">IEEE Education Society</a>, chair of the <a href="https://ieeecsbangalore.org/" rel="noopener noreferrer" target="_blank">IEEE Computer Society Bangalore Chapter</a>, a member of the steering committee of <a href="https://ieee-dataport.org/" rel="noopener noreferrer" target="_blank">IEEE DataPort</a>, and a member of the IEEE <a href="https://www.ieee.org/communities/geographic-activities" rel="noopener noreferrer" target="_blank">Member and Geographic Activities</a> and <a href="https://ea.ieee.org/ea-programs" rel="noopener noreferrer" target="_blank">IEEE Educational Activities</a> boards.</p><p>“What motivates me to remain active within IEEE is the profound alignment between my personal goals and the organizational mission of advancing technology for the benefit of humanity,” he says. “My journey has shown me that IEEE is much more than a professional society; it is a global platform that allows me to collaborate with a diverse network of experts to solve local humanitarian challenges.”</p><p>The organization has helped fund some of Appaji’s lifesaving work. During the <a href="https://spectrum.ieee.org/tag/covid-19" target="_self">COVID-19 pandemic</a>, he received a grant from the <a href="https://ieeeht.org/" rel="noopener noreferrer" target="_blank">IEEE Humanitarian Technologies Board </a>and <a href="https://www.ieeer10.org/" rel="noopener noreferrer" target="_blank">Region 10</a> to develop <a href="https://spectrum.ieee.org/ieee-sections-receive-grants-for-their-innovative-ways-of-helping-to-fight-the-coronavirus" target="_self">3D-printed protective equipment</a> for people in Bengaluru’s underserved communities. The virus spread quickly in the high-density areas, where social distancing was nearly impossible. 
The kits, which included a door opener to avoid high-touch surfaces and an elbow-operated soap dispenser, were sent to nearly 500 households.</p><p>“This work remains one of my most meaningful contributions to humanitarian technology,” Appaji says, “demonstrating how engineering can be rapidly deployed to protect vulnerable populations during a global crisis.”</p><p>He advises younger IEEE members: “Say yes to taking on roles of responsibility. Don’t wait for a formal title to lead; instead, start by volunteering to do small, manageable tasks within your local chapter or section.”</p><p>“The networking opportunities and leadership skills you gain through these early responsibilities will shape your professional career far more than any textbook ever could.”</p>]]></description><pubDate>Thu, 02 Apr 2026 18:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/abhishek-appaji-ai-diagnostic-tool</guid><category>Ieee-member-news</category><category>Health-care</category><category>Biomedical</category><category>Ieee-young-professionals</category><category>Ieee-awards</category><category>Type-ti</category><dc:creator>Amanda Davis</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/an-adult-indian-man-using-a-machine-to-capture-images-of-an-adult-womans-retina.jpg?id=65452299&amp;width=980"></media:content></item><item><title>What Exoskeletons Learned From One Relentless User</title><link>https://spectrum.ieee.org/exoskeleton-user-experience</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-man-wearing-a-full-body-robotic-exoskeleton-standing-on-a-city-sidewalk.png?id=65426945&width=1200&height=800&coordinates=0%2C667%2C0%2C667"/><br/><br/><p><strong>It’s easy to assume</strong> that Robert Woo was defined by the accident that took away his ability to walk.</p><p>Certainly, the day of his accident—14 December 2007—was a turning point. Woo, an architect working on the new Goldman Sachs headquarters in New York City, hadn’t attended his company’s holiday party the night before, and that morning he was the only one in the trailer that served as the construction-site office. He was bent over his laptop when, 30 floors above, a <a href="https://www.nydailynews.com/2007/12/14/west-side-crane-accident-injures-1-at-goldman-sachs-site/" target="_blank">crane’s nylon sling gave way</a>, sending about 6 tonnes of steel plummeting toward the trailer. The roof collapsed, folding Woo in half and smashing his face into his laptop, which was driven through his desk.</p><p>“I was conscious throughout the whole ordeal,” Woo remembers. “It was an out-of-body experience. I could hear myself screaming in pain. I could hear the voices of the rescue workers. I heard one firefighter say, ‘Don’t worry, we’re getting to you.’” The rescue workers hauled him out of the rubble and got him to the emergency room in 18 minutes flat; with one lung crushed and the other punctured, he wouldn’t have lasted much longer. In those frantic early moments, a doctor told him that he might be paralyzed from the neck down for the rest of his life. He remembers asking the doctors to let him die.</p><p>Woo simply couldn’t imagine how a paralyzed version of himself could continue living his life. Then 39 years old, he worked long hours and jetted around the world to supervise the construction of skyscrapers. More important, he had two young boys, ages 6 months and 2 years. 
“I couldn’t see having a life while being paralyzed from the neck down, not being able to teach my boys how to play ball,” he recalls. “What kind of life would that be?”</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="2986541a87f62bd11465a0fd835782ed" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/UNddtkBGuAs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span> <small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo walks inside the Wandercraft facility in New York City using the company’s latest self-balancing exoskeleton. </small> <small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Nicole Millman </small> </p><p><span>But in a Manhattan showroom last May, Woo showed that he’s not defined by that accident, which left him paralyzed from the chest down, but with the use of his arms. Instead, he has defined himself by how he has responded to his injury, and the new life he built after it.</span></p><p>In the showroom, Woo transferred himself from his wheelchair to an 80-kilogram (176-pound) exoskeleton suit. After strapping himself in, he manipulated a joystick in his left hand to rise from a chair and then proceeded to walk across the room on robotic legs. Woo’s steps were short but smooth, and he clanked as he walked.</p><p>This exoskeleton, from the French company <a href="https://en.wandercraft.eu/" target="_blank">Wandercraft</a>, is one of the first to let the user walk without arm braces or crutches, which most other models require to stabilize the user’s upper body. The battery-powered exoskeleton took care of both propulsion and balance; Woo just had to steer. 
The bulky apparatus had a backplate that extended above Woo’s head, a large padded collar, armrests, motorized legs, and footplates. Walking across the room, he appeared to be half man, half machine. On the other side of the showroom’s plate-glass window, on Park Avenue, a kid walking by with his family came to a dead halt on the sidewalk, staring with awe at the cyborg inside.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Person seated wearing a full lower-body robotic exoskeleton for mobility assistance" class="rm-shortcode" data-rm-shortcode-id="eeace6a9e987149ce383ccec6937a1b8" data-rm-shortcode-name="rebelmouse-image" id="73d05" loading="lazy" src="https://spectrum.ieee.org/media-library/person-seated-wearing-a-full-lower-body-robotic-exoskeleton-for-mobility-assistance.jpg?id=65427288&width=980"/></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Close-up of a hand operating the joystick and controls on a powered wheelchair armrest" class="rm-shortcode" data-rm-shortcode-id="c5dd0b296623bb32a2eb37c88ac0b5f0" data-rm-shortcode-name="rebelmouse-image" id="2d73d" loading="lazy" src="https://spectrum.ieee.org/media-library/close-up-of-a-hand-operating-the-joystick-and-controls-on-a-powered-wheelchair-armrest.jpg?id=65427286&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo prepares to walk in a Wandercraft exoskeleton; the device’s controller enables him to stand up, initiate walk mode, and choose a direction. 
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Bryan Anselm/Redux </small></p><p>The amazement on the boy’s face was reminiscent of Woo’s young sons’ reaction when they saw a photo of Woo trying out an early exoskeleton, back in 2011. “Their first comment was, ‘Oh, Daddy’s in an Iron Man suit,’” he remembers. Then they asked, “When are you going to start flying?” To which Woo replied, “Well, I’ve got to learn how to walk first.”</p><p>The title of exoskeleton superhero suits Woo. He’s as soft-spoken and mild-mannered as Clark Kent, with a smile that lights up his face. Yet the strength underneath is undeniable; he has built a new life out of sheer determination. </p><p>For 15 years, he’s been a test pilot, early adopter, and clinical-study subject for the most prominent exoskeletons under development around the world. He placed the first order for an exoskeleton that was approved for home use, and he learned what it was like to be Iron Man around the house. Throughout it all, he has given the companies detailed feedback drawn from both his architectural design skills and his user experience. He has shaped the technology from inside of it.</p><p><a href="https://people.njit.edu/profile/pal" target="_blank">Saikat Pal</a>, a researcher at the New Jersey Institute of Technology, in Newark, met Woo during clinical trials for Wandercraft’s first model. Like so many others in the field, Pal quickly recognized that Woo brought a lot to the table. “He’s a super-mega user of exoskeletons: very enthusiastic, very athletic,” Pal says. “He’s the perfect subject.”</p><p>By pushing the technology forward, Woo has paved the way for thousands of people with spinal cord injuries as well as other forms of paralysis, who are now benefiting from exoskeletons in rehab clinics and in their homes. 
“Our bionics program at Mount Sinai started with Robert Woo,” says <a href="https://profiles.mountsinai.org/angela-riccobonno" target="_blank">Angela Riccobono</a>, the director of rehabilitation neuropsychology at <a href="https://www.mountsinai.org/" target="_blank">Mount Sinai Hospital</a>, in New York City, where Woo became an outpatient after his accident. “We have a plaque that dedicates our bionics program to him.”</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="3b08ced9c1ebb53070cf467341ccabd1" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/6kIvBtYeYUs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span><small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo walks down a sidewalk in New York City in 2015 using a ReWalk exoskeleton, one of the first exoskeletons designed for use outside the rehab clinic. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Eliza Strickland</small></p><p>It’s a fitting tribute. Woo’s post-accident life has been marked by victories, frustrations, deep love, and one devastating loss, and yet he has continued to devote himself to bionics. And while his vision for exoskeletons hasn’t changed, experience has reshaped what he expects from them in his lifetime.</p><h2>Rebuilding a Life After His Spinal Cord Injury </h2><p>Long before Woo ever stood up in a robotic suit, he had developed the habits of mind that would later make him an unusually perceptive test pilot.</p><p>Woo has always been a builder, a tinkerer, a fixer. Growing up in the suburbs of Toronto, he put together model kits of battleships and airplanes without looking at the instructions. “I just put things together the way I thought it would work out,” he says.
He trained as an architect and in 2000 joined the Toronto-based firm <a href="https://www.adamson-associates.com/" target="_blank">Adamson Associates Architects</a>, a job that soon had him traveling to Europe and Asia to work on corporate high-rises.</p><p>Adamson specializes in taking the stunning designs of visionary architects and turning them into practical buildings with elevators and bathrooms. “Most of the design architects don’t really have a clue about how to build buildings,” Woo says. He liked solving those problems; he liked reconciling beautiful designs with the stubborn reality of construction. That talent for understanding a structure from the inside and spotting the flaws would prove essential later.</p><p>After his accident, Woo had two major surgeries to stabilize his crushed spine, which required surgeons to cut through muscles and nerves that connected to his arms. For two months, he couldn’t feel or move his arms; there was a chance he never would again. Only when sensation began creeping back into his fingertips did he allow himself to imagine a different future. If he wasn’t paralyzed from the neck down, he thought, maybe more of his body could be brought back online. “My focus was to walk again,” he says.</p><p>Woo was discharged in March 2008 and went back to his New York City apartment. He was still bedridden and required around-the-clock care. He doesn’t much like to talk about this next part: By May, his then-wife had moved back to Canada and filed for divorce, asking for full custody of their two children. Woo remembers her saying, “I can’t look after three babies, and one of them for life.”</p><p>It was a dark time. Riccobono of Mount Sinai, who met Woo shortly after he became an outpatient there in 2008, recalls the despondent look on his face the first time they talked. “I wasn’t sure that he wasn’t going to take his life, to be honest,” she says. 
“He felt like he had nothing to live for.”</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="One photo shows a smiling man in an exoskeleton with his arm around a smiling woman. The other photo shows a metal plaque saying that the Rehabilitation Bionics Program was made possible by the advocacy and dedication of Robert Woo." class="rm-shortcode" data-rm-shortcode-id="24060627efe3d5ed4b5585e963e6cd34" data-rm-shortcode-name="rebelmouse-image" id="7a1d5" loading="lazy" src="https://spectrum.ieee.org/media-library/one-photo-shows-a-smiling-man-in-an-exoskeleton-with-his-arm-around-a-smiling-woman-the-other-photo-shows-a-metal-plaque-saying.jpg?id=65427290&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Angela Riccobono of Mount Sinai Hospital (left) credits Woo with jump-starting the hospital’s bionics program; a plaque in the department of rehabilitation medicine recognizes his role. </small></p><p>Yet Woo harbors no animosity toward his ex-wife. “If we hadn’t separated and gone through the custody hearing, I don’t think I would have gotten this far,” he says. To win partial custody of his children, Woo had to become independent. He had to get off narcotic pain medications, regain strength, and learn how to navigate life in a wheelchair. He had to show that he no longer needed constant nursing, and that he could take care of both himself and his boys.</p><p>There were milestones: learning how to get back into his wheelchair after a fall, learning to drive a car with hand controls, learning to manage his body as it was, not as it had been. The biggest change came when he reconnected with his high school sweetheart, a vivacious woman named Vivian Springer. She was then dividing her time between Toronto and New York City, and she had a son who was almost the same age as Woo’s two boys. 
Springer had worked in a nursing home and knew how to change the sheets without getting him out of bed; she was then working in human resources and knew how to deal with insurance companies. “You wouldn’t believe how much stress it lifted off of me,” Woo says. Over time, they became a family.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Man using a robotic exoskeleton with support, shopping and standing with children." class="rm-shortcode" data-rm-shortcode-id="893ec3e7bbaf953f0fa9b20e639dd9a4" data-rm-shortcode-name="rebelmouse-image" id="54575" loading="lazy" src="https://spectrum.ieee.org/media-library/man-using-a-robotic-exoskeleton-with-support-shopping-and-standing-with-children.jpg?id=65427555&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo’s wife, Vivian, was trained in how to operate the device he used at home. His sons, Tristan (left) and Adrien, grew up watching their dad test exoskeletons. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Left: Lifeward; Right: Robert Woo </small></p><p>Once Woo had that foundation in place, Riccobono witnessed a profound change. “He went from focusing on ‘what I can’t do anymore’ to ‘What’s still possible? What can I do with what I have?’” At Mount Sinai, Woo remembers asking his doctor <a href="https://profiles.mountsinai.org/kristjan-t-ragnarsson" target="_blank">Kristjan Ragnarsson</a>, who was then chairman of the department of rehabilitation medicine, if he would ever walk again. “His response was, ‘Yes, you can walk again,’” Woo remembers, “‘but not the way you used to walk.’”</p><h2>First Steps in an Exoskeleton </h2><p>As soon as he had regained use of his hands, Woo had started googling, looking for anything that could get him back on his feet.
He tried rehab equipment like the <a href="https://www.sralab.org/services/lokomat" target="_blank">Lokomat</a>, which used a harness suspended above a treadmill to enable users to walk. But at the time, it required three physical therapists: one to move each leg and one to control the machine. It was a far cry from the independent strides he dreamed of.</p><p>Several years in, he learned about two companies that had built something radically different: exoskeleton suits for people with spinal cord injuries. These prototypes had motors at the knees and the hips to move the legs, with the user stabilizing their upper body with arm braces. Woo desperately wanted to try one, although the technology was still experimental and far from regulatory approval. So he took the idea to Ragnarsson, asking if Mount Sinai could bring an exoskeleton into its rehab clinic for a test drive. Ragnarsson, who’s now retired, remembers the request well. “He certainly gave us the kick in the behind to get going with the technology,” he says.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Man in robotic exoskeleton walks with canes during rehab demo as clinicians observe" class="rm-shortcode" data-rm-shortcode-id="08a494fb70ca5c5d7c0e5a3bb263b28c" data-rm-shortcode-name="rebelmouse-image" id="16b99" loading="lazy" src="https://spectrum.ieee.org/media-library/man-in-robotic-exoskeleton-walks-with-canes-during-rehab-demo-as-clinicians-observe.jpg?id=65427556&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo tries out an early exoskeleton from Ekso Bionics at Mount Sinai Hospital, where he first began testing the technology. 
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Mario Tama/Getty Images</small></p><p>Ragnarsson had seen decades of failed attempts to get paraplegics upright, including “inflatable garments made of the same material the astronauts used when they went to the moon,” he says. All those devices had proved too tiring for the user; in contrast, the battery-powered exoskeletons promised to do most of the work. And he knew the CEO of <a href="https://eksobionics.com/" target="_blank">Ekso Bionics</a>, a Berkeley, Calif.–based company that had built exoskeletons for the military. In 2011, Ekso <a href="https://spectrum.ieee.org/goodbye-wheelchair-hello-exoskeleton" target="_blank">brought its new clinical prototype to Mount Sinai</a>.</p><p>The day came for Woo’s first walk. “I was excited, and I was also scared, because I hadn’t stood up for almost five years,” he remembers. “Standing up for the first time was like floating, because I couldn’t feel my feet.” In that first Ekso model, Woo didn’t control when he stepped forward; instead, he shifted his weight in preparation, and then a physical therapist used a remote control to trigger the step. Woo walked slowly across the room, using a walker to stabilize his upper body, his steps a symphony of clunks and creaks and whirs. He found it mentally and physically exhausting, but the effort felt like progress.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="996f7d01a8c62b70fe92b38fa003fe59" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/l-QJx8QWCyc?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span><small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo stands using an exoskeleton and embraces his wife, Vivian. 
Woo says that exoskeleton use has both physical and psychological benefits. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Mt. Sinai</small></p><p>Riccobono was there for those first steps, with tears running down her face. “I remembered how he looked the day I first met him, so defeated,” she says. “To see him rise from the chair, to see him rise to a standing position, to see how tall he was, to see him take those first steps—it was beautiful.” Ragnarsson saw clear benefits to the technology. “Any type of walking is good physiologically,” he says. “And it’s a tremendous boost psychologically to stand up and look someone in the eye.” Woo remembers hugging his partner, Springer, and for the first time not worrying about running over her toes with his wheelchair. I first met Woo a few days later, during his third session with the Ekso at Mount Sinai.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Two people stand outside; one uses blue exoskeleton crutches for mobility." class="rm-shortcode" data-rm-shortcode-id="69a52fa10854ff73f463efd70c6fbaac" data-rm-shortcode-name="rebelmouse-image" id="b81ad" loading="lazy" src="https://spectrum.ieee.org/media-library/two-people-stand-outside-one-uses-blue-exoskeleton-crutches-for-mobility.jpg?id=65427570&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Ann Spungen (left), a researcher at a Veterans Affairs hospital, led early clinical trials of exoskeletons. Her research focused on the medical benefits of exoskeleton use. 
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Woo </small></p><p>Later that same year, at a Department of Veterans Affairs (VA) hospital in the Bronx, Woo got to try a prototype of the world’s other leading exoskeleton: the <a href="https://golifeward.com/products/rewalkpersonal-exoskeleton/" target="_blank">ReWalk</a>, from the Israeli company of the same name (since renamed <a href="https://golifeward.com/" target="_blank">Lifeward</a>). VA researchers, led by <a href="https://www.linkedin.com/in/ann-spungen-3971b246/" target="_blank">Ann Spungen</a>, were keen to determine if exoskeleton use had real medical value for veterans with spinal cord injuries. Woo was part of <a href="https://clinicaltrials.gov/study/NCT01454570?lat=40.8673611&lng=-73.9065313&locStr=James%20J.%20Peters%20Department%20of%20Veterans%20Affairs%20Medical%20Center,%20West%20Kingsbridge%20Road,%20The%20Bronx,%20NY&distance=50&term=ReWalk&viewType=Card&rank=1" target="_blank">that clinical trial</a>, for which he had more than 70 walking sessions, and he’s since been in many others. But he remembers the first VA trial with the most gratitude. “Dr. Spungen’s first exoskeleton clinical trial really turned things around for me,” he says.</p><p>Over the course of the trial’s nine intense months, Woo says he saw noticeable improvements to many facets of his health. “By the end of the trial, I eliminated about three-quarters of my medication intake,” he says, including narcotic pain pills and medication for muscle spasms. He grew fitter, with <a href="https://www.sciencedirect.com/science/article/abs/pii/S1094695018300970" target="_blank">less body fat</a>, more muscle mass, and lower cholesterol. His circulation improved, he says, causing scrapes and cuts to heal more quickly, and his <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC7957745/" target="_blank">digestion improved too</a>. 
The results Woo experienced have generally been borne out in research studies at the VA and elsewhere—exoskeletons aren’t just good for the mind, they’re good for the body.</p><h2>Improving Exoskeletons From the Inside </h2><p>During the VA trial, Woo began to think of exoskeletons not as miraculous machines, but as works in progress.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Man wearing robotic exoskeleton and using crutches on a city sidewalk" class="rm-shortcode" data-rm-shortcode-id="c6e269240874c399dd042e63b52fc7f6" data-rm-shortcode-name="rebelmouse-image" id="8c60a" loading="lazy" src="https://spectrum.ieee.org/media-library/man-wearing-robotic-exoskeleton-and-using-crutches-on-a-city-sidewalk.jpg?id=65427579&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Pierre Asselin (right), a biomedical engineer, worked with Robert Woo during clinical trials of exoskeletons. He says Woo was always pushing the limits of the technology. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Woo </small></p><p><a href="https://www.linkedin.com/in/pierre-asselin-195a0b4/" target="_blank">Pierre Asselin</a>, the biomedical engineer coordinating the VA’s study, watched participants respond very differently to the equipment. “These devices are not the equivalent of walking—you’re tired after walking a mile,” he says. He notes that later models of both the Ekso and ReWalk enabled users to initiate each step through software that recognized when they shifted their weight. 
Asselin adds that the cognitive load is “like learning to drive a manual transmission car, where at first you’re really struggling to coordinate the clutch and the brake.” Woo picked it up immediately, he remembers.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Man in a leg exoskeleton reaches into a kitchen cabinet while another observes." class="rm-shortcode" data-rm-shortcode-id="c537ce4f78539951c11063a9cb902729" data-rm-shortcode-name="rebelmouse-image" id="236cd" loading="lazy" src="https://spectrum.ieee.org/media-library/man-in-a-leg-exoskeleton-reaches-into-a-kitchen-cabinet-while-another-observes.jpg?id=65427582&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo uses an exoskeleton to reach items in a kitchen cabinet during a test of the device’s utility for everyday tasks.  </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Eliza Strickland </small></p><p>Woo became an invaluable partner, Asselin says. “When we first started with the devices, there was no training manual. We developed all of that through collaboration with Robert and other participants.” Woo pushed the limits of the technology, Asselin says, whether it was seeing how many steps he could take on one battery charge or simulating a failure mode. “He’d say, ‘What happens if I was to fall? What would be the approach to getting up?’”</p><p><span>Woo approached the ReWalk the way he had approached buildings in his previous life: He looked inside the structure and found the weak points. An early model left some users with leg abrasions where the straps rubbed—a small injury for most people, but a serious risk for someone who can’t feel a wound forming. Woo suggested better padding and stronger abdominal supports to redistribute the load.
He also hated the heavy backpack that carried the battery and computer, so one afternoon he grabbed an old pack, cut off the straps, and rebuilt it into a compact hip-mounted pouch. Then he snapped photos and sent them to the company. The next model arrived with a fanny pack.</span></p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Hand-drawn concept sketch of a modular device labeled “ReWack 6.0” with notes and arrows" class="rm-shortcode" data-rm-shortcode-id="d0e09446b489c6a5f720b68d263450a3" data-rm-shortcode-name="rebelmouse-image" id="76e48" loading="lazy" src="https://spectrum.ieee.org/media-library/hand-drawn-concept-sketch-of-a-modular-device-labeled-u201crewack-6-0-u201d-with-notes-and-arrows.jpg?id=65427594&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Robert Woo sent detailed design sketches as part of his feedback to exoskeleton engineers.  </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Woo </small></p><p>Sometimes his fixes were more ambitious. One Ekso unit that he used at Mount Sinai kept shutting down after 30 minutes. Woo felt the hip motors and found them hot to the touch. “I said, ‘Can I remove these? I’m going to make a really quick fix, okay? Give me a drill and I’ll put a couple of holes in it,’” he recalls telling the therapists, proposing to create a DIY heat sink. He wasn’t allowed to modify the prototype, but a year later the company introduced improved cooling around the hip motors. “There is a Robert Woo design on this device,” one therapist told him.</p><p><a href="https://www.linkedin.com/in/eythorbender/" target="_blank">Eythor Bender</a>, who was then the CEO of Ekso, called Woo to thank him for his feedback and invite him to spend a week at Ekso’s headquarters.
“There was no lack of engineering power in that building,” says Bender. “But sometimes when you work with engineers, they overlook important things.” Bender says Woo brought both design skills and lived experience to his weeklong residency. “He told the engineers, ‘Guys, this has to be something that people actually like to wear.’”</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Patient in exoskeleton uses walker, flanked by doctor in lab coat and man in suit" class="rm-shortcode" data-rm-shortcode-id="0769a2526e44a9360c2a966a9839c4ee" data-rm-shortcode-name="rebelmouse-image" id="2e1fa" loading="lazy" src="https://spectrum.ieee.org/media-library/patient-in-exoskeleton-uses-walker-flanked-by-doctor-in-lab-coat-and-man-in-suit.jpg?id=65427643&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Ekso Bionics CEO Eythor Bender and Mount Sinai physician Kristjan Ragnarsson were both on hand for Woo’s early trials of the Ekso device. Ragnarsson says he saw physical and psychological benefits of exoskeleton use. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Woo </small></p><p>The longer Woo tested, the further ahead he started thinking. With motors only at the hips and knees, every exoskeleton still required crutches. Add powered ankles, he told the Ekso and ReWalk teams, and the suits could balance themselves, freeing the user’s hands. But Woo was ahead of his time. “They said they weren’t going to do that. They weren’t going to change their whole platform,” he remembers. Years later, though, hands-free exoskeletons like those from Wandercraft would emerge built around exactly that principle.</p><h2>When the Exoskeleton Came Home </h2><p>By the mid-2010s, Woo had pushed the technology as far as he could in clinics. 
What he wanted now was to use an exoskeleton at home.</p><p>That milestone came after <a href="https://spectrum.ieee.org/rewalk-robotics-new-exoskeleton-lets-paraplegic-stroll-the-streets-of-nyc" target="_blank">ReWalk’s exoskeleton</a> became the first to win <a href="https://ir.rewalk.com/news-releases/news-release-details/rewalktm-personal-exoskeleton-system-cleared-fda-home-use" target="_blank">FDA approval for home use</a> in 2014. ReWalk engineers still remember Woo’s help on the final tests for that personal-use model. It was the end of May in 2015, recalls <a href="https://www.linkedin.com/in/david-hexner-8699413/" target="_blank">David Hexner</a>, the company’s vice president of research and development. “He said, ‘Guys, this is great. I’m going to buy it.’”</p><p>Woo was the first customer to buy an exoskeleton to bring home, paying US $80,000 out of pocket. His insurance wouldn’t cover the cost, but he was able to make the purchase in part because of a legal settlement after his accident. The home-use model came with a requirement that the user have at least one companion who was fully trained in operating the device. In Woo’s case, that meant that Springer learned to suit him up, realign his balance, and help him if he fell.</p><p>On delivery day, two SUVs drove up to a hotel down the street from Woo’s condo in the Toronto area. The technicians hauled two huge boxes into a hotel room and assembled his personal exoskeleton. They took Woo’s measurements, made adjustments, checked the software. This latest version could be controlled by either weight shifting or tapping commands on a smartwatch, and Woo had the app ready. He tested out everything in the hotel room, signed off, and then the technicians drove his robot legs to his home.</p><p>That was the start of his golden period with the ReWalk—similar to the excitement many people experience with a new piece of exercise equipment. 
“I used it every day for a few hours, and then I started logging how many steps I’d done,” Woo says. “My last count was probably just slightly over a million steps,” he says, with half of those steps taken in his home unit and half in training programs and clinical trials.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Person using a ReWalk exoskeleton with crutches beside stacked ReWalk shipping boxes" class="rm-shortcode" data-rm-shortcode-id="3341315ea904071979a50c6d8ab999dd" data-rm-shortcode-name="rebelmouse-image" id="ddd70" loading="lazy" src="https://spectrum.ieee.org/media-library/person-using-a-rewalk-exoskeleton-with-crutches-beside-stacked-rewalk-shipping-boxes.jpg?id=65434618&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">The ReWalk was the first exoskeleton available for use outside the clinic. Robert Woo’s ReWalk arrived in two large boxes. ReWalk engineers assembled it in a hotel room, and Woo tried it out in the hallway before taking it home.  </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Woo</small></p><p>Tristan, Woo’s eldest son, remembers doing laps with his dad in the condo’s underground parking garage while his dad was training for a 5-kilometer race in New York City. Tristan admits that he had previously been embarrassed about his dad, but training for the race shifted something for him. “I was so used to not wanting to tell people that my dad was in a wheelchair, but then I shared his passion for the training,” he says. “When people would come up to us, I’d tell them about it.”</p><p>The ReWalk could turn ordinary moments into small engineering projects. On weekends, Woo would take his boys to the golf course behind their condo and bring a baseball. He had rigged two holsters to the sides of the suit so he could stash a crutch and stand on three points (two legs and one arm) while he pitched or caught. 
Throw, switch crutches, catch. On the day of his accident, he never thought such a scene would be possible. But with the exoskeleton, it became just another design problem to solve. “It’s a little more work. It’s not perfect,” he says. “But in the end, you still get to do what you want to do—which is play ball with your sons.”</p><p>Tristan, now a college student, says he didn’t realize at the time how hard his dad worked to make those mundane activities possible. “Reflecting on it now,” he says, “he has shaped almost every element of my life, and he definitely is my hero.”</p><p>But even during that golden stretch, the ReWalk had a way of asserting its limits. Every so often it would freeze mid-stride and require a reboot—a small technical hiccup in theory, but a serious problem when there’s a person strapped inside. Once, when he was walking on his own in the parking garage (without his mandated companion), the suit glitched and went into “graceful collapse” mode, lowering him to a seated position on the ground. Woo had to ask security to bring his wheelchair and a dolly.</p><p>He had imagined the exoskeleton would be most useful in the kitchen. Woo loves to cook, and he had pictured himself standing at the stove, looking down into pots, and moving easily between counter and sink. The reality, he found out, was more complicated. “It’s actually very time-consuming and troublesome” to cook in an exoskeleton, he says.</p><p>Preparing a meal meant first rolling through the kitchen in his wheelchair to gather every ingredient and utensil, then transferring himself into the ReWalk and moving himself into position at the counter, stopping at just the right moment. “That’s when I fell once,” Woo says. “I collided with the counter and then lost my balance and fell backward.” If all went well, he’d lean either on one crutch or the counter to keep his balance while he worked. 
But if he’d forgotten to grab the vinegar from the cabinet, he’d have to go into walk mode, crutch over to it, and figure out how to carry the bottle back to his workstation.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Powered exoskeleton suit and crutches positioned in a modern clinical room" class="rm-shortcode" data-rm-shortcode-id="a984e71926de8dd39f35b478e1bbe279" data-rm-shortcode-name="rebelmouse-image" id="6a40f" loading="lazy" src="https://spectrum.ieee.org/media-library/powered-exoskeleton-suit-and-crutches-positioned-in-a-modern-clinical-room.jpg?id=65434518&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">Sitting unused in Robert Woo’s home, his ReWalk exoskeleton reflects both the promise and the limits of early devices.  </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Woo</small></p><p>Gradually, he stopped trying. The suit, which he’d once worn every day, spent more time sitting idle in the hallway; like so many abandoned treadmills and stationary bikes, it gathered dust. Part of the reason was the exoskeleton’s practical limitations, but part of it was a shocking development: In 2024, Vivian was diagnosed with an aggressive form of breast cancer. She died in November of that year, at the age of 54.</p><p>Woo was scheduled to begin a new round of clinical trials for the Wandercraft home-use exoskeleton that month. In the aftermath of Vivian’s death, he postponed his sessions and questioned whether he would ever go back. “At the time, I thought, ‘What’s the point?’” he remembers.</p><p>He did go back, though. “He just rolled up, right into my office,” says Mount Sinai’s Riccobono. “He still had Vivian’s box of ashes on his lap. 
That’s how fresh it was.” Woo brought the box into a meeting of spinal cord injury patients and shared the story of losing the love of his life. And he told them that he heard his wife’s voice in his head every day, telling him to get back to work. Once again, he was figuring out how to move forward with what he had.</p><h2>How Close Are We to Everyday Exoskeletons? </h2><p>In the Wandercraft showroom last May, Woo steered toward the door to the street, technicians flanking him like spotters. The ramp down to the sidewalk dropped barely an inch, but everyone tensed. He shifted his weight and took a step forward. The suit halted automatically. He tried again—step, stop; step, stop—as the suit kept detecting the slight decline and a safety feature kicked in. The Wandercraft isn’t yet rated for slopes of more than 2 percent, and even the gentle pitch of Park Avenue was enough to trigger its safeguards. When he finally reached the sidewalk, Woo broke into a grin. A man in the back seat of a stopped Uber leaned out his window, filming.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Knee brace with straps and a leg showing a fresh, red incision scar." class="rm-shortcode" data-rm-shortcode-id="c7d7199f6643de021a7f81d6c256876e" data-rm-shortcode-name="rebelmouse-image" id="2235b" loading="lazy" src="https://spectrum.ieee.org/media-library/knee-brace-with-straps-and-a-leg-showing-a-fresh-red-incision-scar.jpg?id=65427649&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">During testing of the Wandercraft exoskeleton, straps caused an abrasion on Robert Woo’s leg, which he documented as part of his feedback to the company.   
</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Robert Woo </small></p><p>Woo had recently completed seven sessions with the Wandercraft at the VA hospital and had been impressed overall. But at the showroom, he rolled up his pants leg to reveal an abrasion on his shin, the result of a strap that had worn away a patch of skin during a long walking session. He would later send Wandercraft a nine-page assessment with photos and a technology wish list, asking the company to work on things like padding, variable walking speeds, and deeper squats.</p><p>Wandercraft’s engineers relish that kind of user feedback, says CEO <a href="https://www.linkedin.com/in/matthieu-masselin-64585537/" target="_blank">Matthieu Masselin</a>. Exoskeletons are a far more difficult engineering problem than humanoid robots, he explains. “You basically have two systems of equal importance. You know about the robot—it’s fully quantified and measured. But you don’t know what the person is doing, and how the person is moving within the device.”</p><p>Since Woo began testing exoskeletons 15 years ago, both the technology and the market have made strides. ReWalk and Ekso won FDA clearance for clinical use in the 2010s, and both now sell home-use versions. The companies have sold thousands of exoskeletons to rehab clinics and personal users, and they see room for growth; in the United States alone, about <a href="https://msktc.org/sites/default/files/Facts-and-Figures-2025-Eng-508.pdf" target="_blank">300,000 people live with spinal cord injuries</a>, and millions more have mobility impairments from stroke, multiple sclerosis, or other conditions. The VA began supplying devices to eligible veterans in 2015, and Medicare recently <a href="https://golifeward.com/blog/medicare-reimbursement-established-for-medically-eligible-beneficiaries/" target="_blank">established a system for reimbursement</a>, a move that private insurers are beginning to follow. 
What was once experimental is slowly becoming established.</p><p>Researchers who test the devices say the technology still has significant limits. Pal, of the New Jersey Institute of Technology, mentions battery life, dexterity, and reliability as ongoing challenges. But, he says with a laugh, “Our bodies have evolved over many millions of years—these machines will need a bit more time.” Pal hopes the companies will keep pushing the technological frontier. “My lifetime goal is to see the day when someone like Robert Woo can wake up in the morning, put this device on, and then live an ordinary life.”</p><p>For Woo, the real question about the self-balancing Wandercraft was: Could he cook with it? In the VA hospital’s home mockup, he tried it out in the kitchen, stepping sideways to retrieve items from cabinets and squatting to grab something from the fridge’s lower shelf. For the first time in years, he could work at a counter without leaning on crutches. “The self-standing exoskeleton changes everything,” he says. He imagines a user placing a Thanksgiving turkey on a tray attached to the suit and walking it into the dining room.</p><p>Back in the showroom, Woo finishes the demo and brings the suit to a seated position before transferring back to his wheelchair. After so many years of testing prototypes, he’s now realistic about the technology’s timeline. A truly all-day exoskeleton—the kind you live in, the kind that replaces a wheelchair—may be a decade or more away. “It may not be for me,” he says. But that’s no longer the point. He’s thinking about young people who are newly injured, who are lying in hospital beds and trying to imagine how their lives can continue. 
“This will give them hope.” <span class="ieee-end-mark"></span></p>]]></description><pubDate>Wed, 01 Apr 2026 13:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/exoskeleton-user-experience</guid><category>Bionics</category><category>Paralysis</category><category>Exoskeleton</category><category>Spinal-cord-injury</category><category>Assistive-technology</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/a-man-wearing-a-full-body-robotic-exoskeleton-standing-on-a-city-sidewalk.png?id=65426945&amp;width=980"></media:content></item><item><title>The ’80s Submersible That Transformed Underwater Exploration</title><link>https://spectrum.ieee.org/deep-sea-submersible</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/spherical-deep-sea-submersible-with-robotic-arms-exploring-underwater.jpg?id=65416328&width=1200&height=800&coordinates=61%2C0%2C62%2C0"/><br/><br/><p>As a kid, I loved the 1980s aquatic adventure show <a href="https://www.imdb.com/title/tt0086692/" rel="noopener noreferrer" target="_blank"><em><em>Danger Bay</em></em></a>. True to the TV show’s name, danger was always lurking at the Vancouver Aquarium, where the show was set. In one memorable episode, young Jonah and a friend get trapped in a sabotaged mini-submarine, and Jonah’s dad, a marine-mammal veterinarian, comes to the rescue in a bubble-shaped underwater vehicle. Good stuff! Only recently—as in when I started working on this column—did I learn that the rescue vehicle was not a stage prop but rather a real-world research submersible named <em><em>Deep Rover</em></em>.</p><h2>What Was <em><em>Deep Rover</em></em> and What Did It Do?</h2><p> Built in 1984 and launched the following year, <a href="https://ingenium.ca/publications/en/2025/09/deep-dive-with-deep-rover-a-canadian-made-acrylic-submersible/" rel="noopener noreferrer" target="_blank"><em><em>Deep Rover</em></em></a> was a departure from standard underwater vehicles, which typically required divers to lie in a prone position and look through tiny portholes while tethered to a support ship.</p><p><em><em>Deep Rover </em></em>was designed to satisfy human curiosity about the underwater world. As the rover moved freely through the water down to depths of 1,000 meters, the operator sat up in relative comfort in the cab, inside a clear 13-centimeter-thick acrylic bubble with panoramic views—an inverted fishbowl, with the human immersed in breathable air while the sea creatures looked in. 
Used for scientific research and deepwater exploration, it set a number of dive records along the way.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Photo of a man and a woman in a wood-paneled room with a scale model of an underwater vehicle in front of them." class="rm-shortcode" data-rm-shortcode-id="d011f033c593fe40f9630c519be31ea2" data-rm-shortcode-name="rebelmouse-image" id="573d9" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-a-man-and-a-woman-in-a-wood-paneled-room-with-a-scale-model-of-an-underwater-vehicle-in-front-of-them.jpg?id=65416404&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Submarine designer Graham Hawkes [left] and marine biologist Sylvia Earle [right] came up with the idea for <i>Deep Rover</i>.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Alain Le Garsmeur/Alamy </small></p><p> The team behind <em><em>Deep Rover</em></em> included U.S. marine biologist <a href="https://www.britannica.com/biography/Sylvia-Earle" target="_blank">Sylvia Earle</a> and British marine engineer and submarine designer <a href="https://www.linkedin.com/in/graham-hawkes-8bb75558" target="_blank">Graham Hawkes</a>. Earle and Hawkes’s collaboration had begun in May 1980, when Earle complained to Hawkes about the “stupid” arms on <a href="https://www.divingheritage.com/jim.htm" target="_blank">Jim, an atmospheric diving suit</a>; she didn’t realize she was complaining to one of Jim’s designers. Hawkes explained the difficulty of designing flexible joints that could withstand dueling pressures of 101 kilopascals on the inside—that is, the normal atmospheric pressure at sea level—and up to about 4,100 kPa on the outside. But he listened carefully to Earle’s wish list for a useful manipulator. 
Several months later, he came back with a design for a superbly dexterous arm that could hold a pencil and write normal-size letters.</p><p>Earle and Hawkes next turned to designing a one-person bubble sub, which they considered so practical that it would be an easy sell. But after failing to attract funding, they decided to build it themselves. In the summer of 1981, they pooled their resources and cofounded Deep Ocean Technology, setting up shop in Earle’s garage in Oakland, Calif.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Photo of a man sitting in an underwater vehicle with the words “Newtsub DeepWorker 2000” across the front and the logos of NASA and the National Geographic Society." class="rm-shortcode" data-rm-shortcode-id="e350d34e1c8a7e8e47bf32da01655b60" data-rm-shortcode-name="rebelmouse-image" id="3e612" loading="lazy" src="https://spectrum.ieee.org/media-library/photo-of-a-man-sitting-in-an-underwater-vehicle-with-the-words-u201cnewtsub-deepworker-2000-u201d-across-the-front-and-the-logo.jpg?id=65416416&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Phil Nuytten, a Canadian designer of submersibles and dive systems, engineered <i>Deep Rover</i>.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Stuart Westmorland/RGB Ventures/Alamy</small></p><p>They still found that customers weren’t interested in their crewed submersible, though, so they turned to unmanned systems. Their first contract was for a remotely operated vehicle (ROV) for use in oil-rig inspection, maintenance, and repair. Other customers followed, and they ended up building 10 of these ROVs.
In 1983, they returned to their original idea and contracted with the Canadian inventor and entrepreneur <a href="https://nuytco.com/history/phil-nuytten/" target="_blank">Phil Nuytten</a> to engineer <em><em>Deep Rover</em></em>.</p><p>Nuytten didn’t have to be convinced of the value of the submersible. He had grown up on the water and shared their dream. As a teenager, he opened Vancouver’s first dive shop. He then worked as a commercial diver. He founded the ocean- and research-tech companies Can-Dive Services (in 1965) and Nuytco Research (in 1982), and he developed advanced submersibles as well as diving systems. These included the <a href="https://nuytco.com/products/newtsuit/" target="_blank">Newtsuit</a>, an aluminum atmospheric diving suit for use on drilling rigs and salvage operations.</p><p class="ieee-inbody-related">RELATED: <a href="https://spectrum.ieee.org/virgin-oceanics-voyage-to-the-bottom-of-the-sea" target="_self">Virgin Oceanic’s Voyage to the Bottom of the Sea</a></p><p><em><em>Deep Rover</em></em>’s first assignment was to boost offshore oil exploration and drilling in eastern Canada. Funding came from the provincial government of Newfoundland and Labrador and the oil companies Petro-Canada and Husky Oil. But the collapse of oil prices in the mid-1980s made it uneconomical to operate the submersible. So the rover’s mission broadened to scientific research.</p><h2><em><em>Deep Rover</em></em>’s Technical Specs</h2><p>The pilot could operate <em><em>Deep Rover</em></em> safely for 4 to 6 hours at a depth of 1,000 meters and speeds of up to 1.5 knots (46 meters per minute). The submersible could be tethered to a support ship or move freely on its own. Two deep-cycle, lead-acid battery pods weighing about 170 kilograms apiece provided power. 
It had a VHF radio and two frequencies of through-water communications, plus tracking beacons.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Park ranger operates aircraft cockpit controls surrounded by cameras and instruments" class="rm-shortcode" data-rm-shortcode-id="a16dd08e15950afd6153e7a309cedfb0" data-rm-shortcode-name="rebelmouse-image" id="df596" loading="lazy" src="https://spectrum.ieee.org/media-library/park-ranger-operates-aircraft-cockpit-controls-surrounded-by-cameras-and-instruments.jpg?id=65416434&width=980"/> </p><p class="shortcode-media shortcode-media-rebelmouse-image rm-float-left rm-resized-container rm-resized-container-25" data-rm-resized-container="25%" style="float: left;"> <img alt="Two photos, one showing a smiling man in the cab of a heavily instrumented vehicle, the other showing the underwater view out the front of the vehicle. " class="rm-shortcode" data-rm-shortcode-id="bb0cdb46d97cd3b5b1b634058d02b321" data-rm-shortcode-name="rebelmouse-image" id="2555e" loading="lazy" src="https://spectrum.ieee.org/media-library/two-photos-one-showing-a-smiling-man-in-the-cab-of-a-heavily-instrumented-vehicle-the-other-showing-the-underwater-view-out-th.jpg?id=65416428&width=980"/> <small class="image-media media-caption" placeholder="Add Photo Caption...">From 1987 to 1989, Deep Rover did a series of dives in Oregon’s Crater Lake, the deepest lake in the United States. During one dive, National Park Service biologist Mark Buktenica [top] collected rock samples.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">NPS</small></p><p>The rover’s four thrusters—two horizontal fixed aft thrusters and two rotating wing thrusters—could be activated in any combination through microswitches built into the armrest. 
The pilot navigated using a gyro compass, sonar, and depth gauges (both digital and analog).</p><p>Much to Earle’s delight, <em><em>Deep Rover</em></em> had two excellent manipulators, each with four degrees of freedom, thus solving the problem that had started her down this path of invention. The pilot controlled the manipulators with a joystick at the end of each armrest. Sensory feedback systems helped the pilot “feel” the force, motion, and touch. The two arms had wraparound jaws and could lift about 90 kg.</p><p>If something went wrong, <em><em>Deep Rover</em></em> carried five days’ worth of life support stores and had a variety of redundant safety features: oxygen and carbon dioxide monitoring equipment; a halon (breathable) fire extinguisher; a full-face BIBS (built-in breathing system) that tapped into the starboard air bank; and a ground fault-detection system.</p><p>If needed, the rover could surface quickly by jettisoning equipment, including the battery pods and a 90-kg drop weight in the forward bay. In dire circumstances, the pressure hull (the acrylic bubble, that is) could separate from the frame, taking with it only its oxygen tanks, strobe, through-water communications, and wing thrusters.</p><h2><em><em>Deep Rover</em></em>’s Achievements</h2><p>From 1984 to 1992, <em><em>Deep Rover</em></em> conducted about 280 dives. It inspected two of the tunnels near Niagara Falls that divert water to the Sir Adam Beck II hydroelectric plant. In California’s Monterey Bay, the rover let researchers film previously unknown deep-sea marine life, which helped establish the Monterey Bay Aquarium Research Institute.
At Crater Lake National Park, in Oregon, <em><em>Deep Rover</em></em> proved the existence of geothermal vents and bacteria mats, leading to the protection of the site from extractive drilling.</p><p><em><em>Deep Rover</em></em> was featured in a <a href="https://www.barbeefilm.com/discovery-ii.html" rel="noopener noreferrer" target="_blank">short film</a> shown at Vancouver’s Expo ’86, the first of several TV and movie appearances. There was <em><em>Danger Bay</em></em>. Director James Cameron used an early prototype of the submersible in his 1989 film <a href="https://www.imdb.com/title/tt0096754/" rel="noopener noreferrer" target="_blank"><em><em>The Abyss</em></em></a>. <em><em>Deep Rover </em></em>also made an appearance in Cameron’s 2005 documentary <a href="https://www.imdb.com/title/tt0417415/" rel="noopener noreferrer" target="_blank"><em><em>Aliens of the Deep</em></em></a>.</p><p>In 1992, <em><em>Deep Rover</em></em> came to the end of its working life. It now resides at <a href="https://ingenium.ca/en/" rel="noopener noreferrer" target="_blank">Ingenium</a>, Canada’s Museums of Science and Innovation, in Ottawa. For a time, Deep Ocean Engineering continued to develop later generations of the submersible. Eventually, though, uncrewed remotely operated and autonomous underwater vehicles became the norm for deep-sea missions, replacing human pilots with sensors and equipment. New ROVs can dive significantly deeper than human-piloted ones, and new cameras are so good that it feels like you’re there…almost. 
And yet, humans still long to have the personal experience of exploring the depths of the oceans.</p><p><em><em>Part of a </em></em><a href="https://spectrum.ieee.org/collections/past-forward/" target="_self"><em><em>continuing series</em></em></a><em> </em><em><em>looking at historical artifacts that embrace the boundless potential of technology.</em></em></p><p><em><em>An abridged version of this article appears in the April 2026 print issue as “</em></em><em><em>All Alone in the Abyss</em></em><em><em>.”</em></em></p><h3>References</h3><br/><p>My friends at <a href="https://ingenium.ca/en/" target="_blank">Ingenium</a>, Canada’s Museums of Science and Innovation, helpfully provided me with background material on why they decided to acquire <em>Deep Rover</em>. They also published a great <a href="https://ingenium.ca/publications/en/2025/09/deep-dive-with-deep-rover-a-canadian-made-acrylic-submersible/" target="_blank">blog post</a> about the rover.</p><p><a href="https://www.linkedin.com/in/dirk-rosen-b551204/" target="_blank">Dirk Rosen</a>, executive vice president of engineering at DEEP, published specifications for <em>Deep Rover</em> in his 1986 IEEE paper “<a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1160330" rel="noopener noreferrer" target="_blank">Design and Application of the Deep Rover Submersible</a>.”</p><p>Sylvia Earle, known affectionately as “Her Deepness,” has written extensively about the ocean depths. I found her book<em> Sea Change: A Message of the Oceans</em> (G.P. 
Putnam’s Sons, 1995) to be especially enjoyable.</p>]]></description><pubDate>Tue, 31 Mar 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/deep-sea-submersible</guid><category>Ocean-engineering</category><category>Submersibles</category><category>Underwater-vehicles</category><category>Canada</category><category>Past-forward</category><dc:creator>Allison Marsh</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/spherical-deep-sea-submersible-with-robotic-arms-exploring-underwater.jpg?id=65416328&amp;width=980"></media:content></item><item><title>Invences Empowers Small Businesses With Smart Telecom Networks</title><link>https://spectrum.ieee.org/invences-startup-telecom-networks</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/three-men-seated-on-stage-underneath-a-large-presentation-screen-one-of-the-men-is-holding-a-microphone-while-speaking-to-the-a.jpg?id=65416492&width=1200&height=800&coordinates=0%2C83%2C0%2C84"/><br/><br/><p>To stay competitive, many small businesses need advanced wireless communication networks, not only to communicate but also to leverage technologies such as artificial intelligence, the Internet of Things, and robotics. Often, however, the businesses lack the technical expertise needed to install, configure, and maintain the systems.</p><p><a href="https://www.linkedin.com/in/bhaskara-rallabandi-40b20b36/" rel="noopener noreferrer" target="_blank">Bhaskara Rallabandi</a>, who spent more than two decades working for major telecom companies, decided to use his expertise to help small businesses. Rallabandi, an IEEE senior member, is an expert certified by the <a href="https://www.incose.org/" rel="noopener noreferrer" target="_blank">International Council on Systems Engineering</a>.</p><h3>Invences</h3><br/><p><strong>Cofounder</strong></p><p>Bhaskara Rallabandi</p>
<p><strong>Founded</strong></p><p>2023</p><p><strong>Headquarters</strong></p><p>Frisco, Texas</p><p><strong>Employees</strong></p><p>100</p><p>In 2023 he helped found <a href="https://invences.com/" rel="noopener noreferrer" target="_blank">Invences</a>, a telecommunications automation company headquartered in Frisco, Texas.</p><p>Invences’s services include designing, building, and installing <a href="https://spectrum.ieee.org/ai-data-centers-hts-superconductors" target="_self">data centers</a>, as well as cost-effective and secure wireless, private, <a href="https://spectrum.ieee.org/internet-of-things-5g-mit" target="_self">IoT</a>, and virtual communications networks.</p><p>The company has set up systems for farms, factories, and universities in rural and urban areas including <a href="https://spectrum.ieee.org/broadband-internet-in-nigeria" target="_self">underserved communities</a>. Its mission, Rallabandi says, is to “build autonomous, ethical, and sustainable networks that connect communities intelligently.”</p><p>For his work, he received the <a href="https://ieeeusa.org/2025-ieee-usa-awards-honor-engineering-leaders/" rel="noopener noreferrer" target="_blank">IEEE-USA Entrepreneur Achievement Award for Leadership in Entrepreneurial Spirit</a> last year, which cited his “entrepreneurial leadership in founding and scaling a U.S.-based technology company, advancing innovation in 5G/6G and Open RAN [radio access network], shaping global standards, and inspiring future leaders through mentorship and community impact.”</p><h2>Building a telecommunications career</h2><p>He began his telecommunications career in 2009 as a manager and principal network engineer at <a href="https://www.verizon.com/" rel="noopener noreferrer" target="_blank">Verizon</a>’s <a href="https://www.verizon.com/about/our-company/innovation-labs" rel="noopener noreferrer" target="_blank">Innovation Labs</a> in Waltham, Mass.
He and his team ran some of the earliest Long-Term Evolution and Evolved Packet Core performance trials. (LTE is the 4G wireless broadband standard for mobile devices. EPC is the IP-based, high-performance core network architecture for 4G LTE networks.)</p><p>That work at Innovation Labs, he says, was key to the development of the first 4G systems. It set the stage for scalable, interoperable broadband architectures that underpin today’s 5G and 6G designs.</p><p>“We built the first bridge between legacy and cloud-native networks,” he says.</p><p>He left in 2011 to join <a href="https://about.att.com/sites/labs" rel="noopener noreferrer" target="_blank">AT&T Labs</a> in Redmond, Wash. As senior manager and principal solutions architect, he oversaw the design, integration, and testing of the company’s next-generation wireless systems. He also led projects that redefined automation of networks and set up cloud computing systems including <a href="https://www.firstnet.com/" rel="noopener noreferrer" target="_blank">FirstNet</a>, the nationwide broadband network for first responders, and VoLTE, the <a href="https://www.rcrwireless.com/20151123/carriers/att-volte-video-calling-rcs-messaging-launched-with-limited-support-tag2" rel="noopener noreferrer" target="_blank">voice-over-LTE service that enabled video calling</a>.</p><p>In 2018 Rallabandi was hired as a principal and a senior manager of engineering at <a href="https://www.samsung.com/us/business//networking/" target="_blank">Samsung Networks Division’s Technology Solutions Division</a>, in Plano, Texas.<span> He led the development of 5G virtualization and Open RAN initiatives, which enable more flexible, scalable, and efficient large network deployments and interoperability among vendors.</span></p><h2>Designing networks for small businesses</h2><p>Feeling that he wasn’t reaching his full potential in the corporate world, and to help small businesses, he opted to start his own venture in 2023 with his wife,
<a href="https://www.linkedin.com/in/lakshmi-rallabandi-04a17977/" target="_blank">Lakshmi Rallabandi</a>, a computer science engineer. She is Invences’s CEO, and he is its founding principal and chief technology advisor.</p><p>Invences, which is self-funded and employs about 100 people, has more than 50 customers from around the world.</p><p>“I wanted to do something more interesting where I could use the knowledge I gained working for these big companies to fill the gaps they overlooked in terms of automation” for small businesses, he says. “I have a team of people who, combined, have 200 years of technology experience.”</p><p>The startup builds networks that simplify its clients’ operations and reduce their costs, he says.</p><p>Instead of duplicating how major telecom carriers build networks for dense urban areas, he says, his designs reimagine the network architecture to lower its complexity, costs, and operational overhead.</p><p class="pull-quote">“Connectivity should not be a luxury. Rural communities deserve an infrastructure that fits their needs.”</p><p>The systems integrate new technologies such as Open RAN, virtualized RAN, digital twins, telemetry, and advanced analytics. Some networks also incorporate agentic AI, an autonomous system that runs independently of humans and uses AI agents that plan and act across the network. Digital twins evaluate the agent’s decisions before releasing them.</p><p>“Autonomy is not about removing humans from the loop,” Rallabandi says. 
“It is about giving systems the ability to manage complexity so humans can focus on intent and outcomes.”</p><p>Rallabandi also has worked on AI-driven telecom observability technologies designed to allow networks to detect anomalies and optimize performance automatically.</p><p>He has developed a virtual O-RAN innovation lab, where clients can test the interoperability of their 5G systems, try out their enhancements, run trials of future functions, and experiment with updates.</p><p>Invences partnered with <a href="https://trilogynet.com/" target="_blank">Trilogy Networks</a> to build the <a href="https://trilogynet.com/farmgrid" rel="noopener noreferrer" target="_blank">FarmGrid platform</a> for farms in Fargo, N.D., and Yuma, Ariz. FarmGrid used private 5G networks, edge-computing AI, and digital twins to make the operations more efficient.</p><p>“The project connects farms with sensors, analytics platforms, and autonomous equipment to enable precision agriculture, water optimization, and real-time decision-making,” Rallabandi says.</p><p class="shortcode-media shortcode-media-youtube"> <span class="rm-shortcode" data-rm-shortcode-id="0cfc80cc609775b5ff498c9749ec208b" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/TrNkW-Gnw9Y?rel=0&start=47" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span><small class="image-media media-caption" placeholder="Add Photo Caption...">IEEE Senior Member Bhaskara Rallabandi talks about partnering with Trilogy Networks to build the FarmGrid platform for farms in Fargo, N.D., and Yuma, Ariz.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">TECKNEXUS</small></p><h2>Paying it forward through IEEE programs</h2><p>Rallabandi says he believes staying involved with IEEE is important to his career development and a way to give back to the 
profession. He is a frequent invited <a href="https://events.vtools.ieee.org/m/495517" rel="noopener noreferrer" target="_blank">speaker</a> at IEEE conferences.</p><p>He is active with <a href="https://futurenetworks.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Future Networks</a> and its <a href="https://ctu.ieee.org/" rel="noopener noreferrer" target="_blank">Connecting the Unconnected</a> (CTU) initiative. Members of the Future Networks technical community work to develop, standardize, and deploy 5G and 6G networks as well as successive generations.</p><p>CTU aims to bridge the digital divide by bringing Internet service to underserved communities. During its <a href="https://ctu.ieee.org/challenge/2025-ctu-challenge/" rel="noopener noreferrer" target="_blank">annual challenge</a>, Rallabandi works with the winning students, researchers, and innovators to help them turn their concepts into affordable options.</p><p>“CTU represents the best of IEEE,” he says. “It is about taking innovation out of conferences and into communities that need it the most.</p><p>“Connectivity should not be a luxury. Rural communities deserve an infrastructure that fits their needs.”</p><p>He participates in the recently launched <a href="https://fnem.futurenetworks.ieee.org/" rel="noopener noreferrer" target="_blank">IEEE Future Networks Empowerment Through Mentorship initiative</a>, which helps innovators, entrepreneurs, and startups expand their companies by educating them about finance, marketing, and related concepts.</p><p>“IEEE gives me both a voice and a responsibility,” Rallabandi says.
“We’re not just developing technology; we are shaping how humanity connects.”</p>]]></description><pubDate>Mon, 30 Mar 2026 18:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/invences-startup-telecom-networks</guid><category>Ieee-member-news</category><category>Startups</category><category>Invences</category><category>Telecommunications</category><category>Ieee-future-network</category><category>Careers</category><category>Type-ti</category><dc:creator>Kathy Pretz</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/three-men-seated-on-stage-underneath-a-large-presentation-screen-one-of-the-men-is-holding-a-microphone-while-speaking-to-the-a.jpg?id=65416492&amp;width=980"></media:content></item><item><title>Facial Recognition Is Spreading Everywhere</title><link>https://spectrum.ieee.org/facial-recognition-gone-wrong</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/illustration-34-orange-women-icons-1-blue-man-icon-labels-for-skin-tone-and-gender-comparisons.jpg?id=65407585&width=1200&height=800&coordinates=0%2C12%2C0%2C13"/><br/><br/><p>Facial recognition technology (FRT) dates back 60 years. Just over a decade ago, deep-learning methods tipped the technology into more useful—<a href="https://spectrum.ieee.org/china-facial-recognition" target="_blank">and menacing</a>—territory. Now, retailers, your neighbors, and law enforcement are all storing your face and building up a fragmentary photo album of your life.</p><p>Yet the story those photos can tell inevitably has errors. FRT makers, like those of any diagnostic technology, must balance two types of errors: false positives and false negatives. There are three possible outcomes.</p><div class="ieee-sidebar-medium"><h3>Three Possible Outcomes</h3><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="White figures and an orange hooded figure, focusing on the hooded figure in a split design." class="rm-shortcode" data-rm-shortcode-id="8a762ebf2761a791f12500ed10596cc3" data-rm-shortcode-name="rebelmouse-image" id="f4d64" loading="lazy" src="https://spectrum.ieee.org/media-library/white-figures-and-an-orange-hooded-figure-focusing-on-the-hooded-figure-in-a-split-design.png?id=65407894&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">a) identifies the suspect, since the two images are of the same person, according to the software. Success!</small></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Abstract figures: orange hoodie enlarged, white, yellow, and orange on left, black background." 
class="rm-shortcode" data-rm-shortcode-id="3d130b8e4c73ee49898645524cecd1f6" data-rm-shortcode-name="rebelmouse-image" id="30881" loading="lazy" src="https://spectrum.ieee.org/media-library/abstract-figures-orange-hoodie-enlarged-white-yellow-and-orange-on-left-black-background.png?id=65407867&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">b) matches another person in the footage with the suspect’s probe image. A false positive, coupled with sloppy verification, could put the wrong person behind bars and let the real criminal escape justice.</small></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Three white icons and one orange hoodie icon on left, large orange hoodie icon on right." class="rm-shortcode" data-rm-shortcode-id="4cdaa23680c5144a5c284fcd8cb6f3df" data-rm-shortcode-name="rebelmouse-image" id="fbc8f" loading="lazy" src="https://spectrum.ieee.org/media-library/three-white-icons-and-one-orange-hoodie-icon-on-left-large-orange-hoodie-icon-on-right.png?id=65407858&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">c) fails to find a match at all. The suspect may truly be evading the cameras, but if the cameras captured only low-light or bad-angle images, the result is a false negative. This type of error might let a suspect off and raise the cost of the manhunt.</small></p></div><p>In best-case scenarios—such as comparing someone’s passport photo to a photo taken by a border agent—false-negative rates are <a href="https://face.nist.gov/frte/reportcards/11/clearviewai_003.html" target="_blank">around two in 1,000 and false positives are less than one in 1 million</a>.</p><p>In the rare event you’re one of those false negatives, a border agent might ask you to show your passport and take a second look at your face. But as people ask more of the technology, more ambitious applications could lead to more catastrophic errors.
Let’s say that police are searching for a suspect, and they’re comparing an image taken by a security camera with a previous “mug shot” of the suspect.</p><p>Training-data composition, differences in how sensors detect faces, and intrinsic differences between groups, such as age, all affect an algorithm’s performance. The <a href="https://assets.publishing.service.gov.uk/media/693002a4cdec734f4dff4149/1a_Cognitec_NPL_Equitability_Report_October_25.pdf" target="_blank">United Kingdom estimated</a> that its FRT exposed some groups, such as women and darker-skinned people, to misidentification risks up to two orders of magnitude greater than those faced by others.</p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Five faces arranged left to right, from easy to hard to recognize." class="rm-shortcode" data-rm-shortcode-id="ce19d3eb3745de15489274ebe5083f06" data-rm-shortcode-name="rebelmouse-image" id="3ab1e" loading="lazy" src="https://spectrum.ieee.org/media-library/five-faces-arranged-left-to-right-from-easy-to-hard-to-recognize.png?id=65407777&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">Less clear photographs are harder for FRT to process.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">iStock</small></p><p>What happens with photos of people who aren’t cooperating, or vendors that train algorithms on biased datasets, or field agents who demand a swift match from a huge dataset? Here, things get murky.</p><div class="ieee-sidebar-medium"><h3>Facial Recognition Gone Wrong</h3><p><strong>THE NEGATIVES OF FALSE POSITIVES</strong></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Detroit Police SUV with American flag decal on side under bright sunlight."
class="rm-shortcode" data-rm-shortcode-id="1a424f342f44dff48e8b6b05c79f5032" data-rm-shortcode-name="rebelmouse-image" id="c102c" loading="lazy" src="https://spectrum.ieee.org/media-library/detroit-police-suv-with-american-flag-decal-on-side-under-bright-sunlight.png?id=65407650&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">2020: <a href="https://quadrangle.michigan.law.umich.edu/issues/winter-2024-2025/flawed-facial-recognition-technology-leads-wrongful-arrest-and-historic" target="_blank">Robert Williams’s wrongful arrest</a> led to his detention. The ensuing settlement requires Detroit police to enact policies that recognize FRT’s limits. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">iStock</small></p><p><strong>ALGORITHMIC BIAS</strong></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt='Red sign reads "Security cameras in use" with camera graphic.' class="rm-shortcode" data-rm-shortcode-id="014ac05f2fe587ca01643c64c750e331" data-rm-shortcode-name="rebelmouse-image" id="f4f1f" loading="lazy" src="https://spectrum.ieee.org/media-library/red-sign-reads-security-cameras-in-use-with-camera-graphic.png?id=65407620&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">2023: <a href="https://incidentdatabase.ai/cite/619/" target="_blank">Court bans Rite Aid from using facial recognition for five years</a> over its use of a racially biased algorithm. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">iStock</small></p><p><strong>TOO FAST, TOO FURIOUS?</strong></p><p class="shortcode-media shortcode-media-rebelmouse-image"> <img alt="Back of ICE officer in tactical gear facing a house."
class="rm-shortcode" data-rm-shortcode-id="0004b023a075c21698cdf88cfd0b4106" data-rm-shortcode-name="rebelmouse-image" id="889f9" loading="lazy" src="https://spectrum.ieee.org/media-library/back-of-ice-officer-in-tactical-gear-facing-a-house.png?id=65407619&width=980"/><small class="image-media media-caption" placeholder="Add Photo Caption...">2026: U.S. immigration agents <a href="https://www.404media.co/ices-facial-recognition-app-misidentified-a-woman-twice/" target="_blank">misidentify a woman they’d detained as two different women</a>. </small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">VICTOR J. BLUE/BLOOMBERG/GETTY IMAGES </small></p></div><p><span>Consider a busy trade fair using FRT to check attendees against a database, or gallery, of images of its 10,000 registrants. Even at 99.9 percent accuracy, you’ll get about a dozen false positives or negatives, which may be worth the trade-off to the fair organizers. But if police start using something like that across a city of 1 million people, the number of potential victims of mistaken identity rises, as do the stakes.</span></p><p><span>What if we ask FRT to tell us if the government has ever recorded and stored an image of a given person? That’s what U.S. Immigration and Customs Enforcement <a href="https://illinoisattorneygeneral.gov/News-Room/Current-News/001%20-%20Complaint%201.12.26.pdf?language_id=1" target="_blank">agents have done since June 2025</a>, using the Mobile Fortify app. The agency conducted more than 100,000 FRT searches in the first six months.
The size of the potential gallery is at least <a href="https://sam.gov/opp/b016354c5bd045fa92e4886878747dc8/view" target="_blank">1.2 billion images</a>.</span></p><p><span>At that size, assuming even best-case images, the system is likely to return around 1 million false matches, but at a rate at least 10 times as high for darker-skinned people, depending on the subgroup.</span></p><p>Responsible use of this powerful technology would involve independent identity checks, multiple sources of data, and a clear understanding of the error thresholds, says computer scientist <a href="https://www.cics.umass.edu/about/directory/erik-learned-miller" target="_blank">Erik Learned-Miller</a> of the University of Massachusetts Amherst: “<a href="https://spectrum.ieee.org/joy-buolamwini" target="_blank">The care we take</a> in deploying such systems should be proportional to the stakes.”</p>]]></description><pubDate>Mon, 30 Mar 2026 13:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/facial-recognition-gone-wrong</guid><category>Facial-recognition</category><category>Privacy</category><category>Surveillance</category><category>Machine-vision</category><category>Computer-vision</category><dc:creator>Lucas Laursen</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/illustration-34-orange-women-icons-1-blue-man-icon-labels-for-skin-tone-and-gender-comparisons.jpg?id=65407585&amp;width=980"></media:content></item><item><title>5G Non-Terrestrial Networks Enable Ubiquitous Global Connectivity</title><link>https://content.knowledgehub.wiley.com/5g-ntn-takes-flight-technical-overview-of-5g-non-terrestrial-networks/</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/rohde-schwarz-logo.png?id=26851523&width=980"/><br/><br/><p><span>5G covers less than 40% of the world’s landmass. This whitepaper details how 3GPP Release 17 addresses six satellite challenges: delay, Doppler, path loss, polarization, spectrum, and architecture.</span></p><p><strong><span>What Readers Will Learn</span></strong></p><ol><li><span>Why non-terrestrial networks are now integral to the 5G roadmap — Understand how the Third Generation Partnership Project (3GPP) Release 17 incorporates satellite-based connectivity into the 5G system, targeting ubiquitous coverage across maritime, remote, and polar regions where terrestrial networks reach less than 40% of the world’s landmass. Learn the distinction between New Radio non-terrestrial networks for mobile broadband and Internet of Things non-terrestrial networks for low-power machine-type communications.</span></li><li>How satellite constellation design shapes coverage, capacity, and latency — Examine how orbit altitude (low earth orbit, medium earth orbit, geostationary earth orbit), beam footprint geometry, elevation angle, and inclination determine coverage area, round-trip time, and differential delay across user equipment within a single beam.
Explore the trade-offs between transparent bent-pipe and regenerative onboard-processing payload architectures.</li><li>What radio frequency challenges distinguish satellite links from terrestrial propagation — Explore five of the six major technical challenges: high free-space path loss, time-variant Doppler, differential delay across large beam footprints, Faraday rotation of polarization through the ionosphere, and spectrum coexistence between terrestrial and non-terrestrial bands in the S-band and L-band.</li><li>How 5G protocols must adapt to support non-terrestrial connectivity — Learn the specific amendments to hybrid automatic repeat request operation, timing advance control (split into common and user-equipment-specific components), random access procedure timing extensions, discontinuous reception power saving adaptations, earth-fixed tracking area management, conditional handover mechanisms, and feeder link switching for service continuity in a unique propagation environment.</li></ol><p><a href="https://content.knowledgehub.wiley.com/5g-ntn-takes-flight-technical-overview-of-5g-non-terrestrial-networks/" target="_blank">Download this free whitepaper now!</a></p>
<img src="https://spectrum.ieee.org/media-library/a-woman-in-a-pink-jacket-with-a-large-button-of-a-teenage-girl-affixed-to-it-stands-in-front-of-a-large-banner-with-the-names-an.jpg?id=65404697&width=1200&height=800&coordinates=183%2C0%2C183%2C0"/><br/><br/><p>In a landmark case, a jury found this week that Meta and YouTube negligently designed their platforms and harmed the plaintiff, a 20-year-old woman referred to as Kaley G.M. The jury agreed with the plaintiff that <a href="https://spectrum.ieee.org/medical-experts-say-addiction-to-technology-is-a-growing-concern" target="_blank">social media is addictive</a> and harmful and was deliberately designed to be that way. This finding aligns with my view as a clinical psychologist: that social media addiction is not a failure of users, but a feature of the platforms themselves. I believe that accountability must extend beyond individuals to the systems and incentives that shape their behavior.</p><div class="rm-embed embed-media"><iframe height="110px" id="noa-web-audio-player" src="https://embed-player.newsoveraudio.com/v4?key=q5m19e&id=https://spectrum.ieee.org/social-media-trial&bgColor=F5F5F5&color=1b1b1c&playColor=1b1b1c&progressBgColor=F5F5F5&progressBorderColor=bdbbbb&titleColor=1b1b1c&timeColor=1b1b1c&speedColor=1b1b1c&noaLinkColor=556B7D&noaLinkHighlightColor=FF4B00&feedbackButton=true" style="border: none" width="100%"></iframe></div><p>In my clinical practice, I regularly see patients struggling with compulsive social media use. Many describe a pattern of “doomscrolling,” often using social media to numb themselves after a long day. Afterwards, they feel guilty and stressed about the time lost yet have had limited success changing this pattern on their own.</p><p><span>It’s easy to understand why scrolling can be so addictive. 
Social media interfaces are built around a powerful behavioral mechanism known as intermittent reinforcement, the strongest and most effective type of reinforcement learning, says </span><a href="https://vivo.brown.edu/display/jbrewer2" target="_blank">Judson Brewer</a><span>, an addiction researcher at Brown University. This is the same mechanism that slot machines rely on: Users never know when the next reward—a shower of quarters, or a slew of likes and comments—will appear. Not all the videos in our feeds captivate us, but if we scroll long enough, we are bound to arrive at one that does. The ongoing search for rewards ensnares us and reinforces itself.</span></p><h2>Why Social Media Feels Addictive </h2><p>Individuals typically struggle on their own to address compulsive social media use. This should be no surprise, as habits are not typically broken through sheer discipline but rather by altering the reinforcement loops that sustain them. Brewer argues that “there’s actually no neuroscientific evidence for the presence of willpower.” Placing the burden to self-regulate solely on users misses the deeper issue: These platforms are engineered to override individual control.</p><p><a href="https://www.hhs.gov/surgeongeneral/reports-and-publications/youth-mental-health/social-media/index.html" target="_blank">A growing body of research</a> identifies social media use and constant digital connectivity as important influences on the growing incidence of adolescent mental health problems. Brewer notes that adolescents are particularly vulnerable, as they are in a “developmental phase” in which reinforcement learning processes are especially strong.
This vulnerability can be exploited by the design features of large social media platforms.</p><h2>How Platforms Are Designed to Maximize Engagement </h2><p><a href="https://www.npr.org/2024/10/11/nx-s1-5150088/the-biggest-findings-from-uncensored-tiktok-lawsuit-documents" target="_blank">NPR uncovered records</a> from a recent lawsuit filed by Kentucky’s attorney general against TikTok. According to these documents, TikTok implemented interface mechanisms such as autoplay, infinite scrolling, and a highly personalized recommendation algorithm that were systematically optimized to maximize user engagement. </p><p>TikTok’s algorithmically tailored “For You” content continuously tracks user behaviors, such as how long a video is watched, whether it is replayed, or quickly skipped. The feed then curates short videos, or reels, for the user based on past scrolling behavior and what is most likely to hold attention.</p><p>These documents show one example of a tech company knowingly designing products to maximize attention. I believe social media companies also have the capacity to reduce addictiveness through intentional design choices.</p><h2>How Governments Are Regulating Social Media</h2><p>The good news is we are not helpless. There are multiple levers for change: how we collectively talk about social media, how our governments regulate its design and access, and how we hold companies accountable for practices that shape user behavior.</p><p>Some countries are moving quickly to set policy around social media use. Australia has imposed a minimum age of 16 for social media accounts, with similar bans <a href="https://techcrunch.com/2026/03/06/social-media-ban-children-countries-list" rel="noopener noreferrer" target="_blank">pending</a> in Denmark, France, and Malaysia.</p><p>These bans typically rely on age verification. 
Users without verified accounts can still passively watch videos on platforms like YouTube, but this approach removes many of the most addictive features, including infinite scroll, personalized feeds, notifications, and systems for followers and likes. At the same time, <a href="https://spectrum.ieee.org/age-verification" target="_self">age verification may cause different problems</a> in the online ecosystem.</p><p>Other countries are targeting social media use in specific contexts. South Korea, for example, <a href="https://www.bbc.com/news/articles/c776ye6lrvzo" rel="noopener noreferrer" target="_blank">banned smartphone use in classrooms</a>. And the United Kingdom is taking a different approach; its <a href="https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/childrens-information/childrens-code-guidance-and-resources/age-appropriate-design-a-code-of-practice-for-online-services/" rel="noopener noreferrer" target="_blank">Age Appropriate Design Code</a> instructs platforms to prioritize children’s safety while designing products. The code includes strong privacy defaults, limits on data collection, and constraints on features that nudge users toward greater engagement.</p><h2>How Social Media Platforms Could Be Redesigned</h2><p>A <a href="https://mhanational.org/wp-content/uploads/2025/03/Breaking-the-Algorithm-report.pdf" rel="noopener noreferrer" target="_blank">report</a> called <em>Breaking the Algorithm</em>, from Mental Health America, argues that social media platforms should shift from maximizing engagement to supporting well-being. It calls for revamping recommendation systems to spot patterns of unhealthy use and adjusting feeds accordingly—for example, by limiting extreme or distressing content. </p><p>The report also argues that users should not have to intentionally opt out of harmful design features. Instead, the safest settings should be the default.
The report supports regulatory measures aimed at limiting features such as autoplay and infinite scroll while enforcing privacy and safety settings. </p><p>Platforms could also give users more control by adding natural speed bumps, such as stopping points or break reminders during scrolling. <a href="https://dl.acm.org/doi/fullHtml/10.1145/3334480.3382810" rel="noopener noreferrer" target="_blank">Research</a> shows that interrupting infinite scroll with prompts such as “Do you want to keep going?” substantially reduces mindless scrolling and improves memory of content.</p><p>Some social media platforms are already experimenting with more ethical engagement. <a href="https://mastodon.social/explore" rel="noopener noreferrer" target="_blank">Mastodon</a>, an open-source, decentralized platform, displays posts chronologically rather than ranking them for engagement, and does not offer algorithmically generated feeds like “For You.” <a href="https://bsky.app/" rel="noopener noreferrer" target="_blank">Bluesky</a> gives users control by letting them customize their own algorithms and toggle between different feed types, such as chronological or topic-based filters.</p><p>In light of the recent verdict, it is time for a national conversation about accountability for social media companies. Individual responsibility will always be important, but so are the mechanisms employed by big tech to shape user behavior. If social media platforms are currently designed to capture attention, they can also be designed to give some of it back. 
</p>]]></description><pubDate>Fri, 27 Mar 2026 19:05:59 +0000</pubDate><guid>https://spectrum.ieee.org/social-media-trial</guid><category>Addiction</category><category>Screen-addiction</category><category>Internet-addiction</category><category>Facebook</category><category>Google</category><category>Social-media</category><dc:creator>Daniel Katz</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-woman-in-a-pink-jacket-with-a-large-button-of-a-teenage-girl-affixed-to-it-stands-in-front-of-a-large-banner-with-the-names-an.jpg?id=65404697&amp;width=980"></media:content></item><item><title>IEEE Professional Development Suite Teaches In-Demand Skills</title><link>https://spectrum.ieee.org/ieee-professional-development-suite</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-woman-in-a-cleansuit-carefully-inspecting-a-semiconductor-wafer-in-a-lab.jpg?id=65416186&width=1200&height=800&coordinates=0%2C208%2C0%2C209"/><br/><br/><p>In today’s technological landscape, the only constant is the rate of obsolescence. As engineers move deeper into the eras of 6G, ubiquitous artificial intelligence, and hyper-miniaturized electronics, a traditional degree is only a starting point.</p><p>To remain competitive in today’s job market, technical specialists must evolve into future-ready professionals by cultivating more than just niche expertise. Success now demands a high degree of adaptive intelligence and strategic communication, allowing specialists to translate complex data into actionable business decisions as industry shifts accelerate.</p><p>To bridge the gap between technical proficiency and organizational leadership, the <a href="https://innovationatwork.ieee.org/professional-development/" rel="noopener noreferrer" target="_blank">IEEE Professional Development Suite</a> offers training programs designed to build the strategic competencies required to navigate this complex landscape. The suite provides deep technical dives into domains such as telecommunications connectivity and microelectronics reliability. Organizations can stay ahead of the curve through informed decision-making and a future-ready workforce.</p><h2>Mastery of electrostatic discharge and 5G networks</h2><p>Within the semiconductor sector, which is <a href="https://www.mckinsey.com/industries/semiconductors/our-insights/semiconductors-have-a-big-opportunity-but-barriers-to-scale-remain" rel="noopener noreferrer" target="_blank">projected to become a US $1 trillion industry by 2030</a>, electrostatic discharge (ESD) is a major reliability challenge.
Because even a microscopic, unnoticed discharge can compromise a semiconductor, ESD issues account for <a href="https://www.escatec.com/blog/esd-electronics-manufacturing" rel="noopener noreferrer" target="_blank">up to one-third of all field failures</a>, according to the <a href="https://www.esda.org/about-us/" rel="noopener noreferrer" target="_blank">EOS/ESD Association</a>.</p><p>IEEE’s targeted training—the online <a href="https://innovationatwork.ieee.org/professional-development/ieee-practical-esd-protection-design/" rel="noopener noreferrer" target="_blank">Practical ESD Protection Design certificate program</a>—equips teams with technical protocols to mitigate the risks and ensure long-term hardware reliability. Specialized ESD <a href="https://spectrum.ieee.org/electrostatic-discharge" target="_self">training</a> has become essential for chip designers and manufacturing professionals seeking to improve discharge control.</p><p>The interactive modules cover theory, real-world case studies, and practical mitigation techniques. The standards-based instruction is aligned with <a href="https://blog.ansi.org/ansi/ansi-esd-s20-20-2021-protection-electronic-parts/" rel="noopener noreferrer" target="_blank">ANSI/ESD S20.20-2021: Protection of Electrical and Electronic Parts</a> and other industry guidelines.</p><p>As 5G network capabilities expand globally, so does the demand for engineers who can master the protocols and procedures required to manage complex telecommunications systems.
The IEEE <a href="https://innovationatwork.ieee.org/professional-development/5g-6g-essential-protocols-and-procedures-training-and-innovation-testbed/" rel="noopener noreferrer" target="_blank">5G/6G Essential Protocols and Procedures Training and Innovation Testbed</a>, in partnership with <a href="https://wraycastle.com/" rel="noopener noreferrer" target="_blank">Wray Castle</a>, takes a deep dive into the 5G network function framework, registration processes, and packet data unit session establishment. The <a href="https://spectrum.ieee.org/ieee-5g-and-6g-training" target="_self">program</a> is designed for system engineers, integrators, and technical professionals responsible for 5G signaling. Stakeholders such as network operators, equipment vendors, regulators, and handset manufacturers could find the program to be beneficial as well.</p><p class="pull-quote"><span>“The IEEE Professional Development Suite ensures that learners are not just keeping pace with change but helping to drive it.”</span></p><p>To bridge the gap between theory and practice, the course includes three months of free access to the <a href="https://spectrum.ieee.org/ieee-5g-and-6g-training" target="_self">IEEE 5G/6G Innovation Testbed</a>. The secure, cloud-based platform offers a private, end-to-end 5G network environment where individuals and teams can gain hands-on experience with critical system signaling and troubleshooting.</p><h2>Leadership training programs</h2><p>Technical knowledge alone is not enough to climb the corporate ladder. To thrive today, engineering leaders must have a strategic vision and people-centric leadership skills.</p><p>The <a href="https://innovationatwork.ieee.org/professional-development/leading-technical-teams/" target="_blank">IEEE Leading Technical Teams</a> training program focuses on the challenges of managing engineers in R&D environments and fostering creative problem-solving through an immersive learning experience. 
It’s designed for professionals who have been in a leadership position for at least six months.</p><p>To help participants gain self-awareness, the program includes a 360-degree assessment that gathers feedback about the individual from peers and direct reports to build a personalized development plan. The goal is to help technical professionals transition from high-performing individual contributors into leaders who drive innovation by inspiring their teams rather than just managing tasks.</p><p>Organizations can enroll groups of 10 or more to learn as a cohort—which can ensure that everyone stays on the same page while setting a training schedule that fits the team’s deadlines.</p><p>In collaboration with the <a href="https://www.business.rutgers.edu/" target="_blank">Rutgers Business School</a>, IEEE offers two mini MBA programs to bridge the gap between technical expertise and executive leadership. The programs offer flexibility to fit the demanding schedules of senior professionals. The online format lets participants engage with content as their time permits, while live virtual office hours with faculty provide opportunities for real-time interaction.</p><p>During the 12-week curriculum of the <a href="https://innovationatwork.ieee.org/professional-development/rutgers-online-mini-mba-for-engineers/" rel="noopener noreferrer" target="_blank">mini MBA for engineers</a>, technical professionals master core competencies such as financial analysis, business strategy, and negotiation to effectively transition into management roles.
Participants learn to evaluate AI through financial modeling and governance frameworks, gaining a practical foundation to lead initiatives that incorporate the technology.</p><p>The programs are offered to individuals as well as to organizations interested in training groups of 10 employees or more.</p><h2>Earning credits that count</h2><p>All the programs within the IEEE Professional Development Suite offer continuing education units and professional development hours.</p><p>Earning globally recognized credits provides a professional advantage, signaling a commitment to growth that often serves as a prerequisite for advancing into senior, lead, or principal roles. Additionally, the credits satisfy annual professional engineering license renewal requirements, ensuring practitioners remain compliant while expanding their capabilities.</p><h2>Why curated content matters</h2><p>Developed by <a href="https://ea.ieee.org" rel="noopener noreferrer" target="_blank">IEEE Educational Activities</a>, the training programs are peer-reviewed and built to align with industry needs. 
By focusing on upskilling (improving current skills) and reskilling (learning new ones), the IEEE Professional Development Suite ensures that learners are not just keeping pace with change but helping to drive it.</p>]]></description><pubDate>Fri, 27 Mar 2026 18:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/ieee-professional-development-suite</guid><category>Ieee-products-and-services</category><category>Education</category><category>Training</category><category>Ieee-educational-activities</category><category>Careers</category><category>Ieee-professional-development-suite</category><category>Type-ti</category><dc:creator>Angelique Parashis</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-woman-in-a-cleansuit-carefully-inspecting-a-semiconductor-wafer-in-a-lab.jpg?id=65416186&amp;width=980"></media:content></item><item><title>Video Friday: Beep! Beep! Roadrunner Bipedal Bot Breaks the Mold</title><link>https://spectrum.ieee.org/roadrunner-bipedal-robot</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/two-wheeled-balancing-robot-leans-while-rolling-in-an-indoor-testing-lab.png?id=65415603&width=1200&height=800&coordinates=159%2C0%2C159%2C0"/><br/><br/><p><span>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at </span><em>IEEE Spectrum</em><span> robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please </span><a href="mailto:automaton@ieee.org?subject=Robotics%20event%20suggestion%20for%20Video%20Friday">send us your events</a><span> for inclusion.</span></p><h5><a href="https://2026.ieee-icra.org/">ICRA 2026</a>: 1–5 June 2026, VIENNA</h5><h5><a href="https://roboticsconference.org/">RSS 2026</a>: 13–17 July 2026, SYDNEY</h5><h5><a href="https://mrs.fel.cvut.cz/summer-school-2026/">Summer School on Multi-Robot Systems</a>: 29 July–4 August 2026, PRAGUE</h5><p>Enjoy today’s videos!</p><div class="horizontal-rule"></div><div style="page-break-after: always"><span style="display:none"> </span></div><blockquote class="rm-anchors" id="9kae-uame1u"><em>“Roadrunner” is a new bipedal wheeled robot prototype designed for multimodal locomotion. It weighs around 15 kg (33 lb) and can seamlessly switch between its side-by-side and in-line wheel modes and stepping configurations depending on what is required for navigating its environment. The robot’s legs are entirely symmetric, allowing it to point its knees forward or backward, which can be used to avoid obstacles or manage specific movements. A single control policy was trained to handle both side-by-side and in-line driving. 
Several behaviors, including standing up from various ground configurations and balancing on one wheel, were successfully deployed zero-shot on the hardware.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="76bd6c7edd7ff24700dad004edd086aa" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/9kae-UAME1U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://rai-inst.com/">Robotics and AI Institute</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="tyasuwrkv4e">Incredibly (INCREDIBLY!) <a data-linked-post="2657767692" href="https://spectrum.ieee.org/nasa-mars-sample-return" target="_blank">NASA</a> says that this is actually happening.</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="bc72d2ac20028faf8c32287c722f0ce9" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/TYasUWRkv4E?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><blockquote><em>NASA’s SkyFall mission will build on the success of the Ingenuity Mars helicopter, which achieved the first powered, controlled flight on another planet. 
Using a daring midair deployment, SkyFall will deliver a team of next-gen Mars helicopters to scout human landing sites and map subsurface water ice.</em></blockquote><p>[ <a href="https://www.nasa.gov/news-release/nasa-unveils-initiatives-to-achieve-americas-national-space-policy/">NASA</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="jsk-ff2mycg"><em>NASA’s MoonFall mission will blaze a path for future <a data-linked-post="2662067231" href="https://spectrum.ieee.org/video-friday-training-artemis" target="_blank">Artemis</a> missions by sending four highly mobile drones to survey the lunar surface around the Moon’s South Pole ahead of astronauts’ arrival there. MoonFall is built on the legacy of NASA’s Ingenuity Mars Helicopter. The drones will be launched together and released during descent to the surface. They will land and operate independently over the course of a lunar day (14 Earth days) and will be able to explore hard-to-reach areas, including permanently shadowed regions (PSRs), surveying terrain with high-definition optical cameras and other potential instruments.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="24cd6ef18a5608c71e3afdc55a0d2507" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/JsK-ff2Mycg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>For what it’s worth, <a data-linked-post="2671177906" href="https://spectrum.ieee.org/moon-landing-2025" target="_blank">Moon landings</a> have a success rate well under 50%. 
So let’s send some robots there to land over and over!</p><p>[ <a href="https://www.nasa.gov/news-release/nasa-unveils-initiatives-to-achieve-americas-national-space-policy/">NASA</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="hdjiukrfvca"><em>In Science Robotics, researchers from the Tangible Media group led by Professor Hiroshi Ishii, together with colleagues from Politecnico di Bari, present Electrofluidic Fiber Muscles: a new class of artificial muscle fibers for robots and wearables. Unlike the rigid servo motors used in most robots, these fiber-shaped muscles are soft and flexible. They combine electrohydrodynamic (EHD) fiber pumps—slender tubes that move liquid using electric fields to generate pressure silently, with no moving parts—with fluid-filled fiber actuators. These artificial muscles could enable more agile untethered robots, as well as wearable assistive systems with compact actuation integrated directly into textiles.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="401e33c5be7f9feea5a4219dd786d2ab" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/HdjIukrfvcA?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.media.mit.edu/projects/electrofluidicmuscle/overview/">MIT Media Lab</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="xzfzkmq2rrq"><em>In this study, we developed MEVIUS2, an open-source quadruped robot. It is comparable in size to the Boston Dynamics Spot, equipped with two lidars and a C1 camera, and can freely climb stairs and steep slopes! 
All hardware, software, and learning environments are released as open source.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="ef4dd2071d09d4ac4c97d9e6993be2ea" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/xzfZkmQ2rrQ?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://github.com/haraduka/mevius2">MEVIUS2</a> ]</p><p>Thanks, Kento!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="zj07hhjnrto"><em>What goes into preparing for a live performance? Arun highlights the reliability testing that goes into trying a new behavior for Spot.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="075596c69914e064444994a7d74fe2dc" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/zj07hHJnrto?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://bostondynamics.com/">Boston Dynamics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="41kpw6jwxty"><em>In this work, a multirobot planning and control framework is presented and demonstrated with a team of 40 indoor robots, including both ground and aerial robots.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="e8811d7981e9be82f23859aafea31249" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/41kPW6JwXtY?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" 
width="100%"></iframe></span></p><p>That soundtrack, though.</p><p>[ <a href="https://proroklab.github.io/agile-mapf/">GitHub</a> ]</p><p>Thanks, Keisuke!</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="img5a_ykjms"><em>Quadrupedal robots can navigate cluttered environments like their animal counterparts, but their floating-base configuration makes them vulnerable to real-world uncertainties. Controllers that rely only on proprioception (body sensing) must physically collide with obstacles to detect them. Those that add exteroception (vision) need precisely modeled terrain maps that are hard to maintain in the wild. DreamWaQ++ bridges this gap by fusing both modalities through a resilient multimodal reinforcement learning framework. The result: a single controller that handles rough terrains, steep slopes, and high-rise stairs—while gracefully recovering from sensor failures and situations it has never seen before.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="61dd08501e1c8f10d63a43acb5bb2911" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Img5a_yKjMs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>That cliff behavior is slightly uncanny.</p><p>[ <a href="https://dreamwaqpp.github.io/">DreamWaQ++</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="toh8pd4o34u">I take issue with this from iRobot:</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="1d86fae43d52011c45db0102b9fdc86b" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/tOH8pD4O34U?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" 
width="100%"></iframe></span></p><p>While the <a data-linked-post="2650276443" href="https://spectrum.ieee.org/robotic-blimp-could-explore-hidden-chambers-of-great-pyramid" target="_blank">pyramid exploration</a> that iRobot did was very cool, they did it with a custom-made robot designed for a very specific environment. Cleaning your floors is way, way harder. Here’s a bit more detail on the pyramids thing:</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="1b4538cb0137311b0b433425e56096f0" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Pts3w2Pw8F4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.youtube.com/watch?v=Pts3w2Pw8F4">iRobot</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="t1vub0knci4">More robots in the circus, please!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="89aa286bf5c7d16563d9223df6cc3d2b" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/T1VUb0kncI4?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://danielsimu.com/acrobot/">Daniel Simu</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="f2hasoladgm"><em>MIT engineers have designed a wristband that lets wearers control a robotic hand with their own movements. 
By moving their hands and fingers, users can direct a robot to perform specific tasks, or they can manipulate objects in a virtual environment with high-dexterity control.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="88281d6e7db31cc58ef4b327756809b2" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/F2HaSoladgM?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://news.mit.edu/2026/wristband-enables-wearers-control-robotic-hand-with-own-movements-0325">MIT</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="0ozaw6rryie"><em>At <a data-linked-post="2676218078" href="https://spectrum.ieee.org/nvidia-groq-3" target="_blank">Nvidia GTC 2026</a>, we showcased how AI is moving into the physical world. Visitors interacted with robots using voice commands, watching them interpret intent and act in real time—powered by our KinetIQ AI brain.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="95460eeec4fec87fd729fe5aa4314531" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/0oZAw6rryIE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://thehumanoid.ai/">Humanoid</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="7sl93jl8_o8">Props to Sony for its continued support and updates for <a data-linked-post="2670284977" href="https://spectrum.ieee.org/aibo" target="_blank">Aibo</a>!</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="f05e5074c48cd251f832782efa434226" 
style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/7sL93Jl8_O8?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://us.aibo.com/myaibo/">Aibo</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="yd7enmgniei">This robot looks like it could be a little curvier than normal?</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="3be1fe9e24c6ee745f0f1fa7a2a1b201" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Yd7eNmGNIeI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>[ <a href="https://www.limxdynamics.com/en">LimX Dynamics</a> ]</p><div class="horizontal-rule"></div><blockquote class="rm-anchors" id="dncww0qmkce"><em>Developed by Zhejiang Humanoid Robot Innovation Center Co., Ltd., the Naviai Robot is an intelligent cooking device. It can autonomously process ingredients, perform cooking tasks with high accuracy, adjust smart kitchen equipment in real time, and complete postcooking cleaning. 
Equipped with multimodal perception technology, it adapts to daily kitchen environments and ensures safe and stable operation.</em></blockquote><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="f58863823d5082a3e5e104c47b9e68f6" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/dNcWW0qMkcE?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><p>That 7x is doing some heavy lifting.</p><p>[ <a href="https://en.zhejianglab.com/institutescenters/researchunits/interdisciplinaryresearchcenters/researchcenterforintelligentrobot/">Zhejiang Lab</a> ]</p><div class="horizontal-rule"></div><p class="rm-anchors" id="gthxsfhdt8q">This CMU RI Seminar is by Hadas Kress-Gazit from Cornell, on “Formal Methods for Robotics in the Age of Big Data.”</p><p class="shortcode-media shortcode-media-youtube"><span class="rm-shortcode" data-rm-shortcode-id="a0150919b813daa034367d7a41c9d68e" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/gthXSFhDt8Q?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span></p><blockquote><em>Formal methods—mathematical techniques for describing systems, capturing requirements, and providing guarantees—have been used to synthesize robot control from high-level specification, and to verify robot behavior. Given the recent advances in robot learning and data-driven models, what role can, and should, formal methods play in advancing robotics? 
In this talk I will give a few examples for what we can do with formal methods, discuss their promise and challenges, and describe the synergies I see with data-driven approaches.</em></blockquote><p>[ <a href="https://www.ri.cmu.edu/event/formal-methods-for-robotics-in-the-age-of-big-data/">Carnegie Mellon University Robotics Institute</a> ]</p><div class="horizontal-rule"></div>]]></description><pubDate>Fri, 27 Mar 2026 16:30:03 +0000</pubDate><guid>https://spectrum.ieee.org/roadrunner-bipedal-robot</guid><category>Video-friday</category><category>Nasa</category><category>Bipedal-robots</category><category>Quadruped-robots</category><category>Artificial-muscles</category><category>Humanoid-robots</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/png" url="https://spectrum.ieee.org/media-library/two-wheeled-balancing-robot-leans-while-rolling-in-an-indoor-testing-lab.png?id=65415603&amp;width=980"></media:content></item><item><title>Andrew Ng: Unbiggen AI</title><link>https://spectrum.ieee.org/andrew-ng-data-centric-ai</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/andrew-ng-listens-during-the-power-of-data-sooner-than-you-think-global-technology-conference-in-brooklyn-new-york-on-wednes.jpg?id=29206806&width=1200&height=800&coordinates=0%2C0%2C0%2C210"/><br/><br/><p><strong><a href="https://en.wikipedia.org/wiki/Andrew_Ng" rel="noopener noreferrer" target="_blank">Andrew Ng</a> has serious street cred</strong> in artificial intelligence. He pioneered the use of graphics processing units (GPUs) to train deep learning models in the late 2000s with his students at <a href="https://stanfordmlgroup.github.io/" rel="noopener noreferrer" target="_blank">Stanford University</a>, cofounded <a href="https://research.google/teams/brain/" rel="noopener noreferrer" target="_blank">Google Brain</a> in 2011, and then served for three years as chief scientist for <a href="https://ir.baidu.com/" rel="noopener noreferrer" target="_blank">Baidu</a>, where he helped build the Chinese tech giant’s AI group. So when he says he has identified the next big shift in artificial intelligence, people listen. And that’s what he told <em>IEEE Spectrum</em> in an exclusive Q&A.</p><hr/><p>
	Ng’s current efforts are focused on his company 
	<a href="https://landing.ai/about/" rel="noopener noreferrer" target="_blank">Landing AI</a>, which built a platform called LandingLens to help manufacturers improve visual inspection with computer vision. He has also become something of an evangelist for what he calls the <a href="https://www.youtube.com/watch?v=06-AZXmwHjo" target="_blank">data-centric AI movement</a>, which he says can yield “small data” solutions to big issues in AI, including model efficiency, accuracy, and bias.
</p><p>
	Andrew Ng on...
</p><ul>
<li><a href="#big">What’s next for really big models</a></li>
<li><a href="#career">The career advice he didn’t listen to</a></li>
<li><a href="#defining">Defining the data-centric AI movement</a></li>
<li><a href="#synthetic">Synthetic data</a></li>
<li><a href="#work">Why Landing AI asks its customers to do the work</a></li>
</ul><p>
<strong>The great advances in deep learning over the past decade or so have been powered by ever-bigger models crunching ever-bigger amounts of data. Some people argue that that’s an <a href="https://spectrum.ieee.org/deep-learning-computational-cost" target="_self">unsustainable trajectory</a>. Do you agree that it can’t go on that way?</strong>
</p><p>
<strong>Andrew Ng: </strong>This is a big question. We’ve seen foundation models in NLP [natural language processing]. I’m excited about NLP models getting even bigger, and also about the potential of building foundation models in computer vision. I think there’s lots of signal to still be exploited in video: We have not been able to build foundation models yet for video because of compute bandwidth and the cost of processing video, as opposed to tokenized text. So I think that this engine of scaling up deep learning algorithms, which has been running for something like 15 years now, still has steam in it. Having said that, it only applies to certain problems, and there’s a set of other problems that need small data solutions.
</p><p>
<strong>When you say you want a foundation model for computer vision, what do you mean by that?</strong>
</p><p>
<strong>Ng:</strong> This is a term coined by <a href="https://cs.stanford.edu/~pliang/" rel="noopener noreferrer" target="_blank">Percy Liang</a> and <a href="https://crfm.stanford.edu/" rel="noopener noreferrer" target="_blank">some of my friends at Stanford</a> to refer to very large models, trained on very large data sets, that can be tuned for specific applications. For example, <a href="https://spectrum.ieee.org/open-ais-powerful-text-generating-tool-is-ready-for-business" target="_self">GPT-3</a> is an example of a foundation model [for NLP]. Foundation models offer a lot of promise as a new paradigm in developing machine learning applications, but also challenges in terms of making sure that they’re reasonably fair and free from bias, especially if many of us will be building on top of them.
</p><p>
<strong>What needs to happen for someone to build a foundation model for video?</strong>
</p><p>
<strong>Ng:</strong> I think there is a scalability problem. The compute power needed to process the large volume of images for video is significant, and I think that’s why foundation models have arisen first in NLP. Many researchers are working on this, and I think we’re seeing early signs of such models being developed in computer vision. But I’m confident that if a semiconductor maker gave us 10 times more processor power, we could easily find 10 times more video to build such models for vision.
</p><p>
	Having said that, a lot of what’s happened over the past decade is that deep learning has happened in consumer-facing companies that have large user bases, sometimes billions of users, and therefore very large data sets. While that paradigm of machine learning has driven a lot of economic value in consumer software, I find that that recipe of scale doesn’t work for other industries.
</p><p>
<a href="#top">Back to top</a>
</p><p>
<strong>It’s funny to hear you say that, because your early work was at a consumer-facing company with millions of users.</strong>
</p><p>
<strong>Ng: </strong>Over a decade ago, when I proposed starting the <a href="https://research.google/teams/brain/" rel="noopener noreferrer" target="_blank">Google Brain</a> project to use Google’s compute infrastructure to build very large neural networks, it was a controversial step. One very senior person pulled me aside and warned me that starting Google Brain would be bad for my career. I think he felt that the action couldn’t just be in scaling up, and that I should instead focus on architecture innovation.
</p><p class="pull-quote">
	“In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.”<br/>
	—Andrew Ng, CEO & Founder, Landing AI
</p><p>
	I remember when my students and I published the first 
	<a href="https://nips.cc/" rel="noopener noreferrer" target="_blank">NeurIPS</a> workshop paper advocating using <a href="https://developer.nvidia.com/cuda-zone" rel="noopener noreferrer" target="_blank">CUDA</a>, a platform for processing on GPUs, for deep learning—a different senior person in AI sat me down and said, “CUDA is really complicated to program. As a programming paradigm, this seems like too much work.” I did manage to convince him; the other person I did not convince.
</p><p>
<strong>I expect they’re both convinced now.</strong>
</p><p>
<strong>Ng:</strong> I think so, yes.
</p><p>
	Over the past year as I’ve been speaking to people about the data-centric AI movement, I’ve been getting flashbacks to when I was speaking to people about deep learning and scalability 10 or 15 years ago. In the past year, I’ve been getting the same mix of “there’s nothing new here” and “this seems like the wrong direction.”
</p><p>
<a href="#top">Back to top</a>
</p><p>
<strong>How do you define data-centric AI, and why do you consider it a movement?</strong>
</p><p>
<strong>Ng:</strong> Data-centric AI is the discipline of systematically engineering the data needed to successfully build an AI system. For an AI system, you have to implement some algorithm, say a neural network, in code and then train it on your data set. The dominant paradigm over the last decade was to download the data set while you focus on improving the code. Thanks to that paradigm, over the last decade deep learning networks have improved significantly, to the point where for a lot of applications the code—the neural network architecture—is basically a solved problem. So for many practical applications, it’s now more productive to hold the neural network architecture fixed, and instead find ways to improve the data.
</p><p>
	When I started speaking about this, there were many practitioners who, completely appropriately, raised their hands and said, “Yes, we’ve been doing this for 20 years.” This is the time to take the things that some individuals have been doing intuitively and turn them into a systematic engineering discipline.
</p><p>
	The data-centric AI movement is much bigger than one company or group of researchers. My collaborators and I organized a 
	<a href="https://neurips.cc/virtual/2021/workshop/21860" rel="noopener noreferrer" target="_blank">data-centric AI workshop at NeurIPS</a>, and I was really delighted at the number of authors and presenters that showed up.
</p><p>
<strong>You often talk about companies or institutions that have only a small amount of data to work with. How can data-centric AI help them?</strong>
</p><p>
<strong>Ng: </strong>You hear a lot about vision systems built with millions of images—I once built a face recognition system using 350 million images. Architectures built for hundreds of millions of images don’t work with only 50 images. But it turns out, if you have 50 really good examples, you can build something valuable, like a defect-inspection system. In many industries where giant data sets simply don’t exist, I think the focus has to shift from big data to good data. Having 50 thoughtfully engineered examples can be sufficient to explain to the neural network what you want it to learn.
</p><p>
<strong>When you talk about training a model with just 50 images, does that really mean you’re taking an existing model that was trained on a very large data set and fine-tuning it? Or do you mean a brand new model that’s designed to learn only from that small data set?</strong>
</p><p>
<strong>Ng: </strong>Let me describe what Landing AI does. When doing visual inspection for manufacturers, we often use our own flavor of <a href="https://developers.arcgis.com/python/guide/how-retinanet-works/" rel="noopener noreferrer" target="_blank">RetinaNet</a>. It is a pretrained model. Having said that, the pretraining is a small piece of the puzzle. What’s a bigger piece of the puzzle is providing tools that enable the manufacturer to pick the right set of images [to use for fine-tuning] and label them in a consistent way. There’s a very practical problem we’ve seen spanning vision, NLP, and speech, where even human annotators don’t agree on the appropriate label. For big data applications, the common response has been: If the data is noisy, let’s just get a lot of data and the algorithm will average over it. But if you can develop tools that flag where the data’s inconsistent and give you a very targeted way to improve the consistency of the data, that turns out to be a more efficient way to get a high-performing system.
</p><p class="pull-quote">
	“Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.”<br/>
	—Andrew Ng
</p><p>
	For example, if you have 10,000 images where 30 images are of one class, and those 30 images are labeled inconsistently, one of the things we do is build tools to draw your attention to the subset of data that’s inconsistent. So you can very quickly relabel those images to be more consistent, and this leads to improvement in performance.
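</p><p>
	As a concrete sketch of the kind of tool Ng describes, the check below flags images whose annotators disagree, surfacing the subset worth relabeling first. It is a minimal illustration in plain Python; the function name and sample annotations are invented for the example and are not LandingLens code.
</p>

```python
from collections import defaultdict

def flag_inconsistent(labels):
    """Group (image_id, label) pairs and return the image_ids whose
    annotators disagree -- the subset worth relabeling first."""
    by_image = defaultdict(set)
    for image_id, label in labels:
        by_image[image_id].add(label)
    return sorted(img for img, seen in by_image.items() if len(seen) > 1)

# Hypothetical annotations: each image may be labeled by several annotators.
annotations = [
    ("img_001", "scratch"), ("img_001", "scratch"),
    ("img_002", "dent"),    ("img_002", "scratch"),   # disagreement
    ("img_003", "pit"),
]
print(flag_inconsistent(annotations))  # ['img_002']
```

<p>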
</p><p>
<strong>Could this focus on high-quality data help with bias in data sets? If you’re able to curate the data more before training?</strong>
</p><p>
<strong>Ng:</strong> Very much so. Many researchers have pointed out that biased data is one factor among many leading to biased systems. There have been many thoughtful efforts to engineer the data. At the NeurIPS workshop, <a href="https://www.cs.princeton.edu/~olgarus/" rel="noopener noreferrer" target="_blank">Olga Russakovsky</a> gave a really nice talk on this. At the main NeurIPS conference, I also really enjoyed <a href="https://neurips.cc/virtual/2021/invited-talk/22281" rel="noopener noreferrer" target="_blank">Mary Gray’s presentation,</a> which touched on how data-centric AI is one piece of the solution, but not the entire solution. New tools like <a href="https://www.microsoft.com/en-us/research/project/datasheets-for-datasets/" rel="noopener noreferrer" target="_blank">Datasheets for Datasets</a> also seem like an important piece of the puzzle.
</p><p>
	One of the powerful tools that data-centric AI gives us is the ability to engineer a subset of the data. Imagine training a machine-learning system and finding that its performance is okay for most of the data set, but its performance is biased for just a subset of the data. If you try to change the whole neural network architecture to improve the performance on just that subset, it’s quite difficult. But if you can engineer a subset of the data you can address the problem in a much more targeted way.
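</p><p>
	One simple way to “engineer a subset of the data” without touching the architecture is to oversample the underperforming slice at training time. A toy sketch, in which the dataset, predicate, and factor are all hypothetical:
</p>

```python
def oversample_slice(dataset, predicate, factor):
    """Duplicate examples matching `predicate` so the model sees the
    underperforming subset more often, leaving the architecture fixed."""
    extra = [ex for ex in dataset if predicate(ex)] * (factor - 1)
    return dataset + extra

# Toy dataset of (example, group) pairs; group2 is the weak slice.
data = [("a", "group1"), ("b", "group1"), ("c", "group2")]
balanced = oversample_slice(data, lambda ex: ex[1] == "group2", factor=3)
print(len(balanced))  # 5
```

<p>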
</p><p>
<strong>When you talk about engineering the data, what do you mean exactly?</strong>
</p><p>
<strong>Ng: </strong>In AI, data cleaning is important, but the way the data has been cleaned has often been in very manual ways. In computer vision, someone may visualize images through a <a href="https://jupyter.org/" rel="noopener noreferrer" target="_blank">Jupyter notebook</a> and maybe spot the problem, and maybe fix it. But I’m excited about tools that allow you to have a very large data set, tools that draw your attention quickly and efficiently to the subset of data where, say, the labels are noisy. Or to quickly bring your attention to the one class among 100 classes where it would benefit you to collect more data. Collecting more data often helps, but if you try to collect more data for everything, that can be a very expensive activity.
</p><p>
	For example, I once figured out that a speech-recognition system was performing poorly when there was car noise in the background. Knowing that allowed me to collect more data with car noise in the background, rather than trying to collect more data for everything, which would have been expensive and slow.
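</p><p>
	The car-noise diagnosis Ng describes amounts to slicing error rates by condition tag and collecting data only for the worst slice. A minimal sketch, with hypothetical tags and results:
</p>

```python
def error_rate_by_slice(examples):
    """examples: (slice_tag, correct) pairs; returns slice -> error rate."""
    totals, errors = {}, {}
    for tag, correct in examples:
        totals[tag] = totals.get(tag, 0) + 1
        if not correct:
            errors[tag] = errors.get(tag, 0) + 1
    return {tag: errors.get(tag, 0) / totals[tag] for tag in totals}

# Hypothetical evaluation results tagged by background condition.
results = [
    ("car_noise", False), ("car_noise", False), ("car_noise", True),
    ("quiet", True), ("quiet", True), ("quiet", False),
    ("street", True), ("street", True),
]
rates = error_rate_by_slice(results)
worst = max(rates, key=rates.get)       # the slice to collect more data for
print(worst, round(rates[worst], 2))    # car_noise 0.67
```

<p>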
</p><p>
<a href="#top">Back to top</a>
</p><p>
<strong>What about using synthetic data, is that often a good solution?</strong>
</p><p>
<strong>Ng: </strong>I think synthetic data is an important tool in the tool chest of data-centric AI. At the NeurIPS workshop, <a href="https://tensorlab.cms.caltech.edu/users/anima/" rel="noopener noreferrer" target="_blank">Anima Anandkumar</a> gave a great talk that touched on synthetic data. I think there are important uses of synthetic data that go beyond just being a preprocessing step for increasing the data set for a learning algorithm. I’d love to see more tools to let developers use synthetic data generation as part of the closed loop of iterative machine learning development.
</p><p>
<strong>Do you mean that synthetic data would allow you to try the model on more data sets?</strong>
</p><p>
<strong>Ng: </strong>Not really. Here’s an example. Let’s say you’re trying to detect defects in a smartphone casing. There are many different types of defects on smartphones. It could be a scratch, a dent, pit marks, discoloration of the material, or other types of blemishes. If you train the model and then find through error analysis that it’s doing well overall but it’s performing poorly on pit marks, then synthetic data generation allows you to address the problem in a more targeted way. You could generate more data just for the pit-mark category.
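</p><p>
	That targeted workflow can be sketched as a per-class error analysis that picks the weakest defect category as the one to synthesize more data for. The class names and predictions below are illustrative, not from a real inspection system:
</p>

```python
def recall_by_class(pairs):
    """pairs: (true_label, predicted_label); returns recall per defect class."""
    seen, hit = {}, {}
    for true, pred in pairs:
        seen[true] = seen.get(true, 0) + 1
        if pred == true:
            hit[true] = hit.get(true, 0) + 1
    return {c: hit.get(c, 0) / seen[c] for c in seen}

# Hypothetical (true, predicted) labels from error analysis.
preds = [
    ("scratch", "scratch"), ("scratch", "scratch"),
    ("dent", "dent"),
    ("pit", "scratch"), ("pit", "dent"), ("pit", "pit"),
]
recalls = recall_by_class(preds)
target = min(recalls, key=recalls.get)    # class to generate synthetic data for
print(target, round(recalls[target], 2))  # pit 0.33
```

<p>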
</p><p class="pull-quote">
	“In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models.”<br/>
	—Andrew Ng
</p><p>
	Synthetic data generation is a very powerful tool, but there are many simpler tools that I will often try first, such as data augmentation, improving labeling consistency, or just asking a factory to collect more data.
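The labeling-consistency check Ng mentions is often the cheapest first step: have two labelers annotate the same images and review the disagreements. A minimal sketch, with invented labels:

```python
# Labeling-consistency check: surface images where two labelers disagree
# so the labeling guidelines can be tightened. Labels are invented.

def label_disagreements(labels_a, labels_b):
    """Indices where the two labelers disagree."""
    return [i for i, (a, b) in enumerate(zip(labels_a, labels_b)) if a != b]

labeler_1 = ["scratch", "dent", "pit_mark", "ok", "scratch"]
labeler_2 = ["scratch", "pit_mark", "pit_mark", "ok", "dent"]

print(label_disagreements(labeler_1, labeler_2))  # -> [1, 4]
```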
</p><p>
<strong>To make these issues more concrete, can you walk me through an example? When a company approaches <a href="https://landing.ai/" rel="noopener noreferrer" target="_blank">Landing AI</a> and says it has a problem with visual inspection, how do you onboard them and work toward deployment?</strong>
</p><p>
<strong>Ng: </strong>When a customer approaches us we usually have a conversation about their inspection problem and look at a few images to verify that the problem is feasible with computer vision. Assuming it is, we ask them to upload the data to the <a href="https://landing.ai/platform/" rel="noopener noreferrer" target="_blank">LandingLens</a> platform. We often advise them on the methodology of data-centric AI and help them label the data.
</p><p>
	One of the foci of Landing AI is to empower manufacturing companies to do the machine learning work themselves. A lot of our work is making sure the software is fast and easy to use. Through the iterative process of machine learning development, we advise customers on things like how to train models on the platform, and when and how to improve the labeling of data so the performance of the model improves. Our training and software support them all the way through deploying the trained model to an edge device in the factory.
</p><p>
<strong>How do you deal with changing needs? If products change or lighting conditions change in the factory, can the model keep up?</strong>
</p><p>
<strong>Ng:</strong> It varies by manufacturer. There is data drift in many contexts. But there are some manufacturers that have been running the same manufacturing line for 20 years now with few changes, so they don’t expect changes in the next five years. Those stable environments make things easier. For other manufacturers, we provide tools to flag when there’s a significant data-drift issue. I find it really important to empower manufacturing customers to correct data, retrain, and update the model. Because if something changes and it’s 3 a.m. in the United States, I want them to be able to adapt their learning algorithm right away to maintain operations.
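Drift flagging of the kind described here can start very simply: compare a live window of some summary statistic against a reference window and flag large shifts. This is a generic sketch with invented numbers, not Landing AI's actual tooling.

```python
import statistics

# Generic drift-flagging sketch: compare a live window of a summary
# statistic (e.g., mean image brightness) against a reference window and
# flag when the shift is large relative to the reference spread.

def drift_flag(reference, live, z_threshold=3.0):
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    return abs(statistics.mean(live) - mu) > z_threshold * sigma

reference = [0.50, 0.52, 0.49, 0.51, 0.50, 0.48, 0.51]
stable    = [0.50, 0.49, 0.52, 0.51]
drifted   = [0.70, 0.72, 0.69, 0.71]   # e.g., factory lighting changed

print(drift_flag(reference, stable))   # -> False
print(drift_flag(reference, drifted))  # -> True
```

Production systems typically use distribution-level tests per feature rather than a single mean shift, but the retrain-and-redeploy loop they trigger is the same.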
</p><p>
	In the consumer software Internet, we could train a handful of machine-learning models to serve a billion users. In manufacturing, you might have 10,000 manufacturers building 10,000 custom AI models. The challenge is, how do you do that without Landing AI having to hire 10,000 machine learning specialists?
</p><p>
<strong>So you’re saying that to make it scale, you have to empower customers to do a lot of the training and other work.</strong>
</p><p>
<strong>Ng: </strong>Yes, exactly! This is an industry-wide problem in AI, not just in manufacturing. Look at health care. Every hospital has its own slightly different format for electronic health records. How can every hospital train its own custom AI model? Expecting every hospital’s IT personnel to invent new neural-network architectures is unrealistic. The only way out of this dilemma is to build tools that empower the customers to build their own models by giving them tools to engineer the data and express their domain knowledge. That’s what Landing AI is executing in computer vision, and the field of AI needs other teams to execute this in other domains.
</p><p>
<strong>Is there anything else you think it’s important for people to understand about the work you’re doing or the data-centric AI movement?</strong>
</p><p>
<strong>Ng: </strong>In the last decade, the biggest shift in AI was a shift to deep learning. I think it’s quite possible that in this decade the biggest shift will be to data-centric AI. With the maturity of today’s neural network architectures, I think for a lot of the practical applications the bottleneck will be whether we can efficiently get the data we need to develop systems that work well. The data-centric AI movement has tremendous energy and momentum across the whole community. I hope more researchers and developers will jump in and work on it.
</p><p><em>This article appears in the April 2022 print issue as “Andrew Ng, AI Minimalist</em><em>.”</em></p>]]></description><pubDate>Wed, 09 Feb 2022 15:31:12 +0000</pubDate><guid>https://spectrum.ieee.org/andrew-ng-data-centric-ai</guid><category>Deep-learning</category><category>Artificial-intelligence</category><category>Andrew-ng</category><category>Type-cover</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/andrew-ng-listens-during-the-power-of-data-sooner-than-you-think-global-technology-conference-in-brooklyn-new-york-on-wednes.jpg?id=29206806&amp;width=980"></media:content></item><item><title>How AI Will Change Chip Design</title><link>https://spectrum.ieee.org/ai-chip-design-matlab</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/layered-rendering-of-colorful-semiconductor-wafers-with-a-bright-white-light-sitting-on-one.jpg?id=29285079&width=1200&height=800&coordinates=0%2C0%2C0%2C0"/><br/><br/><p>The end of <a href="https://spectrum.ieee.org/on-beyond-moores-law-4-new-laws-of-computing" target="_self">Moore’s Law</a> is looming. Engineers and designers can do only so much to <a href="https://spectrum.ieee.org/ibm-introduces-the-worlds-first-2nm-node-chip" target="_self">miniaturize transistors</a> and <a href="https://spectrum.ieee.org/cerebras-giant-ai-chip-now-has-a-trillions-more-transistors" target="_self">pack as many of them as possible into chips</a>. So they’re turning to other approaches to chip design, incorporating technologies like AI into the process.</p><p>Samsung, for instance, is <a href="https://spectrum.ieee.org/processing-in-dram-accelerates-ai" target="_self">adding AI to its memory chips</a> to enable processing in memory, thereby saving energy and speeding up machine learning. Speaking of speed, Google’s TPU V4 AI chip has <a href="https://spectrum.ieee.org/heres-how-googles-tpu-v4-ai-chip-stacked-up-in-training-tests" target="_self">doubled its processing power</a> compared with that of  its previous version.</p><p>But AI holds still more promise and potential for the semiconductor industry. To better understand how AI is set to revolutionize chip design, we spoke with <a href="https://www.linkedin.com/in/heather-gorr-phd" rel="noopener noreferrer" target="_blank">Heather Gorr</a>, senior product manager for <a href="https://www.mathworks.com/" rel="noopener noreferrer" target="_blank">MathWorks</a>’ MATLAB platform.</p><p><strong>How is AI currently being used to design the next generation of chips?</strong></p><p><strong>Heather Gorr:</strong> AI is such an important technology because it’s involved in most parts of the cycle, including the design and manufacturing process. 
There are a lot of important applications here, even in the general process engineering where we want to optimize things. I think defect detection is a big one at all phases of the process, especially in manufacturing. But even thinking ahead in the design process, [AI now plays a significant role] when you’re designing the light and the sensors and all the different components. There’s a lot of anomaly detection and fault mitigation that you really want to consider.</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-resized-container rm-resized-container-25 rm-float-left" data-rm-resized-container="25%" style="float: left;">
<img alt="Portrait of a woman with blonde-red hair smiling at the camera" class="rm-shortcode rm-resized-image" data-rm-shortcode-id="1f18a02ccaf51f5c766af2ebc4af18e1" data-rm-shortcode-name="rebelmouse-image" id="2dc00" loading="lazy" src="https://spectrum.ieee.org/media-library/portrait-of-a-woman-with-blonde-red-hair-smiling-at-the-camera.jpg?id=29288554&width=980" style="max-width: 100%"/>
<small class="image-media media-caption" placeholder="Add Photo Caption..." style="max-width: 100%;">Heather Gorr</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..." style="max-width: 100%;">MathWorks</small></p><p>Then, thinking about the logistical modeling that you see in any industry, there is always planned downtime that you want to mitigate; but you also end up having unplanned downtime. So, looking back at that historical data of when you’ve had those moments where maybe it took a bit longer than expected to manufacture something, you can take a look at all of that data and use AI to try to identify the proximate cause or to see  something that might jump out even in the processing and design phases. We think of AI oftentimes as a predictive tool, or as a robot doing something, but a lot of times you get a lot of insight from the data through AI.</p><p><strong>What are the benefits of using AI for chip design?</strong></p><p><strong>Gorr:</strong> Historically, we’ve seen a lot of physics-based modeling, which is a very intensive process. We want to do a <a href="https://en.wikipedia.org/wiki/Model_order_reduction" rel="noopener noreferrer" target="_blank">reduced order model</a>, where instead of solving such a computationally expensive and extensive model, we can do something a little cheaper. You could create a surrogate model, so to speak, of that physics-based model, use the data, and then do your parameter sweeps, your optimizations, your <a href="https://www.ibm.com/cloud/learn/monte-carlo-simulation" rel="noopener noreferrer" target="_blank">Monte Carlo simulations</a> using the surrogate model. That takes a lot less time computationally than solving the physics-based equations directly. 
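The surrogate workflow Gorr outlines can be illustrated with a toy model: sample an "expensive" function sparsely, build a cheap interpolant, and run the Monte Carlo sweep on the interpolant. This is a hedged stand-in, not MathWorks' implementation; all functions and numbers are invented.

```python
import bisect
import math
import random

# Toy illustration of the surrogate-model workflow: sample the "expensive"
# physics model at a few design points, build a cheap piecewise-linear
# surrogate, and run the Monte Carlo sweep on the surrogate instead.

def physics_model(x):
    """Stand-in for an expensive simulation (imagine minutes per call)."""
    return math.sin(x) * math.exp(-0.1 * x)

xs = [i * 0.25 for i in range(41)]   # only 41 expensive calls on [0, 10]
ys = [physics_model(x) for x in xs]

def surrogate(x):
    """Piecewise-linear interpolation of the sampled points."""
    j = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[j]) / (xs[j + 1] - xs[j])
    return ys[j] + t * (ys[j + 1] - ys[j])

# Monte Carlo sweep: thousands of cheap surrogate calls.
rng = random.Random(42)
samples = [rng.uniform(0.0, 10.0) for _ in range(10_000)]
mc_mean = sum(surrogate(x) for x in samples) / len(samples)
full_mean = sum(physics_model(x) for x in samples) / len(samples)
print(abs(mc_mean - full_mean) < 0.01)  # surrogate tracks the full model
```

In practice a regression or Gaussian-process surrogate would replace the simple interpolation, but the structure, a few expensive evaluations feeding many cheap ones, is the same.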
So, we’re seeing that benefit in many ways, including the efficiency and economy that are the results of iterating quickly on the experiments and the simulations that will really help in the design.</p><p><strong>So it’s like having a digital twin in a sense?</strong></p><p><strong>Gorr:</strong> Exactly. That’s pretty much what people are doing, where you have the physical system model and the experimental data. Then, in conjunction, you have this other model that you could tweak and tune and try different parameters and experiments that let you sweep through all of those different situations and come up with a better design in the end.</p><p><strong>So, it’s going to be more efficient and, as you said, cheaper?</strong></p><p><strong>Gorr:</strong> Yeah, definitely. Especially in the experimentation and design phases, where you’re trying different things. That’s obviously going to yield dramatic cost savings if you’re actually manufacturing and producing [the chips]. You want to simulate, test, experiment as much as possible without making something using the actual process engineering.</p><p><strong>We’ve talked about the benefits. How about the drawbacks?</strong></p><p><strong>Gorr: </strong>The [AI-based experimental models] tend to not be as accurate as physics-based models. Of course, that’s why you do many simulations and parameter sweeps. But that’s also the benefit of having that digital twin, where you can keep that in mind—it’s not going to be as accurate as that precise model that we’ve developed over the years.</p><p>Both chip design and manufacturing are system intensive; you have to consider every little part. And that can be really challenging. It’s a case where you might have models to predict something and different parts of it, but you still need to bring it all together.</p><p>One of the other things to think about too is that you need the data to build the models.
You have to incorporate data from all sorts of different sensors and different sorts of teams, and so that heightens the challenge.</p><p><strong>How can engineers use AI to better prepare and extract insights from hardware or sensor data?</strong></p><p><strong>Gorr: </strong>We always think about using AI to predict something or do some robot task, but you can use AI to come up with patterns and pick out things you might not have noticed before on your own. People will use AI when they have high-frequency data coming from many different sensors, and a lot of times it’s useful to explore the frequency domain and things like data synchronization or resampling. Those can be really challenging if you’re not sure where to start.</p><p>One of the things I would say is, use the tools that are available. There’s a vast community of people working on these things, and you can find lots of examples [of applications and techniques] on <a href="https://github.com/" rel="noopener noreferrer" target="_blank">GitHub</a> or <a href="https://www.mathworks.com/matlabcentral/" rel="noopener noreferrer" target="_blank">MATLAB Central</a>, where people have shared nice examples, even little apps they’ve created. I think many of us are buried in data and just not sure what to do with it, so definitely take advantage of what’s already out there in the community. You can explore and see what makes sense to you, and bring in that balance of domain knowledge and the insight you get from the tools and AI.</p><p><strong>What should engineers and designers consider wh</strong><strong>en using AI for chip design?</strong></p><p><strong>Gorr:</strong> Think through what problems you’re trying to solve or what insights you might hope to find, and try to be clear about that. Consider all of the different components, and document and test each of those different parts. 
Consider all of the people involved, and explain and hand off in a way that is sensible for the whole team.</p><p><strong>How do you think AI will affect chip designers’ jobs?</strong></p><p><strong>Gorr:</strong> It’s going to free up a lot of human capital for more advanced tasks. We can use AI to reduce waste, to optimize the materials, to optimize the design, but then you still have that human involved whenever it comes to decision-making. I think it’s a great example of people and technology working hand in hand. It’s also an industry where all people involved—even on the manufacturing floor—need to have some level of understanding of what’s happening, so this is a great industry for advancing AI because of how we test things and how we think about them before we put them on the chip.</p><p><strong>How do you envision the future of AI and chip design?</strong></p><p><strong>Gorr</strong><strong>:</strong> It’s very much dependent on that human element—involving people in the process and having that interpretable model. We can do many things with the mathematical minutiae of modeling, but it comes down to how people are using it, how everybody in the process is understanding and applying it. Communication and involvement of people of all skill levels in the process are going to be really important. 
We’re going to see less of those superprecise predictions and more transparency of information, sharing, and that digital twin—not only using AI but also using our human knowledge and all of the work that many people have done over the years.</p>]]></description><pubDate>Tue, 08 Feb 2022 14:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/ai-chip-design-matlab</guid><category>Chip-fabrication</category><category>Matlab</category><category>Moores-law</category><category>Chip-design</category><category>Ai</category><category>Digital-twins</category><dc:creator>Rina Diane Caballar</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/layered-rendering-of-colorful-semiconductor-wafers-with-a-bright-white-light-sitting-on-one.jpg?id=29285079&amp;width=980"></media:content></item><item><title>Atomically Thin Materials Significantly Shrink Qubits</title><link>https://spectrum.ieee.org/2d-hbn-qubit</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/a-golden-square-package-holds-a-small-processor-sitting-on-top-is-a-metal-square-with-mit-etched-into-it.jpg?id=29281587&width=1200&height=800&coordinates=0%2C0%2C0%2C0"/><br/><br/><p>Quantum computing is a devilishly complex technology, with many technical hurdles impacting its development. Of these challenges, two critical issues stand out: miniaturization and qubit quality.</p><p>IBM has adopted the superconducting qubit road map of <a href="https://spectrum.ieee.org/ibms-envisons-the-road-to-quantum-computing-like-an-apollo-mission" target="_self">reaching a 1,121-qubit processor by 2023</a>, leading to the expectation that 1,000 qubits with today’s qubit form factor is feasible. However, current approaches will require very large chips (50 millimeters on a side, or larger) at the scale of small wafers, or the use of chiplets on multichip modules. While this approach will work, the aim is to attain a better path toward scalability.</p><p>Now researchers at <a href="https://www.nature.com/articles/s41563-021-01187-w" rel="noopener noreferrer" target="_blank">MIT have been able to both reduce the size of the qubits</a> and to do so in a way that reduces the interference that occurs between neighboring qubits. The MIT researchers have increased the number of superconducting qubits that can be added onto a device by a factor of 100.</p><p>“We are addressing both qubit miniaturization and quality,” said <a href="https://equs.mit.edu/william-d-oliver/" rel="noopener noreferrer" target="_blank">William Oliver</a>, the director for the <a href="https://cqe.mit.edu/" target="_blank">Center for Quantum Engineering</a> at MIT. “Unlike conventional transistor scaling, where only the number really matters, for qubits, large numbers are not sufficient, they must also be high-performance. Sacrificing performance for qubit number is not a useful trade in quantum computing.
They must go hand in hand.”</p><p>The key to this big increase in qubit density and reduction of interference comes down to the use of two-dimensional materials, in particular the 2D insulator hexagonal boron nitride (hBN). The MIT researchers demonstrated that a few atomic monolayers of hBN can be stacked to form the insulator in the capacitors of a superconducting qubit.</p><p>Just like other capacitors, the capacitors in these superconducting circuits take the form of a sandwich in which an insulator material is sandwiched between two metal plates. The big difference for these capacitors is that the superconducting circuits can operate only at extremely low temperatures—less than 0.02 degrees above absolute zero (-273.15 °C).</p><p class="shortcode-media shortcode-media-rebelmouse-image rm-resized-container rm-resized-container-25 rm-float-left" data-rm-resized-container="25%" style="float: left;">
<img alt="Golden dilution refrigerator hanging vertically" class="rm-shortcode rm-resized-image" data-rm-shortcode-id="694399af8a1c345e51a695ff73909eda" data-rm-shortcode-name="rebelmouse-image" id="6c615" loading="lazy" src="https://spectrum.ieee.org/media-library/golden-dilution-refrigerator-hanging-vertically.jpg?id=29281593&width=980" style="max-width: 100%"/>
<small class="image-media media-caption" placeholder="Add Photo Caption..." style="max-width: 100%;">Superconducting qubits are measured at temperatures as low as 20 millikelvin in a dilution refrigerator.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit..." style="max-width: 100%;">Nathan Fiske/MIT</small></p><p>In that environment, insulating materials that are available for the job, such as PE-CVD silicon oxide or silicon nitride, have quite a few defects that are too lossy for quantum computing applications. To get around these material shortcomings, most superconducting circuits use what are called coplanar capacitors. In these capacitors, the plates are positioned laterally to one another, rather than on top of one another.</p><p>As a result, the intrinsic silicon substrate below the plates and to a smaller degree the vacuum above the plates serve as the capacitor dielectric. Intrinsic silicon is chemically pure and therefore has few defects, and the large size dilutes the electric field at the plate interfaces, all of which leads to a low-loss capacitor. The lateral size of each plate in this open-face design ends up being quite large (typically 100 by 100 micrometers) in order to achieve the required capacitance.</p><p>In an effort to move away from the large lateral configuration, the MIT researchers embarked on a search for an insulator that has very few defects and is compatible with superconducting capacitor plates.</p><p>“We chose to study hBN because it is the most widely used insulator in 2D material research due to its cleanliness and chemical inertness,” said colead author <a href="https://equs.mit.edu/joel-wang/" rel="noopener noreferrer" target="_blank">Joel Wang</a>, a research scientist in the Engineering Quantum Systems group of the MIT Research Laboratory for Electronics. </p><p>On either side of the hBN, the MIT researchers used the 2D superconducting material, niobium diselenide. 
One of the trickiest aspects of fabricating the capacitors was working with the niobium diselenide, which oxidizes in seconds when exposed to air, according to Wang. This necessitates that the assembly of the capacitor occur in a glove box filled with argon gas.</p><p>While this would seemingly complicate the scaling up of the production of these capacitors, Wang doesn’t regard this as a limiting factor.</p><p>“What determines the quality factor of the capacitor are the two interfaces between the two materials,” said Wang. “Once the sandwich is made, the two interfaces are ‘sealed’ and we don’t see any noticeable degradation over time when exposed to the atmosphere.”</p><p>This lack of degradation is because around 90 percent of the electric field is contained within the sandwich structure, so the oxidation of the outer surface of the niobium diselenide does not play a significant role anymore. This ultimately makes the capacitor footprint much smaller, and it accounts for the reduction in cross talk between the neighboring qubits.</p><p>“The main challenge for scaling up the fabrication will be the wafer-scale growth of hBN and 2D superconductors like [niobium diselenide], and how one can do wafer-scale stacking of these films,” added Wang.</p><p>Wang believes that this research has shown 2D hBN to be a good insulator candidate for superconducting qubits.
He says that the groundwork the MIT team has done will serve as a road map for using other hybrid 2D materials to build superconducting circuits.</p>]]></description><pubDate>Mon, 07 Feb 2022 16:12:05 +0000</pubDate><guid>https://spectrum.ieee.org/2d-hbn-qubit</guid><category>Quantum-computing</category><category>2d-materials</category><category>Ibm</category><category>Qubits</category><category>Hexagonal-boron-nitride</category><category>Superconducting-qubits</category><category>Mit</category><dc:creator>Dexter Johnson</dc:creator><media:content medium="image" type="image/jpeg" url="https://spectrum.ieee.org/media-library/a-golden-square-package-holds-a-small-processor-sitting-on-top-is-a-metal-square-with-mit-etched-into-it.jpg?id=29281587&amp;width=980"></media:content></item></channel></rss>