<?xml version="1.0" encoding="utf-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:media="http://search.yahoo.com/mrss/" version="2.0"><channel><title>IEEE Spectrum</title><link>https://spectrum.ieee.org/</link><description>IEEE Spectrum</description><atom:link href="https://spectrum.ieee.org/feeds/type/podcast.rss" rel="self"/><language>en-us</language><lastBuildDate>Thu, 27 Feb 2025 16:59:59 -0000</lastBuildDate><image><url>https://spectrum.ieee.org/media-library/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpbWFnZSI6Imh0dHBzOi8vYXNzZXRzLnJibC5tcy8yNjg4NDUyMC9vcmlnaW4ucG5nIiwiZXhwaXJlc19hdCI6MTgyNjE0MzQzOX0.N7fHdky-KEYicEarB5Y-YGrry7baoW61oxUszI23GV4/image.png?width=210</url><link>https://spectrum.ieee.org/</link><title>IEEE Spectrum</title></image><itunes:explicit>no</itunes:explicit><itunes:subtitle>IEEE Spectrum</itunes:subtitle><item><title>Using AI to Clear Land Mines in Ukraine</title><link>https://spectrum.ieee.org/clear-land-mines-drones-ai</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/fixing-the-future-podcast-logo.webp?id=52333475&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/5e21cb4d" width="100%"></iframe><p>
<strong>Stephen Cass:</strong> Hello. I’m <a href="https://spectrum.ieee.org/u/stephen-cass" target="_self"><u>Stephen Cass,</u></a> Special Projects Director at <em>IEEE Spectrum</em>. Before starting today’s episode hosted by <a href="https://spectrum.ieee.org/u/eliza-strickland" target="_self"><u>Eliza Strickland</u></a>, I wanted to give you all listening out there some news about this show.
</p><p>
	This is our last episode of <em>Fixing the Future</em>. We’ve really enjoyed bringing you some concrete solutions to some of the world’s toughest problems, but we’ve decided we’d like to be able to go deeper into topics than we can in the course of a single episode. So we’ll be returning later in the year with a program of limited series that will enable us to do those deep dives into fascinating and challenging stories in the world of technology. I want to thank you all for listening and I hope you’ll join us again. And now, on to today’s episode.
</p><p>
<strong>Eliza Strickland: </strong>Hi, I’m Eliza Strickland for <em>IEEE Spectrum</em>’s <em>Fixing the Future</em> podcast. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to <a href="https://spectrum.ieee.org/newsletters/" target="_self"><u>spectrum.ieee.org/newsletters</u></a> to subscribe.
</p><p>
	Around the world, about 60 countries are contaminated with land mines and unexploded ordnance, and Ukraine is the worst off. Today, about a third of its land, an area the size of Florida, is estimated to be contaminated with dangerous explosives. My guest today is <a href="https://www.linkedin.com/in/gabriel-steinberg/" rel="noopener noreferrer" target="_blank"><u>Gabriel Steinberg</u></a>, who co-founded both the nonprofit <a href="https://www.de-mine.com/" rel="noopener noreferrer" target="_blank"><u>Demining Research Community</u></a> and the startup <a href="https://safeproai.com/" rel="noopener noreferrer" target="_blank"><u>Safe Pro AI</u></a> with his friend, <a href="https://eesc.columbia.edu/content/jasper-baur" rel="noopener noreferrer" target="_blank"><u>Jasper Baur</u></a>. Their technology uses drones and artificial intelligence to radically speed up the process of finding land mines and other explosives. Okay, Gabriel, thank you so much for joining me on <em>Fixing the Future</em> today.
</p><p>
<strong>Gabriel Steinberg: </strong>Yeah, thank you for having me.
</p><p>
<strong>Strickland: </strong>So I want to start by hearing about the typical process for demining, the standard operating procedure. What tools do people use? How long does it take? What are the risks involved? All that kind of stuff.
</p><p>
<strong>Steinberg:</strong> Sure. So humanitarian demining hasn’t changed significantly. There have been evolutions, of course, since its inception around the end of World War I. But mostly, the processes have been the same. People stand at a safe location and walk around an area in areas that they know are safe, and try to get as much intelligence about the contamination as they can. They ask villagers or farmers, people who work around the area and live around the area, about accidents and potential sightings of minefields and former battle positions and stuff. The result of this is a very general idea, a polygon, of where the contamination is. After that polygon and some prioritization based on danger to civilians and economic utility, the field goes into clearance. That first part is the non-technical survey, and then comes clearance. Clearance happens one of three ways, usually, but it always ends up with a person on the ground basically doing extreme gardening. They dig out a certain standard amount of the soil, usually 13 centimeters. And with a metal detector and a mine probe, they walk around the field. They find the land mines and unexploded ordnance. So that always is how it ends.
</p><p>
	To get to that point, you can also use mechanical assets, which are large tillers, and sometimes dogs and other animals are used to walk in lanes across the contaminated polygon to sniff out the land mines and tell the clearance operators where the land mines are.
</p><p>
<strong>Strickland: </strong>How do you hope that your technology will change this process?
</p><p>
<strong>Steinberg: </strong>Well, my technology is a drone-based mapping solution, basically. So we provide software to the humanitarian deminers. They are already flying drones over these areas. Really, it started ramping up in Ukraine. The humanitarian demining organizations have started really adopting drones just because it’s such a massive problem. The extent is so extreme that they need to innovate. So we provide AI and mapping software for the deminers to analyze their drone imagery much more effectively. We hope that this process, or our software, will decrease the amount of time that deminers use to analyze the imagery of the land, thereby more quickly and more effectively constraining the areas with the most contamination. So if you can constrain an area, a polygon with a certainty of contamination and a high density of contamination, then you can deploy the most expensive parts of the clearance process, which are the humans and the machines and the dogs. You can deploy them to a very specific area. You can much more cost-effectively and efficiently demine large areas.
</p><p>
<strong>Strickland: </strong>Got it. So it doesn’t replace the humans walking around with metal detectors and dogs, but it gets them to the right spots faster.
</p><p>
<strong>Steinberg: </strong>Exactly. Exactly. At the moment, there is no conception of replacing a human in demining operations, and people that try to push that eventuality are usually disregarded pretty quickly.
</p><p>
<strong>Strickland: </strong>How did you and your co-founder, Jasper, first start experimenting with the use of drones and AI for detecting explosives?
</p><p>
<strong>Steinberg: </strong>So it started in 2016 with my partner, Jasper Baur, doing a research project at Binghamton University in the remote sensing and geophysics lab. And the project was to detect a specific anti-personnel land mine, the<a href="https://en.wikipedia.org/wiki/PFM-1_mine" rel="noopener noreferrer" target="_blank"> <u>PFM-1</u></a>— it’s a Russian-made land mine. It was previously found in Afghanistan. It still is found in Afghanistan, but it’s found in much higher quantities right now in Ukraine. And so his project was to detect the PFM-1 anti-personnel land mine using thermal imagery from drones. It sort of snowballed into quite an intensive research project. It produced multiple papers, multiple researchers, some awards, and most notably, it beat NASA at a particular Tech Briefs competition. So that was quite a morale boost.
</p><p>
	And at some point, Jasper had the idea to integrate AI into the project. Rightfully, he saw the real bottleneck as not the detecting of land mines in drone imagery, but the analysis of land mines in drone imagery. And that really has become— I mean, he knew, somehow, that that would really become the issue that everybody is facing. And everybody we talked to in Ukraine is facing that issue. So machine learning really was the key for solving that problem. And I joined the project in 2018 to integrate machine learning into the research project. We had some more papers, some more presentations, and we were nearing the end of our college tenure, our undergraduate degree, in 2020. But at that time, we realized how much the field needed this. We started getting more and more into the mine action field, and realized how neglected the field was in terms of technology and innovation. And we felt an obligation to bring our technology, really, to the real world instead of leaving it as just a research project. There were plenty of research projects about this, but we knew that it could be more, and that it really should be more. And for some reason, we felt like we had the capability to make that happen.
</p><p>
	So we formed a nonprofit, the Demining Research Community, in 2020 to try to raise some funding for this project. The for-profit end of our endeavors was acquired by a company called Safe Pro Group in 2023. Yeah, 2023, about one year ago exactly. And the drone and AI technology became Safe Pro AI and our flagship product, Spotlight. And that’s where we’re bringing the technology to the real world. The Demining Research Community is providing resources for other organizations who want to do a similar thing, and is doing more research into more nascent technologies. But yeah, the real drone and AI stuff that’s happening in the real world right now is through Safe Pro.
</p><p>
<strong>Strickland: </strong>So in that early undergraduate work, you were using thermal sensors. I know now the Spotlight AI system is relying more on visual imagery. Can you talk about the different modalities of sensing explosives and the sort of trade-offs you get with them?
</p><p>
<strong>Steinberg: </strong>Sure. So I feel like I should preface this by saying the more high tech and nascent the technology is, the more people want to see it applied to land mine detection. But really, we have found from the problems that people are facing, by far the most effective modality right now is just visual imagery. People have really good visual sensors built into their faces, and you don’t need a trained geophysicist to observe the data and very, very quickly get actionable intelligence. There are also plenty of other benefits. It’s cheaper, and much more readily accessible in Ukraine and around the world to get built-in visual sensors on drones. And yeah, just processing the data, and getting the intelligence from the data, is way easier than anything else.
</p><p>
	I’ll talk about three different modalities. Well, I guess I could talk about four. There’s thermal, ground penetrating radar, magnetometry, and lidar. So thermal is what we started with. Thermal is really good at detecting living things, as I’m sure most people can surmise. But it’s also pretty good at detecting land mines, mostly large anti-tank land mines buried under a couple millimeters, or up to a couple centimeters, of soil. It’s not super good at this. The research is still not super conclusive, and you have to do it at a very specific time of day, in the morning and at night, when, basically, the soil around the land mine heats up faster than the land mine and you cause a thermal anomaly, or the sun causes a thermal anomaly. So it can detect things, land mines, in some amount of depth in certain soils, in certain weather conditions, and can only detect certain types of land mines that are big and hefty enough. So yeah, that’s thermal.
</p><p>
	Ground penetrating radar is really good for some things. It’s not really great for land mine detection. You have to have really expensive equipment. It takes a really long time to do the surveys. However, it can get plastic land mines under the surface. And it’s kind of the only modality that can do that with reliability. However, you need trained geophysicists to analyze the data. And a lot of the time, the signatures are really non-unique and there are going to be a lot of false positives. Magnetometry is the other— by the way, all of this is airborne that I’m referring to. Ground-based GPR and magnetometry are used in demining of various types, but airborne is really what I’m talking about.
</p><p>
	For magnetometry, it’s more developed and more capable than ground penetrating radar. It’s used, actually, in the field in Ukraine in some scenarios, but it’s still very expensive. It needs a trained geophysicist to analyze the data, and the signatures are non-unique. So whether it’s a bottle cap or a small anti-personnel land mine, you really don’t know until you dig it up. However, I think if I were to bet on one of the other modalities becoming increasingly useful in the next couple of years, it would be airborne magnetometry.
</p><p>
	Lidar is another modality that people use. It’s pretty quick, also very expensive, but it can reliably map and find surface anomalies. So if you want to find former fighting positions, sometimes an indicator of that is a trench line or foxholes. Lidar is really good at doing that in conflicts from long ago. So there’s a paper that the<a href="https://www.halousa.org/" rel="noopener noreferrer" target="_blank"> <u>HALO Trust</u></a> published of flying<a href="https://www.routescene.com/case-studies/uav-lidar-improves-land-mine-clearance-planning/" rel="noopener noreferrer" target="_blank"> <u>a lidar mission over former fighting positions</u></a>, I believe, in Angola. And they reliably found a former trench line. And from that information, they confirmed that as a hazardous area. Because if there is a former front line on this position, you can pretty reliably say that there is going to be some explosives there.
</p><p>
<strong>Strickland: </strong>And so you’ve done some experiments with some of these modalities, but in the end, you found that the visual sensor was really the best bet for you guys?
</p><p>
<strong>Steinberg: </strong>Yeah. It’s different. The requirements are different for different scenarios and different locations, really. Ukraine has a lot of surface ordnance. Yeah. And that’s really the main factor that allows visual imagery to be so powerful.
</p><p>
<strong>Strickland: </strong>So tell me about what role machine learning plays in your Spotlight AI software system. Did you create a model trained on a lot of— did you create a model based on a lot of data showing land mines on the surface?
</p><p>
<strong>Steinberg: </strong>Yeah. Exactly. We used real-world data from inert, non-explosive items, and flew drone missions over them, and did some physical augmentation and some programmatic augmentation. But all of the items that we are training on are real-life Russian or American ordnance, mostly. We’re also using the real-world data in real minefields that we’re getting from Ukraine right now. That is, obviously, the most valuable data and the most effective in building a machine learning model. But yeah, a lot of our data is from inert explosives, as well.
</p><p>
<strong>Strickland: </strong>So you’ve talked a little bit about the current situation in Ukraine, but can you tell me more about what people are dealing with there? Are there a lot of areas where the battle has moved on and civilians are trying to reclaim roads or fields?
</p><p>
<strong>Steinberg: </strong>Yeah. So the fighting is constantly ongoing, obviously, in eastern Ukraine, but I think sometimes there’s a perspective of a stalemate. I think that’s a little misleading. There’s lots of action and violence happening on the front line, which constantly contaminates, cumulatively, the areas that are the front line and the gray zone, as well as areas up to 50 kilometers back from both sides. So there’s constantly artillery shells going into villages and cities along the front line. There’s constantly land mines, new mines, being laid to reinforce the positions. And there’s constantly mortars. And everything is constant. In some fights—I just watched the video yesterday—one of the soldiers said you could not count to five without an explosion going off. And this is just one location in one city along the front. So you can imagine the amount of explosive ordnance that are being fired, and inevitably 10, 20, 30 percent of them are sometimes not exploding upon impact, on top of all the land mines that are being purposely laid and not detonating from a vehicle or a person. These all just remain after the war. They don’t go anywhere. So yeah, Ukraine is really being littered with explosive ordnance and land mines every day.
</p><p>
	This past year, there hasn’t been terribly much movement on the front line. But in the Ukrainian counteroffensive in 2020— I guess the last major Ukrainian counteroffensive where areas of Mykolaiv, which is in the southeast, were reclaimed, the civilians started repopulating the city almost immediately. There are definitely some villages that are heavily contaminated, that people just deserted and never came back to, and still haven’t come back to after them being liberated. But a lot of the areas that have been liberated, they’re people’s homes. And even if they’re destroyed, people would rather be in their homes than be refugees. And I mean, I totally understand that. And it just puts the responsibility on the deminers and the Ukrainian government to try to clear the land as fast as possible. Because after large liberations are made, people want to come back almost all the time. So it is a very urgent problem as the lines change and as land is liberated.
</p><p>
<strong>Strickland: </strong>And I think it was about a year ago that you and Jasper went to Ukraine for a technology demonstration set up by the United Nations. Can you tell me about that, and what the task was, and how your technology fared?
</p><p>
<strong>Steinberg: </strong>Sure. So yeah, the <a href="https://www.undp.org/" rel="noopener noreferrer" target="_blank"><u>United Nations Development Program</u></a> invited us to do a demonstration in northern Ukraine to see how our technology, and other technologies similar to it, performed in a military training facility in Ukraine. So everybody who’s doing this kind of thing, which is not many people, but there are some other organizations, they have their own metrics and their own test fields— not always, but it would be good if they did. But the UNDP said, “No, we want to standardize this and try to give recommendations to the organizations on the ground who are trying to adopt these technologies.” So we had five hours to survey the field and collect as much data as we could. And then we had 72 hours to return the results. We—
</p><p>
<strong>Strickland: </strong>Sorry. How big was the field?
</p><p>
<strong>Steinberg: </strong>The field was 25 hectares. So yeah, the audience at home can convert 25 hectares into football fields. I think it’s about 60. But it’s a large area. So we’d never done anything like that. That was really, really a shock that it was that large of an area. I think we’d only done half a hectare at a time up to that point. So yeah, it was pretty daunting. But we basically slept very, very little in those 72 hours, and as a result, produced what I think is one of the best results that the UNDP got from that test. We didn’t detect everything, but we detected most of the ordnance and land mines that they had laid. We also detected some that they didn’t know were there, because it was a military training facility. So there were some mortars being fired that they didn’t know about.
</p><p>
<strong>Strickland: </strong>And I think Jasper told me that you had to sort of rewrite your software on the fly. You realized that the existing approach wasn’t going to work and you had to pull some all-nighters to recode?
</p><p>
<strong>Steinberg: </strong>Yeah. Yeah, I remember us sitting in a Georgian restaurant— Georgia, the country, not the state— and racking our brains, trying to figure out how we were going to map this amount of land. We had just found out how big the area was going to be, and we were a little bit stunned. So we devised a plan to do it in two stages. The first stage was where we figured out in the drone images where the contaminated regions were. And then the second stage was to map those areas, just those areas. Now, our software can actually map the whole thing, and pretty casually too. So not to brag. But at the time, we had a lot less development under our belts. And yeah, therefore we just had to brute force it through Georgian food and brainpower.
</p><p>
<strong>Strickland: </strong>You and Jasper just got back from another trip to Ukraine a couple of weeks ago, I think. Can you talk about what you were doing on this trip, and who you met with?
</p><p>
<strong>Steinberg: </strong>Sure. This trip was much less stressful, although stressful in different ways than the UNDP demo. Our main objectives were to see operations in action. We had never actually been to real minefields before. We’d been in some perhaps contaminated areas, but never in a real minefield where you can say, “Here was the Russian position. There are the land mines. Do not go there.” So that was one of the main objectives. That was very powerful for us to see the villages that were destroyed and are denied to the citizens because of land mines and unexploded ordnance. It’s impossible to describe how that feels being there. It’s really impactful, and it makes the work that I’m doing feel not like I have a choice anymore. I feel very much obligated to do my absolute best to help these people.
</p><p>
<strong>Strickland: </strong>Well, I hope your work continues. I hope there’s less and less need for it over time. But yeah, thank you for doing this. It’s important work. And thanks for joining me on <em>Fixing the Future</em>.
</p><p>
<strong>Steinberg: </strong>My pleasure. Thank you for having me.
</p><p>
<strong>Strickland: </strong>That was Gabriel Steinberg speaking to me about the technology that he and Jasper Baur developed to help rid the world of land mines. I’m Eliza Strickland, and I hope you’ll join us next time on <em>Fixing the Future</em>.
</p>]]></description><pubDate>Wed, 29 May 2024 09:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/clear-land-mines-drones-ai</guid><category>Land-mines</category><category>Type-podcast</category><category>Fixing-the-future</category><category>Demining</category><category>Drones</category><category>Machine-learning</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/52333475/origin.webp"/></item><item><title>Never Recharge Your Consumer Electronics Again?</title><link>https://spectrum.ieee.org/dye-solar-charges-consumer-electronics</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/fixing-the-future-podcast-logo.jpg?id=46469653&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/69483bcf" width="100%">
</iframe>
<p>
<strong>Stephen Cass:</strong> Hello and welcome to <em>Fixing the Future</em>, an <em>IEEE Spectrum</em> podcast where we look at concrete solutions to tough problems. I’m your host <a href="https://spectrum.ieee.org/u/stephen-cass" target="_self">Stephen Cass</a>, a senior editor at <em>IEEE Spectrum</em>. And before I start, I just wanted to tell you that you can get the latest coverage of <em>Spectrum</em>’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe.
</p>
<p>
	We all love our mobile devices, where the progress of Moore’s Law has meant we’re able to pack an enormous amount of computing power into something that’s small enough that we can wear it as jewelry. But their Achilles heel is power. They eat up battery life, requiring frequent battery changes or charging. One company that’s hoping to reduce our battery anxiety is <a href="https://www.exeger.com/" rel="noopener noreferrer" target="_blank">Exeger</a>, which wants to enable self-charging devices that convert ambient light into energy on the go. Here to talk about its so-called Powerfoyle solar cell technology is Exeger’s founder and CEO, <a href="https://www.exeger.com/about-us/management/" rel="noopener noreferrer" target="_blank">Giovanni Fili</a>. Giovanni, welcome to the show.
</p>
<p>
<strong>Giovanni Fili: </strong>Thank you.
</p>
<p>
<strong>Cass:</strong> So before we get into the details of the Powerfoyle technology, was I right in saying that the Achilles heel of our mobile devices is battery life? And if we could reduce or eliminate that problem, how would that actually influence the development of mobile and wearable tech beyond just not having to recharge as often?
</p>
<p>
<strong>Fili: </strong>Yeah. I mean, for sure, I think the global common problem or pain point is for sure battery anxiety in different ways, ranging from your mobile phone to your other portable devices, and of course, even EVs like cars and all that. So what we’re doing is we’re trying to reduce or eliminate this battery anxiety by integrating— seamlessly integrating, I should say— a solar cell. So our solar cell can convert any light energy to electrical energy. So indoor, outdoor, from any angle. We’re not angle dependent. And the solar cell can take the shape. It can look like leather, textile, brushed steel, wood, carbon fiber, almost anything, and can take light from all angles as well, and can be in different colors. It’s also very durable. So our idea is to integrate this flexible, thin film into any device and allow it to be self-powered, allowing for increased functionality in the device. Just look at the smartwatches. I mean, the first ones that came, you could wear them for a few hours, and you had to charge them. And they packed them with more functionality. You still have to charge them every day, regardless. But now, they’re packed with even more stuff. So as soon as you get more energy efficiency, you pack them with more functionality. So we’re enabling this sort of jump in functionality without compromising design, battery, sustainability, all of that. So yeah, so it’s been a long journey since I started working with this 17 years ago.
</p>
<p>
<strong>Cass:</strong> I actually wanted to ask about that. So how is Exeger positioned to attack this problem? Because it’s not like you’re the first company to try and do nice mobile charging solutions for mobile devices.
</p>
<p>
<strong>Fili:</strong> I can mention there, I think, that the main thing that differentiates us from all other previous solutions is that we have invented a new electrode material. There’s the anode and the cathode; it’s almost like a battery. So we have anode, cathode. We have electrolyte inside. So this is a—
</p>
<p>
<strong>Cass:</strong> So just for readers who might not be familiar, a battery is basically you have an anode, which is the positive terminal—I hope I didn’t forget that—a cathode, which is the negative terminal, and then you have an electrolyte between them in the battery, and then chemical reactions between these three components, and it can get kind of complicated, produce an electric potential between one side and the other. And in a solar cell, also there’s an anode and a cathode and so on. Have I got that right, in my little, brief sketch?
</p>
<p>
<strong>Fili:</strong> Yeah. Yeah. Yeah. And so what we add to that architecture is we add one layer of titanium dioxide nanoparticles. Titanium dioxide is the white in white wall paint, toothpaste, sunscreen, all that. And it’s a very safe and abundant material. And we use that porous layer of titanium dioxide nanoparticles. And then we deposit a dye, a color, a pigment, on this layer. And this dye can be red, black, blue, green, any kind of color. And the dye will then absorb the photons and excite electrons that are injected into the titanium dioxide layer, and then collected by the anode and then conducted out to the cable. And now, we use the electrons to light a lamp or run a motor or whatever we do with them. And then they return to the cathode on the other side and back inside the cell. So the electrons go the outer way, and the plus, you can say, goes the inner way, as ions in the electrolyte. So it’s a regenerative system.
</p>
<p>
	So our innovation is a new— I mean, all solar cells, they have electrodes to collect the electrons, whether you have silicon wafers or whatever you have, right? And you know that all these solar cells that you’ve seen, they have silver lines crossing the surface. The silver lines are there because the conductivity is quite poor, funny enough, in these materials. So high resistance. So then you need to deposit the silver lines there, and they’re called current collectors. So you need to collect the current. Our innovation is a new electrode material that has 1,000 times better conductivity than other flexible electrode materials. That allows us, as the only company in the world, to eliminate the silver lines. And we print all our layers as well. And as you print in your house, you can print a photo, an apple with a bite in it, you can print a name, you can print anything you want. We can print anything we want, and it will also convert light energy to electric energy. So a solar cell.
</p>
<p>
<strong>Cass: </strong>So the key part is that the color dye is doing that initial work of converting the light. Do different colors affect the efficiency? I did see on your site that it comes in all these kinds of different colors. And I was thinking to myself, well, is the black one the best? Is the red one the best? Or is it relatively insensitive to the visible color that I see when I look at these dyes?
</p>
<p>
<strong>Fili:</strong> So you’re completely right there. So black would give you the most. And if you go to different colors, typically you lose like 20, 30 percent. But fortunately enough for us, over 50 percent of the consumer electronic market is black products. So that’s good. So I think that you asked me how we’re positioned. I mean, with our totally unique integration possibilities, imagine this super thin, flexible film that works all day, every day from morning to sunset, indoor, outdoor, can look like leather. So we’ve made like a leather bag, right? The leather bag is the solar cell. The entire bag is the solar cell. You wouldn’t see it. It just looks like a normal leather bag.
</p>
<p>
<strong>Cass: </strong>So when you talk about flexible, you actually mean this— so sometimes when people talk about flexible electronics, they mean it can be put into a shape, but then you’re not supposed to bend it afterwards. When you’re talking about flexible electronics, you’re talking about the entire thing remaining flexible, and you can use it flexibly, instead of just conforming it once to a shape and then leaving it alone.
</p>
<p>
<strong>Fili: </strong>Correct. So we just recently released a hearing protector with <a href="https://www.3m.com/" rel="noopener noreferrer" target="_blank">3M</a>, this great American company with more than 60,000 products across the world. So we have a global exclusivity contract with them where they have integrated our bendable, flexible solar film in the headband. So the headband is the solar cell, right? And where you previously had to change a disposable battery every second week, two batteries every second week, now you never need to change the battery again. It just recharges a small rechargeable battery, indoor and outdoor; it just continues to charge all the time. And they have added a lot of extra really cool new functionality as well. So we’re eliminating the need for disposable batteries. We’re saving millions and millions of batteries. We’re saving the end user, the contractor, the guy who uses them, a lot of hassle to buy these batteries and store them. And we increase reliability and functionality because they will always be charged. You can trust that they always work. So that’s where we are totally unique. The solar cell is super durable. If we can be in a professional hearing protector used at airports, construction sites, mines, whatever, factories, oil rig platforms, you can do almost anything. So I don’t think any other solar cell would be able to pass those durability tests that we did. It’s crazy.
</p>
<p>
<strong>Cass:</strong> So I have a question. It’s kind of more appropriate to my experience with utility solar cells and things you put on roofs. But how many watts per square meter can you deliver, we’ll say, in direct sunlight?
</p>
<p>
<strong>Fili:</strong> So our focus is on indirect sunlight, like shade, suboptimal light conditions, because that’s where you would typically be with these products. But if you compare to more of a silicon, which is what you typically use for calculators and all that stuff, we are probably around twice what they deliver in these dark conditions, two to three times, depending. If you use glass, if you use flexible, we’re probably three times or even more. So we don’t do full-sunshine, utility-scale solar. But if you look at these products like the hearing protector, we have done a lot of headphones with Adidas and other huge brands, we typically recharge like four times what they use. So if you look at— if you go outside, not in full sunshine, but half sunshine, let’s say 50,000 lux, you’re probably talking about 13, 14 minutes to charge one hour of listening. So yeah, so we have sold a few hundred thousand products over the three years since we started selling commercially. And - I don’t know - I haven’t heard of anyone who has charged since. I mean, surely someone has, but typically the user never needs to charge them again; they just charge themselves.
</p>
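<p>The charging figures quoted above are internally consistent: if the film harvests roughly four times what the headphones consume during playback, an hour of listening is replaced by about a quarter-hour of light. A minimal sketch of that arithmetic (the function and the 4x harvest ratio are hypothetical, taken only from the numbers quoted in the interview, not from any Exeger specification):</p>

```python
# Back-of-envelope check of the charging figures quoted above.
# Assumption (speaker's figure, not a spec): the solar film harvests
# ~4x the playback power draw ("we typically recharge like four times
# what they use").

def minutes_to_recharge(listen_minutes: float, harvest_ratio: float = 4.0) -> float:
    """Minutes of light exposure needed to replace `listen_minutes` of playback."""
    return listen_minutes / harvest_ratio

print(minutes_to_recharge(60))  # -> 15.0, close to the "13, 14 minutes" quoted
```

<p>At the quoted 13 to 14 minutes per hour, the implied harvest ratio is slightly above four, so the round 4x figure is consistent with the charging-time claim.</p>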
<p>
<strong>Cass: </strong>Well, that’s right, because for many years, I went to <a href="https://spectrum.ieee.org/tag/ces" target="_self">CES</a>, and I often would buy these, or acquire these, little solar cell chargers. And it was such a disappointing experience because they really would only work in direct sunlight. And even then, it would take a very long time. So I want to talk a little bit about, then, to get to that, what were some of the biggest challenges you had to overcome on the way to developing this tech?
</p>
<p>
<strong>Fili: </strong>I mean, this is the fourth commercial solar cell technology in the world after 110 or something years of research. I mean, the Americans, Bell Laboratories, sent the first silicon cell, I think it was in like 1955 or something, to space. And then there’s been this constant development and trying to find, but to develop a new energy source is as close to impossible as you get, more or less. Everybody tried and everybody failed. We didn’t know that, luckily enough. So just the whole-- so when I try to explain this, I get this question quite a lot. Imagine you found out something really cool, but there’s no one to ask. There’s no book to read. You just realize, “Okay, I have to make like hundreds of thousands, maybe millions of experiments to learn. And all of them, except finally one, they will all fail. But that’s okay.” You will fail, fail, fail. And then, “Oh, here’s the solution. Something that works. Okay. Good.” So we had to build on just constant failing, but it’s okay because you’re in a research phase. So we had to. I mean, we started off with these new nanomaterials, and then we had to make components of these materials. And then we had to make solar cells of the components, but there were no machines either. We had to invent all the machines from scratch as well to make these components and the solar cells and some of the nanomaterials. That was also tough. How do you design a machine for something that doesn’t exist? It’s a pretty difficult specification to give to a machine builder. So in the end, we had to build our own machine building capacity here. We’re like 50 guys building machines, so.
</p>
<p>
	But now, I mean, today we have over 300 granted patents, another 90 that will be approved soon. We have a complete machine park that’s proprietary. We are now building the largest solar cell factory— one of the largest solar cell factories in Europe. It’s already operational, phase one. Now we’re expanding into phase two. And we’re completely vertically integrated. We don’t source anything from Russia, China; never did. Only US, Japan, and Europe. We run the factories on 100 percent renewable energy. We have zero emissions to air and water. And we don’t have any rare earth metals, no strange stuff in it. It’s like it all worked out. And now we have signed, like I said, global exclusivity deal with 3M. We have a global exclusivity deal with the largest company in the world on computer peripherals, like mouse, keyboard, that stuff. They can only work with us for years. We have signed one of the large, the big fives, the Americans, the huge CE company. Can’t tell you yet the name. We have a globally exclusive deal for electronic shelf labels, the small price tags in the stores. So we have a global solution with Vision Group, that’s the largest. They have 50 percent of the world market as well. And they have Walmart, IKEA, Target, all these huge companies. So now it’s happening. So we’re rolling out, starting to deploy massive volumes later this year.
</p>
<p>
<strong>Cass:</strong> So I’ll talk a little bit about that commercial experience, because you talked about how you had to create verticals. I mean, in <em>Spectrum</em>, we do cover other startups which have had these— they’re kind of starting from scratch. And they develop a technology, and it’s a great demo technology. But then there comes that point where you’re trying to integrate in as a supplier or as a technology partner with a large commercial entity, which has very specific ideas about how things are to be manufactured and delivered and so on. So can you talk a little bit about what it was like adapting to these partners like 3M, and what changes you had to make and what things you learned in that process, where you go from, “Okay, we have a great product and we could make our own small products, but we want to now connect in as part of this larger supply chain”?
</p>
<p>
<strong>Fili:</strong> It’s a very good question and it’s extremely tough. It’s a tough journey, right? Like to your point, these are the largest companies in the world. They have their way. And one of the first really tough lessons that we learned was that one factory wasn’t enough. We had to build two factories to have redundancy in manufacturing. Because single source is bad. Single source, single factory, that’s really bad. So we had to build two factories and we had to show them we were ready, willing and able to be a supplier to them. Because one thing is the product, right? But the second thing is, are you a worthy supplier? And that means how much money you have in the bank. Are you going to be here in two, three, four years? What are your ISO certifications like? REACH, RoHS, Prop 65. What’s your <a href="https://en.wikipedia.org/wiki/Life-cycle_assessment" rel="noopener noreferrer" target="_blank">LCA</a>? What’s your view on this? Blah, blah, blah. Do you have a professional supply chain? Did you do audits on your suppliers? But now, I mean, we’ve had audits here by five of the largest companies in the world. We’ve passed them all. And so then you qualify as a worthy supplier. Then comes your product integration work, like you mentioned. And I think it’s a lot about— I mean, that’s our main feature. The main unique selling point with Exeger is that we can integrate into other people’s products. Because when you develop this kind of crazy technology-- “Okay, so this is a solar cell. Wow. Okay.” And it can look like anything. And it works all the time. And all the other stuff is sustainable and all that. Which product do you go for? So I asked myself—I’ve been an entrepreneur since the age of 15. I’ve started a number of companies. I lost so much money, I can’t believe it. And managed to earn a little bit more. But I realized, “Okay, how do you select? Where do you start? Which product?”
</p>
<p>
	Okay, so I sat down. I was like, “When does it sell well? When do you see market success?” When something is important. When something is important, it’s going to work. It’s not the best tech. It has to be important enough. And then, you need distribution and scale and all that. Okay, how do you know if something is important? You can’t. Okay. What if you take something that’s already is— I mean, something new, you can’t know if it’s going to work. But if we can integrate into something that’s already selling in the billions of units per year, like headphones— I think this year, one billion headphones are going to be sold or something. Okay, apparently, obviously that’s important for people. Okay, let’s develop technology that can be integrated into something that’s already important and allow it to stay, keep all the good stuff, the design, the weight, the thickness, all of that, even improve the LCA better for the environment. And it’s self-powered. And it will allow the user to participate and help a little bit to a better world, right? With no charge cable, no charging in the wall, less batteries and all that. So our strategy was to develop such a strong technology so that we could integrate into these companies/partners products.
</p>
<p>
<strong>Cass:</strong> So I guess the question there is— so you come to a company, the company has its own internal development engineers. It’s got its own people coming up with product ideas and so on. How do you evangelize within a company to say, “Look, you get in the door, you show your demo,” to say, product manager who’s thinking of new product lines, “You guys should think about making products with our technology.” How do you evangelize that they think, “Okay, yeah, I’m going to spend the next six months of my life betting on these headphones, on this technology that I didn’t invent that I’m kind of trusting.” How do you get that internal buy-in with the internal engineers and the internal product developers and product managers?
</p>
<p>
<strong>Fili:</strong> That’s the Holy Grail, right? It’s very, very, very difficult. Takes a lot of time. It’s very expensive. And the point, I think you’re touching a little bit when you’re asking me now, because they don’t have a guy waiting to buy or a division or department waiting to buy this flexible indoor solar cell that can look like leather. They don’t have anyone. Who’s going to buy? Who’s the decision maker? There is not one. There’s a bunch, right? Because this will affect the battery people. This will affect the antenna people. This will affect the branding people. It will affect the mechanic people, etc., etc., etc. So there’s so many people that can say no. No one can say yes alone. All of them can say no alone. Any one of them can block the project, but to proceed, all of them have to say yes. So it’s a very, very tough equation. So that’s why when we realized this— this was another big learning that we had that we couldn’t go with the sales guy. We couldn’t go with two sales guys. We had to go with an entire team. So we needed to bring our design guy, our branding person, our mechanics person, our software engineer. We had to go like huge teams to be able to answer all the questions and mitigate and explain.
</p>
<p>
	So we had to go both top down and explain to the head of product or head of sustainability, “Okay, if you have 100 million products out in five years and they’re going to be using 50 batteries per year, that’s 5 billion batteries per year. That’s not good, right? What if we can eliminate all these batteries? That’s good for sustainability.” “Okay. Good.” “That’s also good for total cost. We can lower total cost of ownership.” “Okay, that’s also good.” “And you can sell this and this and this way. And by the way, here’s a narrative we offer you. We have also made some assets, movies, pictures, texts. This is how other people talk about this.” But it’s a very, very tough start. How do you get the first big name in? And big companies, they have a lot to risk, a lot to lose as well. So my advice would be to start smaller. I mean, we started mainly due to COVID, to be honest. Because Sweden stayed open during COVID, which was great. We lived our lives almost like normal. But we couldn’t work with any international companies because they were all closed or no one went to the office. So we had to turn to Swedish companies, and we developed a few products during COVID. We launched like four or five products on the market with smaller Swedish companies, and we learned so much. And then we could just send these headphones to the large companies and tell them, “You know what? Here’s a headphone. Use it for a few months. We’ll call you later.” And then they call us and say, “You know what? We have used them for three months. No one has charged them. This is sick. It actually works.” We’re like, “Yeah, we know.” And then that just made it so much easier. And now anyone who wants to make a deal with us, they can just buy these products anywhere online or in-store across the whole world and try them for themselves.
</p>
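<p>The sustainability pitch above rests on simple arithmetic, and the quoted numbers check out. A quick sketch (both input figures are the speaker’s own illustrative estimates, not verified data):</p>

```python
# Quick check of the battery figures quoted above (speaker's estimates).
products = 100_000_000       # 100 million products deployed within five years
batteries_per_year = 50      # disposable batteries each product consumes annually
total_batteries = products * batteries_per_year

print(f"{total_batteries:,} batteries per year")  # -> 5,000,000,000 batteries per year
```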
<p>
	And we also send them samples. They can buy, they can order from our website, like development kits. We have software, we have partnered up with <a href="https://www.qualcomm.com/" rel="noopener noreferrer" target="_blank">Qualcomm</a>, early semiconductor. All the big electronics companies, we’re now qualified partners with them. So all the electronics is powerful already. So now it’s very easy to build prototypes if you want to test something. We have offices across the world. So now it’s much easier. But my advice to anyone who would want to start with this is try and get a few customers in. The important thing is that they also care about the project. If we go to one of these large companies, 3M, they have 60,000 products. If they have 60,001, yeah. But for us, it’s like <em>the</em> project. And we have managed to land it in a way. So it’s also important for them now because it just touches so many of their important areas that they work with, so.
</p>
<p>
<strong>Cass:</strong> So in terms of future directions for the technology, do you have a development pathway? What kind of future milestones are you hoping to hit?
</p>
<p>
<strong>Fili:</strong> For sure. So at the moment, we’re focusing on consumer electronics market, IoT, smart home. So I think the next big thing will be the smart workplace where you see huge construction sites and other areas where we connect the workers, anything from the smart helmet. You get hit in your head, how hard was it? I mean, why can’t we tell you that? That’s just ridiculous. There’s all these sensors already available. Someone just needs to power the helmet. Location services. Is the right person in the right place with the proper training or not? On the construction side, do you have the training to work with dynamite, for example, or heavy lifts or different stuff? So you can add the geofencing in different sites. You can add health data, digital health tracking, pulse, breathing, temperature, different stuff. Compliance, of course. Are you following all the rules? Are you wearing your helmet? Is the helmet buttoned? Are you wearing the proper other gear, whatever it is? Otherwise, you can’t start your engine, or you can’t go into this site, or you can’t whatever. I think that’s going to greatly improve the proactive safety and health a lot and increase profits for employers a lot too at the same time. In a few years, I think we’re going to see the American unions are going to be our best sales force. Because when they see the greatness of this whole system, they’re going to demand it in all tenders, all biggest projects. They’re going to say, “Hey, we want to have the connected worker safety stuff here.” Because you can just stream-- if you’re working, you can stream music, talk to your colleagues, enjoy connected safety without invading the privacy, knowing that you’re good. If you fall over, if you faint, if you get a heart attack, whatever, in a few seconds, the right people will know and they will take their appropriate actions. It’s just really, really cool, this stuff.
</p>
<p>
<strong>Cass:</strong> Well, it’ll be interesting to see how that turns out. But I’m afraid that’s all we have time for today, although this is fascinating. So, Giovanni, I want to thank you very much for coming on the show.
</p>
<p>
<strong>Fili:</strong> Thank you so much for having me.
</p>
<p>
<strong>Cass: </strong>So today we were talking with Giovanni Fili, who is Exeger’s founder and CEO, about their new flexible Powerfoyle solar cell technology. For <em>IEEE Spectrum</em>’s <em>Fixing the Future</em>, I’m Stephen Cass, and I hope you’ll join me next time.
</p>]]></description><pubDate>Wed, 15 May 2024 16:25:20 +0000</pubDate><guid>https://spectrum.ieee.org/dye-solar-charges-consumer-electronics</guid><category>Type-podcast</category><category>Fixing-the-future</category><category>Flexible-electronics</category><category>Dye-solar-cells</category><dc:creator>Stephen Cass</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/46469653/origin.jpg"/></item><item><title>The UK's ARIA Is Searching For Better AI Tech</title><link>https://spectrum.ieee.org/uk-arianew-ai-tech</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/fixing-the-future-podcast-logo.jpg?id=46469653&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/7a07edf9" width="100%">
</iframe><p><strong>Dina Genkina:</strong> Hi, I’m <a href="https://spectrum.ieee.org/u/dina-genkina" target="_self">Dina Genkina</a> for <em>IEEE Spectrum</em>’s <em>Fixing the Future</em>. Before we start, I want to tell you that you can get the latest coverage from some of <em>Spectrum</em>’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to <a href="https://spectrum.ieee.org/newsletters/" target="_self">spectrum.ieee.org/newsletters</a> to subscribe. And today our guest on the show is Suraj Bramhavar. Recently, Bramhavar left his job as a co-founder and CTO of Sync Computing to start a new chapter. The UK government has just founded the <a href="https://www.aria.org.uk/about-aria/" rel="noopener noreferrer" target="_blank">Advanced Research and Invention Agency</a>, or ARIA, modeled after the US’s own DARPA funding agency. Bramhavar is heading up ARIA’s first program, which officially launched on March 12th of this year. Bramhavar’s program aims to develop new technology to make AI computation 1,000 times more cost efficient than it is today. Suraj, welcome to the show.</p><p><strong>Suraj Bramhavar:</strong> Thanks for having me.</p><p><strong>Genkina:</strong> So your program wants to reduce AI training costs by a factor of 1,000, which is pretty ambitious. Why did you choose to focus on this problem?</p><p><strong>Bramhavar:</strong> So there’s a couple of reasons why. The first one is economical. I mean, AI is basically poised to become the primary economic driver of the entire computing industry. And to train a modern large-scale AI model costs somewhere between 10 million and 100 million pounds now. And AI is really unique in the sense that the capabilities grow with more computing power thrown at the problem. So there’s kind of no sign of those costs coming down anytime in the future. And so this has a number of knock-on effects. 
If I’m a world-class AI researcher, I basically have to choose whether I go work for a very large tech company that has the compute resources available for me to do my work or go raise 100 million pounds from some investor to be able to do cutting edge research. And this has a variety of effects. It dictates, first off, who gets to do the work and also what types of problems get addressed. So that’s the economic problem. And then separately, there’s a technological one, which is that all of this stuff that we call AI is built upon a very, very narrow set of algorithms and an even narrower set of hardware. And this has scaled phenomenally well. And we can probably continue to scale along kind of the known trajectories that we have. But it’s starting to show signs of strain. Like I just mentioned, there’s an economic strain, there’s an energy cost to all this. There’s logistical supply chain constraints. And we’re seeing this now with kind of the GPU crunch that you read about in the news.</p><p>And in some ways, the strength of the existing paradigm has kind of forced us to overlook a lot of possible alternative mechanisms that we could use to kind of perform similar computations. And this program is designed to kind of shine a light on those alternatives.</p><p><strong>Genkina:</strong> Yeah, cool. So you seem to think that there’s potential for pretty impactful alternatives that are orders of magnitude better than what we have. So maybe we can dive into some specific ideas of what those are. And you talk about in your thesis that you wrote up for the start of this program, you talk about natural computing systems. So computing systems that take some inspiration from nature. So can you explain a little bit what you mean by that and what some of the examples of that are?</p><p><strong>Bramhavar:</strong> Yeah. 
So when I say natural-based or nature-based computing, what I really mean is any computing system that either takes inspiration from nature to perform the computation or utilizes physics in a new and exciting way to perform computation. So you can think about kind of people have heard about neuromorphic computing. <a href="https://spectrum.ieee.org/connectome-neuromorphic-chips" target="_self">Neuromorphic computing</a> fits into this category, right? It takes inspiration from nature and usually performs a computation in most cases using digital logic. But that represents a really small slice of the overall breadth of technologies that incorporate nature. And part of what we want to do is highlight some of those other possible technologies. So what do I mean when I say nature-based computing? I think we have a solicitation call out right now, which calls out a few things that we’re interested in. Things like new types of in-memory computing architectures, rethinking AI models from an energy context. And we also call out a couple of technologies that are pivotal for the overall system to function, but are not necessarily so eye-catching, like how you interconnect chips together, and how you simulate a large-scale system of any novel technology outside of the digital landscape. I think these are critical pieces to realizing the overall program goals. And we want to put some funding towards kind of boosting that workup as well.</p><p><strong>Genkina:</strong> Okay, so you mentioned neuromorphic computing is a small part of the landscape that you’re aiming to explore here. But maybe let’s start with that. People may have heard of neuromorphic computing, but might not know exactly what it is. 
So can you give us the elevator pitch of neuromorphic computing?</p><p><strong>Bramhavar: </strong>Yeah, my translation of neuromorphic computing— and this may differ from person to person, but my translation of it is when you kind of encode the information in a neural network via spikes rather than kind of discrete values. And that modality has shown to work pretty well in certain situations. So if I have some camera and I need a neural network next to that camera that can recognize an image with very, very low power or very, very high speed, neuromorphic systems have shown to work remarkably well. And they’ve worked in a variety of other applications as well. One of the things that I haven’t seen, or maybe one of the drawbacks of that technology that I think I would love to see someone solve for is being able to use that modality to train large-scale neural networks. So if people have ideas on how to use neuromorphic systems to train models at commercially relevant scales, we would love to hear about them and that they should submit to this program call, which is out.</p><p><strong>Genkina: </strong>Is there a reason to expect that these kinds of— that neuromorphic computing might be a platform that promises these orders of magnitude cost improvements?</p><p><strong>Bramhavar: </strong>I don’t know. I mean, I don’t know actually if neuromorphic computing is the right technological direction to realize that these types of orders of magnitude cost improvements. It might be, but I think we’ve intentionally kind of designed the program to encompass more than just that particular technological slice of the pie, in part because it’s entirely possible that that is not the right direction to go. And there are other more fruitful directions to put funding towards. 
Part of what we’re thinking about when we’re designing these programs is we don’t really want to be prescriptive about a specific technology, be it neuromorphic computing or probabilistic computing or any particular thing that has a name that you can attach to it. Part of what we tried to do is set a very specific goal or a problem that we want to solve. Put out a funding call and let the community kind of tell us which technologies they think can best meet that goal. And that’s the way we’ve been trying to operate with this program specifically. So there are particular technologies we’re kind of intrigued by, but I don’t think we have any one of them selected as like kind of this is the path forward.</p><p><strong>Genkina:</strong> Cool. Yeah, so you’re kind of trying to see what architecture needs to happen to make computers as efficient as brains or closer to the brain’s efficiency.</p><p><strong>Bramhavar:</strong> And you kind of see this happening in the AI algorithms world. As these models get bigger and bigger and grow their capabilities, they’re starting to introduce things that we see in nature all the time. I think probably the most relevant example is this stable diffusion, this neural network model where you can type in text and generate an image. It’s got diffusion in the name. Diffusion is a natural process. Noise is a core element of this algorithm. And so there’s lots of examples like this where they’ve kind of— that community is taking bits and pieces or inspiration from nature and implementing it into these artificial neural networks. But in doing that, they’re doing it incredibly inefficiently.</p><p><strong>Genkina: </strong>Yeah. Okay, so great. So the idea is to take some of the efficiencies out in nature and kind of bring them into our technology. And I know you said you’re not prescribing any particular solution and you just want that general idea. 
But nevertheless, let’s talk about some particular solutions that have been worked on in the past because you’re not starting from zero and there are some ideas about how to do this. So I guess neuromorphic computing is one such idea. Another is this noise-based computing, something like probabilistic computing. Can you explain what that is?</p><p><strong>Bramhavar: </strong>Noise is a very intriguing property. And there’s kind of two ways I’m thinking about noise. One is just how do we deal with it? When you’re designing a digital computer, you’re effectively designing noise out of your system, right? You’re trying to eliminate noise. And you go through great pains to do that. And as soon as you move away from digital logic into something a little bit more analog, you spend a lot of resources fighting noise. And in most cases, you eliminate any benefit that you get from your kind of newfangled technology because you have to fight this noise. But in the context of neural networks, what’s very interesting is that over time, we’ve kind of seen algorithms researchers discover that they actually didn’t need to be as precise as they thought they needed to be. You’re seeing the precision kind of come down over time. The precision requirements of these networks come down over time. And we really haven’t hit the limit there as far as I know. And so with that in mind, you start to ask the question, “Okay, how precise do we actually have to be with these types of computations to perform the computation effectively?” And if we don’t need to be as precise as we thought, can we rethink the types of hardware platforms that we use to perform the computations?</p><p>So that’s one angle: just how do we better handle noise? The other angle is how do we exploit noise? And so there’s kind of entire textbooks full of algorithms where randomness is a key feature. I’m not talking necessarily about neural networks only. I’m talking about all algorithms where randomness plays a key role. 
Neural networks are kind of one area where this is also important. I mean, the primary way we train neural networks is stochastic gradient descent. So noise is kind of baked in there. I talked about stable diffusion models like that where noise becomes a key central element. In almost all of these cases, all of these algorithms, noise is kind of implemented using some digital random number generator. And so there the thought process would be, “Is it possible to redesign our hardware to make better use of the noise, given that we’re using noisy hardware to start with?” Notionally, there should be some savings that come from that. That presumes that the interface between whatever novel hardware you have that is creating this noise, and the hardware you have that’s performing the computing doesn’t eat away all your gains, right? I think that’s kind of the big technological roadblock that I’d be keen to see solutions for, outside of the algorithmic piece, which is just how do you make efficient use of noise.</p><p>When you’re thinking about implementing it in hardware, it becomes very, very tricky to implement it in a way where whatever gains you think you had are actually realized at the full system level. And in some ways, we want the solutions to be very, very tricky. The agency is designed to fund very high risk, high reward type of activities. And so there in some ways shouldn’t be consensus around a specific technological approach. Otherwise, somebody else would have likely funded it.</p><p><strong>Genkina: </strong>You’re already becoming British. You said you were keen on the solution.</p><p><strong>Bramhavar:</strong> I’ve been here long enough.</p><p><strong>Genkina: </strong>It’s showing. Great. Okay, so we talked a little bit about neuromorphic computing. We talked a little bit about noise. And you also mentioned some alternatives to backpropagation in your thesis. 
So maybe first, can you explain for those that might not be familiar what backpropagation is and why it might need to be changed?</p><p><strong>Bramhavar: </strong>Yeah, so this algorithm is essentially the bedrock of all AI training used today. Essentially, what you’re doing is you have this large neural network. The neural network is composed of— you can think about it as this long chain of knobs. And you really have to tune all the knobs just right in order to get this network to perform a specific task, like when you give it an image of a cat, it says that it is a cat. And so what backpropagation allows you to do is to tune those knobs in a very, very efficient way. Starting from the end of your network, you kind of tune the knob a little bit, see if your answer gets a little bit closer to what you’d expect it to be. Use that information to then tune the knobs in the previous layer of your network and keep on doing that iteratively. And if you do this over and over again, you can eventually find all the right positions of your knobs such that your network does whatever you’re trying to do. And so this is great. Now, the issue is every time you tune one of these knobs, you’re performing this massive mathematical computation. And you’re typically doing that across many, many GPUs. And you do that just to tweak the knob a little bit. And so you have to do it over and over and over and over again to get the knobs where you need to go.</p><p>There’s a whole bevy of algorithms. What you’re really doing is kind of minimizing error between what you want the network to do and what it’s actually doing. And if you think about it along those terms, there’s a whole bevy of algorithms in the literature that kind of minimize energy or error in that way. None of them work as well as backpropagation. In some ways, the algorithm is beautiful and extraordinarily simple. And most importantly, it’s very, very well suited to be parallelized on GPUs. 
And I think that is part of its success. But one of the things I think both algorithmic researchers and hardware researchers fall victim to is this chicken and egg problem, right? Algorithms researchers build algorithms that work well on the hardware platforms that they have available to them. And at the same time, hardware researchers develop hardware for the existing algorithms of the day. And so one of the things we want to try to do with this program is blend those worlds and allow algorithms researchers to think about what is the field of algorithms that I could explore if I could rethink some of the bottlenecks in the hardware that I have available to me. Similarly in the opposite direction.</p><p><strong>Genkina: </strong>Imagine that you succeeded at your goal and the program and the wider community came up with a 1/1,000th compute cost architecture, both hardware and software together. What does your gut say that that would look like? Just an example. I know you don’t know what’s going to come out of this, but give us a vision.</p><p><strong>Bramhavar:</strong> Similarly, like I said, I don’t think I can prescribe a specific technology. What I can say is that— I can say with pretty high confidence, it’s not going to just be one particular technological kind of pinch point that gets unlocked. It’s going to be a systems level thing. So there may be individual technology at the chip level or the hardware level. Those technologies then also have to meld with things at the systems level as well and the algorithms level as well. And I think all of those are going to be necessary in order to reach these goals. I’m talking kind of generally, but what I really mean is like what I said before is we got to think about new types of hardware. We also have to think about, “Okay, if we’re going to scale these things and manufacture them in large volumes cost effectively, we’re going to have to build larger systems out of building blocks of these things. 
So we’re going to have to think about how to stitch them together in a way that makes sense and doesn’t eat away any of the benefits. We’re also going to have to think about how to simulate the behavior of these things before we build them.” I think part of the power of the digital electronics ecosystem comes from the fact that you have Cadence and Synopsys and these EDA platforms that allow you with very high accuracy to predict how your circuits are going to perform before you build them. And once you get out of that ecosystem, you don’t really have that.</p><p>So I think it’s going to take all of these things in order to actually reach these goals. And I think part of what this program is designed to do is kind of change the conversation around what is possible. So by the end of this, it’s a four-year program. We want to show that there is a viable path towards this end goal. And that viable path could incorporate kind of all of these aspects of what I just mentioned.</p><p><strong>Genkina:</strong> Okay. So the program is four years, but you don’t necessarily expect like a finished product of a 1/1,000th cost computer by the end of the four years, right? You kind of just expect to develop a path towards it.</p><p><strong>Bramhavar: </strong>Yeah. I mean, ARIA was kind of set up with this kind of decadal time horizon. We want to push out— we want to fund, as I mentioned, high-risk, high reward technologies. We have this kind of long time horizon to think about these things. I think the program is designed around four years in order to kind of shift the window of what the world thinks is possible in that timeframe. And in the hopes that we change the conversation. Other folks will pick up this work at the end of that four years, and it will have this kind of large-scale impact on a decadal timescale.</p><p><strong>Genkina:</strong> Great. Well, thank you so much for coming today. Today we spoke with Dr. 
Suraj Bramhavar, lead of the first program headed up by the UK’s newest funding agency, ARIA. He filled us in on his plans to reduce AI costs by a factor of 1,000, and we’ll have to check back with him in a few years to see what progress has been made towards this grand vision. For <em>IEEE Spectrum</em>, I’m Dina Genkina, and I hope you’ll join us next time on <em>Fixing the Future</em>.</p>]]></description><pubDate>Wed, 01 May 2024 16:10:09 +0000</pubDate><guid>https://spectrum.ieee.org/uk-arianew-ai-tech</guid><category>Type-podcast</category><category>Fixing-the-future</category><category>Aria</category><category>Neuromorphic-computing</category><dc:creator>Dina Genkina</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/46469653/origin.jpg"/><enclosure length="17500" type="application/json; charset=utf-8" url="https://www.aria.org.uk/our-team/"/><itunes:explicit/><itunes:subtitle>Dina Genkina: Hi, I’m Dina Genkina for IEEE Spectrum’s Fixing the Future. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. And today our guest on the show is Suraj Bramhavar. Recently, Bramhavar left his job as a co-founder and CTO of Sync Computing to start a new chapter. The UK government has just founded the Advanced Research and Invention Agency, or ARIA, modeled after the US’s own DARPA funding agency. Bramhavar is heading up ARIA’s first program, which officially launched on March 12th of this year. Bramhavar’s program aims to develop new technology to make AI computation 1,000 times more cost efficient than it is today. Suraj, welcome to the show. Suraj Bramhavar: Thanks for having me. Genkina: So your program wants to reduce AI training costs by a factor of 1,000, which is pretty ambitious. Why did you choose to focus on this problem? 
Bramhavar: So there’s a couple of reasons why. The first one is economical. I mean, AI is basically poised to become the primary economic driver of the entire computing industry. And to train a modern large-scale AI model costs somewhere between 10 million and 100 million pounds now. And AI is really unique in the sense that the capabilities grow with more computing power thrown at the problem. So there’s kind of no sign of those costs coming down anytime in the future. And so this has a number of knock-on effects. If I’m a world-class AI researcher, I basically have to choose whether I go work for a very large tech company that has the compute resources available for me to do my work or go raise 100 million pounds from some investor to be able to do cutting edge research. And this has a variety of effects. It dictates, first off, who gets to do the work and also what types of problems get addressed. So that’s the economic problem. And then separately, there’s a technological one, which is that all of this stuff that we call AI is built upon a very, very narrow set of algorithms and an even narrower set of hardware. And this has scaled phenomenally well. And we can probably continue to scale along kind of the known trajectories that we have. But it’s starting to show signs of strain. Like I just mentioned, there’s an economic strain, there’s an energy cost to all this. There’s logistical supply chain constraints. And we’re seeing this now with kind of the GPU crunch that you read about in the news. And in some ways, the strength of the existing paradigm has kind of forced us to overlook a lot of possible alternative mechanisms that we could use to kind of perform similar computations. And this program is designed to kind of shine a light on those alternatives. Genkina: Yeah, cool. So you seem to think that there’s potential for pretty impactful alternatives that are orders of magnitude better than what we have. 
So maybe we can dive into some specific ideas of what those are. And you talk about in your thesis that you wrote up for the start of this program, you talk about natural computing systems. So computing systems that take some inspiration from nature. So can you explain a little bit what you mean by that and what some of the examples of that are? Bramhavar: Yeah. So when I say natural-based or nature-based computing, what I really mean is any computing system that either takes inspiration from nature to perform the computation or utilizes physics in a new and exciting way to perform computation. So you can think about kind of people have heard about neuromorphic computing. Neuromorphic computing fits into this category, right? It takes inspiration from nature and usually performs a computation in most cases using digital logic. But that represents a really small slice of the overall breadth of technologies that incorporate nature. And part of what we want to do is highlight some of those other possible technologies. So what do I mean when I say nature-based computing? I think we have a solicitation call out right now, which calls out a few things that we’re interested in. Things like new types of in-memory computing architectures, rethinking AI models from an energy context. And we also call out a couple of technologies that are pivotal for the overall system to function, but are not necessarily so eye-catching, like how you interconnect chips together, and how you simulate a large-scale system of any novel technology outside of the digital landscape. I think these are critical pieces to realizing the overall program goals. And we want to put some funding towards kind of boosting that work up as well. Genkina: Okay, so you mentioned neuromorphic computing is a small part of the landscape that you’re aiming to explore here. But maybe let’s start with that. People may have heard of neuromorphic computing, but might not know exactly what it is. 
So can you give us the elevator pitch of neuromorphic computing? Bramhavar: Yeah, my translation of neuromorphic computing— and this may differ from person to person, but my translation of it is when you kind of encode the information in a neural network via spikes rather than kind of discrete values. And that modality has been shown to work pretty well in certain situations. So if I have some camera and I need a neural network next to that camera that can recognize an image with very, very low power or very, very high speed, neuromorphic systems have been shown to work remarkably well. And they’ve worked in a variety of other applications as well. One of the things that I haven’t seen, or maybe one of the drawbacks of that technology that I think I would love to see someone solve for is being able to use that modality to train large-scale neural networks. So if people have ideas on how to use neuromorphic systems to train models at commercially relevant scales, we would love to hear about them and that they should submit to this program call, which is out. Genkina: Is there a reason to expect that these kinds of— that neuromorphic computing might be a platform that promises these orders of magnitude cost improvements? Bramhavar: I don’t know. I mean, I don’t know actually if neuromorphic computing is the right technological direction to realize that these types of orders of magnitude cost improvements. It might be, but I think we’ve intentionally kind of designed the program to encompass more than just that particular technological slice of the pie, in part because it’s entirely possible that that is not the right direction to go. And there are other more fruitful directions to put funding towards. Part of what we’re thinking about when we’re designing these programs is we don’t really want to be prescriptive about a specific technology, be it neuromorphic computing or probabilistic computing or any particular thing that has a name that you can attach to it. 
Part of what we tried to do is set a very specific goal or a problem that we want to solve. Put out a funding call and let the community kind of tell us which technologies they think can best meet that goal. And that’s the way we’ve been trying to operate with this program specifically. So there are particular technologies we’re kind of intrigued by, but I don’t think we have any one of them selected as like kind of this is the path forward. Genkina: Cool. Yeah, so you’re kind of trying to see what architecture needs to happen to make computers as efficient as brains or closer to the brain’s efficiency. Bramhavar: And you kind of see this happening in the AI algorithms world. As these models get bigger and bigger and grow their capabilities, they’re starting to introduce things that we see in nature all the time. I think probably the most relevant example is this stable diffusion, this neural network model where you can type in text and generate an image. It’s got diffusion in the name. Diffusion is a natural process. Noise is a core element of this algorithm. And so there’s lots of examples like this where they’ve kind of— that community is taking bits and pieces or inspiration from nature and implementing it into these artificial neural networks. But in doing that, they’re doing it incredibly inefficiently. Genkina: Yeah. Okay, so great. So the idea is to take some of the efficiencies found in nature and kind of bring them into our technology. And I know you said you’re not prescribing any particular solution and you just want that general idea. But nevertheless, let’s talk about some particular solutions that have been worked on in the past because you’re not starting from zero and there are some ideas about how to do this. So I guess neuromorphic computing is one such idea. Another is this noise-based computing, something like probabilistic computing. Can you explain what that is? Bramhavar: Noise is a very intriguing property. 
And there’s kind of two ways I’m thinking about noise. One is just how do we deal with it? When you’re designing a digital computer, you’re effectively designing noise out of your system, right? You’re trying to eliminate noise. And you go through great pains to do that. And as soon as you move away from digital logic into something a little bit more analog, you spend a lot of resources fighting noise. And in most cases, you eliminate any benefit that you get from your kind of newfangled technology because you have to fight this noise. But in the context of neural networks, what’s very interesting is that over time, we’ve kind of seen algorithms researchers discover that they actually didn’t need to be as precise as they thought they needed to be. You’re seeing the precision kind of come down over time. The precision requirements of these networks come down over time. And we really haven’t hit the limit there as far as I know. And so with that in mind, you start to ask the question, “Okay, how precise do we actually have to be with these types of computations to perform the computation effectively?” And if we don’t need to be as precise as we thought, can we rethink the types of hardware platforms that we use to perform the computations? So that’s one angle is just how do we better handle noise? The other angle is how do we exploit noise? And so there’s kind of entire textbooks full of algorithms where randomness is a key feature. I’m not talking necessarily about neural networks only. I’m talking about all algorithms where randomness plays a key role. Neural networks are kind of one area where this is also important. I mean, the primary way we train neural networks is stochastic gradient descent. So noise is kind of baked in there. I talked about stable diffusion models like that where noise becomes a key central element. In almost all of these cases, all of these algorithms, noise is kind of implemented using some digital random number generator. 
And so there the thought process would be, “Is it possible to redesign our hardware to make better use of the noise, given that we’re using noisy hardware to start with?” Notionally, there should be some savings that come from that. That presumes that the interface between whatever novel hardware you have that is creating this noise, and the hardware you have that’s performing the computing doesn’t eat away all your gains, right? I think that’s kind of the big technological roadblock that I’d be keen to see solutions for, outside of the algorithmic piece, which is just how do you make efficient use of noise. When you’re thinking about implementing it in hardware, it becomes very, very tricky to implement it in a way where whatever gains you think you had are actually realized at the full system level. And in some ways, we want the solutions to be very, very tricky. The agency is designed to fund very high risk, high reward type of activities. And so there in some ways shouldn’t be consensus around a specific technological approach. Otherwise, somebody else would have likely funded it. Genkina: You’re already becoming British. You said you were keen on the solution. Bramhavar: I’ve been here long enough. Genkina: It’s showing. Great. Okay, so we talked a little bit about neuromorphic computing. We talked a little bit about noise. And you also mentioned some alternatives to backpropagation in your thesis. So maybe first, can you explain for those that might not be familiar what backpropagation is and why it might need to be changed? Bramhavar: Yeah, so this algorithm is essentially the bedrock of all AI training currently in use today. Essentially, what you’re doing is you have this large neural network. The neural network is composed of— you can think about it as this long chain of knobs. And you really have to tune all the knobs just right in order to get this network to perform a specific task, like when you give it an image of a cat, it says that it is a cat. 
And so what backpropagation allows you to do is to tune those knobs in a very, very efficient way. Starting from the end of your network, you kind of tune the knob a little bit, see if your answer gets a little bit closer to what you’d expect it to be. Use that information to then tune the knobs in the previous layer of your network and keep on doing that iteratively. And if you do this over and over again, you can eventually find all the right positions of your knobs such that your network does whatever you’re trying to do. And so this is great. Now, the issue is every time you tune one of these knobs, you’re performing this massive mathematical computation. And you’re typically doing that across many, many GPUs. And you do that just to tweak the knob a little bit. And so you have to do it over and over and over and over again to get the knobs where you need to go. There’s a whole bevy of algorithms. What you’re really doing is kind of minimizing error between what you want the network to do and what it’s actually doing. And if you think about it along those terms, there’s a whole bevy of algorithms in the literature that kind of minimize energy or error in that way. None of them work as well as backpropagation. In some ways, the algorithm is beautiful and extraordinarily simple. And most importantly, it’s very, very well suited to be parallelized on GPUs. And I think that is part of its success. But one of the things I think both algorithmic researchers and hardware researchers fall victim to is this chicken and egg problem, right? Algorithms researchers build algorithms that work well on the hardware platforms that they have available to them. And at the same time, hardware researchers develop hardware for the existing algorithms of the day. 
And so one of the things we want to try to do with this program is blend those worlds and allow algorithms researchers to think about what is the field of algorithms that I could explore if I could rethink some of the bottlenecks in the hardware that I have available to me. Similarly in the opposite direction. Genkina: Imagine that you succeeded at your goal and the program and the wider community came up with a 1/1,000th compute cost architecture, both hardware and software together. What does your gut say that that would look like? Just an example. I know you don’t know what’s going to come out of this, but give us a vision. Bramhavar: Similarly, like I said, I don’t think I can prescribe a specific technology. What I can say is that— I can say with pretty high confidence, it’s not going to just be one particular technological kind of pinch point that gets unlocked. It’s going to be a systems level thing. So there may be individual technology at the chip level or the hardware level. Those technologies then also have to meld with things at the systems level as well and the algorithms level as well. And I think all of those are going to be necessary in order to reach these goals. I’m talking kind of generally, but what I really mean is like what I said before is we got to think about new types of hardware. We also have to think about, “Okay, if we’re going to scale these things and manufacture them in large volumes cost effectively, we’re going to have to build larger systems out of building blocks of these things. So we’re going to have to think about how to stitch them together in a way that makes sense and doesn’t eat away any of the benefits. 
We’re also going to have to think about how to simulate the behavior of these things before we build them.” I think part of the power of the digital electronics ecosystem comes from the fact that you have Cadence and Synopsys and these EDA platforms that allow you with very high accuracy to predict how your circuits are going to perform before you build them. And once you get out of that ecosystem, you don’t really have that. So I think it’s going to take all of these things in order to actually reach these goals. And I think part of what this program is designed to do is kind of change the conversation around what is possible. So by the end of this, it’s a four-year program. We want to show that there is a viable path towards this end goal. And that viable path could incorporate kind of all of these aspects of what I just mentioned. Genkina: Okay. So the program is four years, but you don’t necessarily expect like a finished product of a 1/1,000th cost computer by the end of the four years, right? You kind of just expect to develop a path towards it. Bramhavar: Yeah. I mean, ARIA was kind of set up with this kind of decadal time horizon. We want to push out— we want to fund, as I mentioned, high-risk, high reward technologies. We have this kind of long time horizon to think about these things. I think the program is designed around four years in order to kind of shift the window of what the world thinks is possible in that timeframe. And in the hopes that we change the conversation. Other folks will pick up this work at the end of that four years, and it will have this kind of large-scale impact on a decadal timescale. Genkina: Great. Well, thank you so much for coming today. Today we spoke with Dr. Suraj Bramhavar, lead of the first program headed up by the UK’s newest funding agency, ARIA. He filled us in on his plans to reduce AI costs by a factor of 1,000, and we’ll have to check back with him in a few years to see what progress has been made towards this grand vision. 
For IEEE Spectrum, I’m Dina Genkina, and I hope you’ll join us next time on Fixing the Future.</itunes:subtitle><itunes:summary>Dina Genkina: Hi, I’m Dina Genkina for IEEE Spectrum’s Fixing the Future. Before we start, I want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. And today our guest on the show is Suraj Bramhavar. Recently, Bramhavar left his job as a co-founder and CTO of Sync Computing to start a new chapter. The UK government has just founded the Advanced Research and Invention Agency, or ARIA, modeled after the US’s own DARPA funding agency. Bramhavar is heading up ARIA’s first program, which officially launched on March 12th of this year. Bramhavar’s program aims to develop new technology to make AI computation 1,000 times more cost efficient than it is today. Suraj, welcome to the show. Suraj Bramhavar: Thanks for having me. Genkina: So your program wants to reduce AI training costs by a factor of 1,000, which is pretty ambitious. Why did you choose to focus on this problem? Bramhavar: So there’s a couple of reasons why. The first one is economical. I mean, AI is basically poised to become the primary economic driver of the entire computing industry. And to train a modern large-scale AI model costs somewhere between 10 million and 100 million pounds now. And AI is really unique in the sense that the capabilities grow with more computing power thrown at the problem. So there’s kind of no sign of those costs coming down anytime in the future. And so this has a number of knock-on effects. If I’m a world-class AI researcher, I basically have to choose whether I go work for a very large tech company that has the compute resources available for me to do my work or go raise 100 million pounds from some investor to be able to do cutting edge research. 
And this has a variety of effects. It dictates, first off, who gets to do the work and also what types of problems get addressed. So that’s the economic problem. And then separately, there’s a technological one, which is that all of this stuff that we call AI is built upon a very, very narrow set of algorithms and an even narrower set of hardware. And this has scaled phenomenally well. And we can probably continue to scale along kind of the known trajectories that we have. But it’s starting to show signs of strain. Like I just mentioned, there’s an economic strain, there’s an energy cost to all this. There’s logistical supply chain constraints. And we’re seeing this now with kind of the GPU crunch that you read about in the news. And in some ways, the strength of the existing paradigm has kind of forced us to overlook a lot of possible alternative mechanisms that we could use to kind of perform similar computations. And this program is designed to kind of shine a light on those alternatives. Genkina: Yeah, cool. So you seem to think that there’s potential for pretty impactful alternatives that are orders of magnitude better than what we have. So maybe we can dive into some specific ideas of what those are. And you talk about in your thesis that you wrote up for the start of this program, you talk about natural computing systems. So computing systems that take some inspiration from nature. So can you explain a little bit what you mean by that and what some of the examples of that are? Bramhavar: Yeah. So when I say natural-based or nature-based computing, what I really mean is any computing system that either takes inspiration from nature to perform the computation or utilizes physics in a new and exciting way to perform computation. So you can think about kind of people have heard about neuromorphic computing. Neuromorphic computing fits into this category, right? It takes inspiration from nature and usually performs a computation in most cases using digital logic. 
But that represents a really small slice of the overall breadth of technologies that incorporate nature. And part of what we want to do is highlight some of those other possible technologies. So what do I mean when I say nature-based computing? I think we have a solicitation call out right now, which calls out a few things that we’re interested in. Things like new types of in-memory computing architectures, rethinking AI models from an energy context. And we also call out a couple of technologies that are pivotal for the overall system to function, but are not necessarily so eye-catching, like how you interconnect chips together, and how you simulate a large-scale system of any novel technology outside of the digital landscape. I think these are critical pieces to realizing the overall program goals. And we want to put some funding towards kind of boosting that work up as well. Genkina: Okay, so you mentioned neuromorphic computing is a small part of the landscape that you’re aiming to explore here. But maybe let’s start with that. People may have heard of neuromorphic computing, but might not know exactly what it is. So can you give us the elevator pitch of neuromorphic computing? Bramhavar: Yeah, my translation of neuromorphic computing— and this may differ from person to person, but my translation of it is when you kind of encode the information in a neural network via spikes rather than kind of discrete values. And that modality has been shown to work pretty well in certain situations. So if I have some camera and I need a neural network next to that camera that can recognize an image with very, very low power or very, very high speed, neuromorphic systems have been shown to work remarkably well. And they’ve worked in a variety of other applications as well. One of the things that I haven’t seen, or maybe one of the drawbacks of that technology that I think I would love to see someone solve for is being able to use that modality to train large-scale neural networks. 
So if people have ideas on how to use neuromorphic systems to train models at commercially relevant scales, we would love to hear about them and that they should submit to this program call, which is out. Genkina: Is there a reason to expect that these kinds of— that neuromorphic computing might be a platform that promises these orders of magnitude cost improvements? Bramhavar: I don’t know. I mean, I don’t know actually if neuromorphic computing is the right technological direction to realize that these types of orders of magnitude cost improvements. It might be, but I think we’ve intentionally kind of designed the program to encompass more than just that particular technological slice of the pie, in part because it’s entirely possible that that is not the right direction to go. And there are other more fruitful directions to put funding towards. Part of what we’re thinking about when we’re designing these programs is we don’t really want to be prescriptive about a specific technology, be it neuromorphic computing or probabilistic computing or any particular thing that has a name that you can attach to it. Part of what we tried to do is set a very specific goal or a problem that we want to solve. Put out a funding call and let the community kind of tell us which technologies they think can best meet that goal. And that’s the way we’ve been trying to operate with this program specifically. So there are particular technologies we’re kind of intrigued by, but I don’t think we have any one of them selected as like kind of this is the path forward. Genkina: Cool. Yeah, so you’re kind of trying to see what architecture needs to happen to make computers as efficient as brains or closer to the brain’s efficiency. Bramhavar: And you kind of see this happening in the AI algorithms world. As these models get bigger and bigger and grow their capabilities, they’re starting to introduce things that we see in nature all the time. 
I think probably the most relevant example is this stable diffusion, this neural network model where you can type in text and generate an image. It’s got diffusion in the name. Diffusion is a natural process. Noise is a core element of this algorithm. And so there’s lots of examples like this where they’ve kind of— that community is taking bits and pieces or inspiration from nature and implementing it into these artificial neural networks. But in doing that, they’re doing it incredibly inefficiently. Genkina: Yeah. Okay, so great. So the idea is to take some of the efficiencies found in nature and kind of bring them into our technology. And I know you said you’re not prescribing any particular solution and you just want that general idea. But nevertheless, let’s talk about some particular solutions that have been worked on in the past because you’re not starting from zero and there are some ideas about how to do this. So I guess neuromorphic computing is one such idea. Another is this noise-based computing, something like probabilistic computing. Can you explain what that is? Bramhavar: Noise is a very intriguing property. And there’s kind of two ways I’m thinking about noise. One is just how do we deal with it? When you’re designing a digital computer, you’re effectively designing noise out of your system, right? You’re trying to eliminate noise. And you go through great pains to do that. And as soon as you move away from digital logic into something a little bit more analog, you spend a lot of resources fighting noise. And in most cases, you eliminate any benefit that you get from your kind of newfangled technology because you have to fight this noise. But in the context of neural networks, what’s very interesting is that over time, we’ve kind of seen algorithms researchers discover that they actually didn’t need to be as precise as they thought they needed to be. You’re seeing the precision kind of come down over time. 
The precision requirements of these networks come down over time. And we really haven’t hit the limit there, as far as I know. And so with that in mind, you start to ask the question, “Okay, how precise do we actually have to be with these types of computations to perform them effectively?” And if we don’t need to be as precise as we thought, can we rethink the types of hardware platforms that we use to perform the computations? So that’s one angle: how do we better handle noise? The other angle is: how do we exploit noise? There are entire textbooks full of algorithms where randomness is a key feature. I’m not talking necessarily about neural networks only; I’m talking about all algorithms where randomness plays a key role. Neural networks are one area where this is also important. I mean, the primary way we train neural networks is stochastic gradient descent, so noise is baked in there. I talked about stable diffusion models, where noise becomes a key central element. In almost all of these cases, all of these algorithms, noise is implemented using some digital random number generator. And so there the thought process would be, “Is it possible to redesign our hardware to make better use of the noise, given that we’re using noisy hardware to start with?” Notionally, there should be some savings that come from that. That presumes that the interface between whatever novel hardware you have that is creating this noise and the hardware that’s performing the computing doesn’t eat away all your gains, right? I think that’s the big technological roadblock that I’d be keen to see solutions for, outside of the algorithmic piece, which is just how you make efficient use of noise. When you’re thinking about implementing it in hardware, it becomes very, very tricky to do it in a way where whatever gains you think you had are actually realized at the full system level.
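One of those textbook algorithms where randomness is the key feature is simulated annealing. A minimal sketch (the toy energy function, seed, and constants are made up for illustration): injected noise is what lets the optimizer accept uphill moves and escape a local minimum that a purely deterministic descent would be stuck in.

```python
import numpy as np

def f(x):
    # Double-well "energy": a shallow minimum near x = +1,
    # the global minimum near x = -1.04.
    return (x**2 - 1.0)**2 + 0.3 * x

rng = np.random.default_rng(7)
x = 1.0          # start trapped in the shallower well
x_best = x
T = 0.5          # "temperature" sets how much uphill motion noise allows
for _ in range(5000):
    prop = x + rng.normal(scale=0.3)
    # Metropolis rule: downhill moves are always taken; uphill moves are
    # taken with probability exp(-delta/T). The randomness is what lets
    # the walker hop the barrier between the two wells.
    if rng.random() < np.exp(min(0.0, -(f(prop) - f(x)) / T)):
        x = prop
    if f(x) < f(x_best):
        x_best = x
```

A noise-free gradient descent from x = 1 would converge to the shallow well and stay there; here the walker finds the deeper well near x = -1. The digital random number generator in `rng` is exactly the component the guest suggests replacing with intrinsically noisy hardware.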
And in some ways, we want the solutions to be very, very tricky. The agency is designed to fund very high-risk, high-reward activities, so in some ways there shouldn’t be consensus around a specific technological approach. Otherwise, somebody else would likely have funded it.

Genkina: You’re already becoming British. You said you were keen on the solution.

Bramhavar: I’ve been here long enough.

Genkina: It’s showing. Great. Okay, so we talked a little bit about neuromorphic computing. We talked a little bit about noise. And you also mentioned some alternatives to backpropagation in your thesis. So maybe first, can you explain, for those who might not be familiar, what backpropagation is and why it might need to be changed?

Bramhavar: Yeah, so this algorithm is essentially the bedrock of all the AI training you use today. Essentially, what you’re doing is you have this large neural network. You can think of the network as a long chain of knobs, and you really have to tune all the knobs just right in order to get the network to perform a specific task, like when you give it an image of a cat, it says that it is a cat. And so what backpropagation allows you to do is tune those knobs in a very, very efficient way. Starting from the end of your network, you tune a knob a little bit, see if your answer gets a little bit closer to what you’d expect it to be, then use that information to tune the knobs in the previous layer of your network, and keep on doing that iteratively. And if you do this over and over again, you can eventually find all the right positions of your knobs such that your network does whatever you’re trying to do. And so this is great. Now, the issue is that every time you tune one of these knobs, you’re performing a massive mathematical computation, and you’re typically doing that across many, many GPUs. And you do that just to tweak the knob a little bit.
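The knob-tuning picture maps directly onto code. A minimal sketch (a tiny made-up two-layer network, NumPy only): one backward sweep of the chain rule recovers, for every knob at once, the same gradient you would get by the slow "tweak one knob and re-measure" procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)          # input
W1 = rng.normal(size=(3, 4))    # first layer of "knobs"
W2 = rng.normal(size=3)         # second layer of "knobs"
y_target = 1.0

def forward(W1, W2):
    h = np.tanh(W1 @ x)         # hidden layer
    y = W2 @ h                  # scalar output
    return 0.5 * (y - y_target) ** 2, h, y

# Backward pass: the chain rule gives the gradient for EVERY knob
# from a single sweep, working from the end of the network backwards.
loss, h, y = forward(W1, W2)
dy = y - y_target               # d(loss)/dy
gW2 = dy * h                    # gradients of the last layer's knobs
dh = dy * W2                    # error pushed back to the hidden layer
gW1 = np.outer(dh * (1 - h**2), x)  # gradients of the first layer's knobs

# Check one knob against the naive "tweak and re-measure" estimate.
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
numeric = (forward(W1p, W2)[0] - loss) / eps
```

The finite-difference check needs one full forward pass per knob; backprop amortizes all of them into one backward pass, which is why it parallelizes so well on GPUs.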
And so you have to do it over and over and over again to get the knobs where you need them to go. What you’re really doing is minimizing the error between what you want the network to do and what it’s actually doing, and if you think about it along those terms, there’s a whole bevy of algorithms in the literature that minimize energy or error in that way. None of them work as well as backpropagation. In some ways, the algorithm is beautiful and extraordinarily simple. And most importantly, it’s very, very well suited to being parallelized on GPUs. I think that is part of its success. But one of the things both algorithmic researchers and hardware researchers fall victim to is this chicken-and-egg problem, right? Algorithms researchers build algorithms that work well on the hardware platforms they have available to them, and at the same time, hardware researchers develop hardware for the existing algorithms of the day. And so one of the things we want to try to do with this program is blend those worlds, and allow algorithms researchers to ask: what is the field of algorithms I could explore if I could rethink some of the bottlenecks in the hardware I have available to me? And similarly in the opposite direction.

Genkina: Imagine that you succeeded at your goal, and the program and the wider community came up with a 1/1,000th-compute-cost architecture, both hardware and software together. What does your gut say that would look like? Just an example. I know you don’t know what’s going to come out of this, but give us a vision.

Bramhavar: Like I said, I don’t think I can prescribe a specific technology. What I can say, with pretty high confidence, is that it’s not going to just be one particular technological pinch point that gets unlocked. It’s going to be a systems-level thing. So there may be individual technology at the chip level or the hardware level.
Those technologies then also have to meld with things at the systems level and the algorithms level as well. And I think all of those are going to be necessary in order to reach these goals. I’m talking kind of generally, but what I really mean, like I said before, is we’ve got to think about new types of hardware. We also have to think about, “Okay, if we’re going to scale these things and manufacture them in large volumes cost-effectively, we’re going to have to build larger systems out of building blocks of these things. So we’re going to have to think about how to stitch them together in a way that makes sense and doesn’t eat away any of the benefits. We’re also going to have to think about how to simulate the behavior of these things before we build them.” I think part of the power of the digital electronics ecosystem comes from the fact that you have Cadence and Synopsys and these EDA platforms that allow you, with very high accuracy, to predict how your circuits are going to perform before you build them. And once you get out of that ecosystem, you don’t really have that. So I think it’s going to take all of these things in order to actually reach these goals. And I think part of what this program is designed to do is change the conversation around what is possible. So by the end of this four-year program, we want to show that there is a viable path towards this end goal, and that viable path could incorporate all of these aspects I just mentioned.

Genkina: Okay. So the program is four years, but you don’t necessarily expect a finished product of a 1/1,000th-cost computer by the end of the four years, right? You kind of just expect to develop a path towards it.

Bramhavar: Yeah. I mean, ARIA was set up with this kind of decadal time horizon. We want to push out-- we want to fund, as I mentioned, high-risk, high-reward technologies. We have this long time horizon to think about these things.
I think the program is designed around four years in order to shift the window of what the world thinks is possible in that timeframe, in the hope that we change the conversation, other folks pick up this work at the end of those four years, and it has this kind of large-scale impact on a decadal timescale.

Genkina: Great. Well, thank you so much for coming today. Today we spoke with Dr. Suraj Bramhavar, lead of the first program headed up by the UK’s newest funding agency, ARIA. He filled us in on his plans to reduce AI costs by a factor of 1,000, and we’ll have to check back with him in a few years to see what progress has been made towards this grand vision. For IEEE Spectrum, I’m Dina Genkina, and I hope you’ll join us next time on Fixing the Future.</itunes:summary><itunes:keywords>Type-podcast, Fixing-the-future, Aria, Neuromorphic-computing</itunes:keywords></item><item><title>U.S. Commercial Drone Delivery Comes Closer</title><link>https://spectrum.ieee.org/us-drone-delivery-comes-closer</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=52004785&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/7c352e00" width="100%"></iframe><p>
<strong>Stephen Cass:</strong> Hello and welcome to <em><em>Fixing the Future</em></em>, an <em><em>IEEE Spectrum</em></em> podcast where we look at concrete solutions to tough problems. I’m your host,<a href="https://spectrum.ieee.org/u/stephen-cass" target="_self"> <u>Stephen Cass</u></a>, a senior editor at <em><em>IEEE Spectrum</em></em>. And before I start, I just want to tell you that you can get the latest coverage of some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to<a href="https://spectrum.ieee.org/newsletters/" target="_self"> <u>spectrum.ieee.org/newsletters</u></a> to subscribe. We’ve been covering the drone delivery company<a href="https://www.flyzipline.com/" rel="noopener noreferrer" target="_blank"> <u>Zipline</u></a> in <em><em>Spectrum</em></em> for several years, and I do encourage listeners to check out our great onsite reporting from Rwanda in 2019 when we visited one of<a href="https://spectrum.ieee.org/in-the-air-with-ziplines-medical-delivery-drones" target="_self"> <u>Zipline’s dispatch centers</u></a> for delivering vital medical supplies into rural areas. But now it’s 2024, and Zipline is expanding into commercial drone delivery in the United States, including into urban areas, and hitting some recent milestones. Here to talk about some of those milestones today, we have<a href="https://www.linkedin.com/in/keenanwyrobek/" rel="noopener noreferrer" target="_blank"> <u>Keenan Wyrobek</u></a>, Zipline’s co-founder and CTO. Keenan, welcome to the show.
</p><p>
<strong>Keenan Wyrobek: </strong>Great to be here. Thanks for having me.
</p><p>
<strong>Cass: </strong>So before we get into what’s going on with the United States, can you first catch us up on how things have been going on with Rwanda and the other African countries you’ve been operating in?
</p><p>
<strong>Wyrobek: </strong>Yeah, absolutely. So we’re now operating in eight countries, including here in the US. That includes a handful of countries in Africa, as well as Japan and Europe. So in Africa, it’s really exciting. The scale is really impressive, basically. We started eight years ago with blood, then moved into vaccine delivery and delivering many other things in the healthcare space, as well as outside the healthcare space. We can talk a little bit about things like animal husbandry and other areas. The scale is really what’s exciting. We have a single distribution center there that now regularly flies more than the equivalent of once around the Earth’s equator every day. And that’s just from one of a whole bunch of distribution centers. That’s where we are really with that operation today.
</p><p>
<strong>Cass: </strong>So could you talk a little bit about those non-medical systems? Because previously we’d very much seen blood being parachuted down from these drones to reach those distant centers. What other things are you delivering there?
</p><p>
<strong>Wyrobek: </strong>Yeah, absolutely. So we started with blood, like you said, then vaccines. We’ve now delivered well over 15 million vaccine doses, lots of other pharmaceutical use cases to hospitals and clinics, and more recently, patient home delivery for chronic care of things like hypertension, HIV-positive patients, and things like that. And then, yeah, we moved into some really exciting use cases in things like animal husbandry. One that I’m personally really excited about is supporting these genetic diversity campaigns. It’s one of those things that’s very unglamorous, but really impactful. One of the main sources of protein around the world is cow’s milk. And it turns out the difference between a non-genetically diverse cow and a genetically diverse cow can be a 10x difference in milk production. And so one of the things we deliver is bull semen. We’re very good at the cold chain involved in that, as we’ve mastered in vaccines and blood. And that’s just one of many things we’re doing in other spaces outside of healthcare directly.
</p><p>
<strong>Cass: </strong>Oh, fascinating. So turning now to the US, it seems like there have been two big developments recently. One is you’re getting close to deploying Platform 2, which has some really fascinating tech that allows packages to be delivered very precisely by tether. And I do want to talk about that later. But first, I want to talk about a big milestone you had late last year. And this was something that goes by the very unlovely acronym of a BVLOS flight. Can you tell us what BVLOS stands for and why that flight was such a big deal?
</p><p>
<strong>Wyrobek: </strong>Yeah, “beyond visual line of sight.” And so that is basically this: before this milestone last year, all drone deliveries, all drone operations in the US were done by people standing on the ground, looking at the sky, keeping that line of sight. And that’s how, basically, we made sure that the drones were staying clear of aircraft. This is true of everybody. Now, this is important because in places like the United States, many aircraft don’t and aren’t required to carry a transponder, right? Transponders are where aircraft transmit a radio signal with their location that our drones can listen to and use to maintain separation. And so the holy grail of scalable drone operations is a sensing solution where you can sense those aircraft and avoid them, because of course it’s physically impossible to have people standing around all over the world staring at the sky. And this is something we’ve been working on for a long time and got the approval for late last year with the FAA: the first-ever use of sensors to detect and avoid for maintaining safety in the US airspace, which is just really, really exciting. That’s now been in operations in two distribution centers here, one in Utah and one in Arkansas, ever since.
</p><p>
<strong>Cass: </strong>So could you just tell us a little bit about how that tech works? It just seems to be quite advanced to trust a drone to recognize, “Oh, that is an actual airplane, that’s a Cessna that’s going to be here in about two minutes and is a real problem,” or, “No, it’s a hawk, which is just going about its business and I’m not going to ever come close to it at all because it’s so far away.”
</p><p>
<strong>Wyrobek: </strong>Yeah, this is really fun to talk about. So just to start with what we’re not doing, because most people expect us to use either radar for this or cameras for this. And basically, those don’t work. With radar, you would need such a heavy radar system to see 360 degrees all the way around your drone. And this is really important because of two things to kind of plan in your mind. One is we’re not talking about autonomous driving, where cars are close together. Aircraft never want to be as close together as cars are on a road, right? We’re talking about maintaining hundreds of meters of separation, and so you have to sense at a long distance. And drones don’t have right of way. So what that means is even if a plane’s coming up behind the drone, you’ve got to sense that plane and get out of the way. And so to have enough radar on your drone that you can actually see far enough to maintain that separation in every direction, you’re talking about something that weighs many times the weight of a drone, and it just doesn’t physically close. And so we started there because that’s sort of where we assumed, and many people assume, is the place to start. Then we looked at cameras. Cameras have lots of drawbacks. And fundamentally, we’ve all had this: you’ve taken your phone and tried to take a picture of an airplane, and you look at the picture and you can’t see the airplane. It takes so many pixels and perfectly clean lenses to see an aircraft a kilometer or two away that it really just is not practical or robust enough. And that’s when we went back to the drawing board, and it ended up where we ended up, which is using an array of microphones to listen for aircraft, which works very well at very long distances, to then maintain separation from those other aircraft.
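Zipline hasn’t published the details of its detect-and-avoid pipeline, but the basic acoustic principle can be sketched with a toy time-difference-of-arrival estimate (everything below — the mic spacing, sample rate, and delay — is invented for illustration): two microphones hear the same broadband engine noise a few samples apart, and cross-correlation recovers that delay, hence a bearing to the source.

```python
import numpy as np

fs = 44_100                   # sample rate, Hz
c = 343.0                     # speed of sound, m/s
d = 0.5                       # microphone spacing, m

rng = np.random.default_rng(0)
src = rng.normal(size=fs)     # one second of broadband "engine" noise

true_delay = 20               # samples: mic B hears the source later
mic_a = src
mic_b = np.roll(src, true_delay)

# Cross-correlate over candidate lags; the best-matching lag is the delay.
lags = np.arange(-40, 41)
corr = [np.dot(mic_a, np.roll(mic_b, -lag)) for lag in lags]
est_delay = int(lags[int(np.argmax(corr))])

# Convert the delay to an angle of arrival (plane-wave approximation).
angle = np.degrees(np.arcsin(np.clip(est_delay / fs * c / d, -1.0, 1.0)))
```

A real airborne system needs many microphones, rotor-noise rejection, and tracking over time, but this is the core trick that lets sound work at ranges where cameras fail.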
</p><p>
<strong>Cass: </strong>So yeah, let’s talk about Platform 2 a little bit more, because I should first explain, for listeners who maybe aren’t familiar with Zipline, that these are not the kind of little purely helicopter-like drones. These are fixed-wing aircraft with loiter and hovering capabilities. So they’re not like your Mavic drones and so on. It’s the fixed wing that gives them their capacity for long-distance flight.
</p><p>
<strong>Wyrobek: </strong>Yeah. And maybe to jump into Platform 2, let me start with Platform 1: what does it look like? So Platform 1 is what we’ve been operating around the world for years now. And this basically looks like a small airplane, right? In the industry it’s referred to as a fixed-wing aircraft. And it’s fixed wing because, to solve the problem of going from a metro area to the surrounding countryside, really two things matter: long range and low cost. And a fixed-wing aircraft has something like an 800% advantage in range and cost over something that can hover. And that’s why we did fixed wing, because it actually works for our customers’ needs for that use case. Platform 2 is all about: how do you deliver to homes in metro areas, where you need an incredible amount of precision to deliver to nearly every home? And so Platform 2—we call our drones zips—flies out to the delivery site. Instead of floating a package down to a customer like Platform 1 does, it hovers. Platform 2 hovers and lowers down what we call a droid. And so the droid is on a tether. The drone stays way up high, about 100 meters up, and lowers the droid down. And the droid itself, it can fly, right? So you think of it as the tether does the heavy lifting, but the droid has fans. So if it gets hit by a gust of wind or whatnot, it can still stay very precisely on track and come in and deliver to a very small area, put the package down, and then be out of there seconds later.
</p><p>
<strong>Cass: </strong>So let me get this right. Platform 2 is kind of a combo, fixed wing and rotor wing. It’s like a VTOL. I’m cheating here a little bit because my colleague Evan Ackerman has a great Q&A on the <em><em>Spectrum</em></em> website with you and some of your team members about<a href="https://spectrum.ieee.org/delivery-drone-zipline-design" target="_self"> <u>the nitty-gritty of how that design was evolved</u></a>. But first off, there’s this little droid thing at the end of the tether. How much extra precision do all those fans and stuff give you?
</p><p>
<strong>Wyrobek: </strong>Oh, massive, right? We can come down and hit a target within a few centimeters of where we want to deliver. Like if you have a small back porch, which is really common, right, in a lot of urban areas, or a small place on your roof or something like that, we can still just deliver as long as we have a few feet of open space. And that’s really powerful for being able to serve our customers. And a lot of people think of Platform 2 as, “Hey, it’s a slightly better way of doing maybe a DoorDash-style operation, people in cars driving around.” And to be clear, it’s not slightly better. It’s massively better: much faster, more environmentally friendly. But we have many contracts for Platform 2 in the health space with US health system partners and health systems around the world. And what’s powerful about these customers, in terms of their needs, is they really need to serve all of their customers. And this is where a lot of our engineering effort goes: how do you make a system that doesn’t just kind of work for some folks who can use it if they want to? A health system is like, “No, I want this to work for everybody in my health network.” And so how do we get to that near-100 percent serviceability? That’s what this droid really enables us to do. And of course, it has all these other magic benefits too. It makes some of the hardest design problems in this space much, much easier. The safety problem gets much easier by keeping the drone way up high.
</p><p>
<strong>Cass: </strong>Yeah, how high is Platform 2 hovering when it’s doing its deliveries?
</p><p>
<strong>Wyrobek: </strong>About 100 meters, so 300-plus feet, right? We’re talking about as high up as a football field is long. And so it’s way up there. And it also helps with things like noise, right? We don’t want to live in a future where drones are all around us sounding like swarms of insects. We want drones to make no noise. We want them to just melt into the background. And so it makes that kind of problem much easier as well. And then, of course, the droid gets other benefits, where for many products, we don’t need any packaging at all. We can just deliver the product right onto a table on your porch. And not just from a cost perspective; we’re all familiar with the nightmare of packaging from the deliveries we get. Eliminating packaging just has to be our future. And we’re really excited to advance that future.
</p><p>
<strong>Cass: </strong>From Evan’s Q&A, I know that a lot of effort went into making the droid element look rather adorable. Why was that so important?
</p><p>
<strong>Wyrobek: </strong>Yeah, I like to describe it as sort of a cross between three things, if you kind of picture this: a miniature little fan boat, right, because it has a big fan on the back, so it looks like a little fan boat, combined with sort of a baby seal, combined with a toaster. It sort of has that look to it. And with making it adorable, there’s a bunch of sort of human things that matter, right? I want this to be something that when my grandmother, who’s not tech-savvy, gets these deliveries, it’s approachable. It doesn’t come off as scary. And when you make something cute, not only does it feel approachable, but it also forces you to get the details right so it is approachable, right? The rounded corners, right? This sounds really benign, but with a lot of robots, it turns out if you bump into them, they scratch you. And we want you to be able to bump into this droid and have it be no big deal. And so getting the surfaces right: the surface is made sort of like helmet foam, if you can picture that, right? The kind of thing you wouldn’t be afraid to touch if it touched you. And so getting it both to be something that feels safe and something that actually is safe to be around, those two things just matter a lot. Because again, we’re not designing this for some pilot-style, low-volume thing. Our customers want this in phenomenal volume. And so we really want this to be something that we’re all comfortable around.
</p><p>
<strong>Cass: </strong>Yeah, and one thing I want to pull out from that Q&A as well, because it was an interesting note: you mentioned it has three fans, but they’re rather unobtrusive. In the original design, you had two big fans on the sides, which were great for maneuverability. But you had to get rid of those and come up with a three-fan design. Maybe you can explain why that was.
</p><p>
<strong>Wyrobek: </strong>Yeah, that’s a great detail. So the original design, picture it like this: the package in the middle, and then, on either side of the package, two fans. So when you looked at it, it kind of looked like the package had big mouse ears or something. And everybody had the same reaction. You kind of took this big step back. It was like, “Whoa, there’s this big thing coming down into my yard.” And when you’re doing this kind of user testing, we always joke, you don’t need to bring users in if it already makes you take a step back. And this is one of those things where that’s just not good enough, right, even to start with that kind of refined design. So we got the profile of it smaller. The way we think about it from a design perspective is we want to deliver a large package, so basically, the droid needs to add as small an additional volume around that package as possible. So we spent a lot of time figuring out, “Okay, how do you do that, physically and aesthetically, in a way that also gets that amazing performance?” Because when I say performance, what I’m talking about is we still need it to work when the winds are blowing really hard outside and still deliver precisely. And so it has to have a lot of aero performance to do that and still deliver precisely in essentially all weather conditions.
</p><p>
<strong>Cass: </strong>So I guess what I want to ask you then is: what kind of weight and volume are you able to deliver with this level of precision?
</p><p>
<strong>Wyrobek: </strong>Yeah, yeah. So we’ll be working our way up to eight pounds. I say working our way up because, once you launch a product like this, there’s refinement you can do over time on many layers. But eight pounds, which was driven off, again, these health use cases. So it does basically 100 percent of what our health partners need to do. And it turns out it’s nearly 100 percent of what we want to do in meal delivery. And even in the goods sector, I’m impressed by the percentage of goods we can deliver. For one of the partners we work with, we can deliver over 80 percent of what they have in their big-box store. And yeah, it’s wildly exceeding expectations on nearly every axis there. And volume? It’s big. It’s bigger than a shoebox. I’m trying to think of a good reference to bring it to life, but it basically looks like a small cooler inside. And it can comfortably fit a meal for four, to give you a sense of the amount of food you can fit in there. Yeah.
</p><p>
<strong>Cass: </strong>So we’ve seen this history of Zipline in rural areas, and now we’re talking about expanding operations into more urban areas, but just how urban? I don’t imagine that we’ll see the zips zooming around, say, the very hemmed-in streets here in Midtown Manhattan. So what level of urban are we talking about?
</p><p>
<strong>Wyrobek: </strong>Yeah, so the way we talk about it internally in our design process is basically what we call three-story sprawl. Manhattan is the exception: when we think of New York, we’re not talking about Manhattan, but most of the rest of New York we are talking about, right? Like the Bronx, things like that. You just have this sort of three stories forever. And that’s a lot of the world. Out here in California, that’s most of San Francisco. I think it’s something like 98 percent of San Francisco is that. If you’ve ever been to places like India, the cities are just sort of three stories going on for a really long way. And that’s what we’re really focused on. And that’s also where we provide that incredible value, because that also matches where the hardest traffic situations and things like that can make any other sort of terrestrial on-demand delivery phenomenally late.
</p><p>
<strong>Cass: </strong>Well, no, I live out in Queens, so I agree there aren’t many skyscrapers out there. Although there are quite a few trees and so on, but at the same time, there’s usually some sort of sidewalk availability. So is that kind of what you’re hoping to get into?
</p><p>
<strong>Wyrobek: </strong>Exactly. So as long as you’ve got a porch with a view of the sky or an alley with a view of the sky, it can be literally just a few feet, we can get in there, make a delivery, and be on our way.
</p><p>
<strong>Cass: </strong>And so you’ve done this preliminary test with the FAA, the BVLOS test, and so on. How close do you think you are, working with a lot of partners, to really seeing this become routine commercial operations?
</p><p>
<strong>Wyrobek: </strong>Yeah, yeah. So at relatively limited scale, our operations here in Utah and in Arkansas that are leveraging that FAA approval for beyond-visual-line-of-sight flight operations, that’s been all day, every day now since our approval last year. With Platform 2, we’re really excited. That’s coming later this year. We’re currently in the phase of basically massive-scale testing. So we now have our production hardware, and we’re taking it through a massive ground testing campaign. So picture dozens of thermal chambers and vibration chambers and things like that, just running, to both validate that we have the reliability we need and flush out any issues that we might have missed, so we can address the difference between what we call the theoretical reliability and the actual reliability. And that’s running in parallel with a massive flight test campaign. Same idea, right? We’re slowly ramping up the flight volume as we fly into heavier conditions, really to make sure we know the limits of the system, we know its actual reliability in true scaled operations, so we can get the confidence that it’s ready to operate for people.
</p><p>
<strong>Cass: </strong>So you’ve got Platform 2. What’s kind of next on your technology roadmap for any possible platform three?
</p><p>
<strong>Wyrobek: </strong>Oh, great question. Yeah, I can’t comment on platform three at this time. But I will also say, Zipline is pouring our heart into Platform 2 right now. Getting Platform 2 ready for this-- the way I like to talk about this internally is: today, we fly about four times the equator of the Earth in our operations on average. And that’s a few thousand flights per day. But the demand we have is for more like millions of flights per day, if not beyond. And so on the log scale, right, we’re halfway there. Three orders of magnitude down, three more zeros to come. And the level of testing, the level of systems engineering, the level of refinement required to do that is a lot. And there are so many systems, from weather forecasting to our onboard autonomy and our fleet management systems. And so to highlight one team: our system test team, run by a really impressive individual named<a href="https://www.linkedin.com/in/juanalbanell/" rel="noopener noreferrer" target="_blank"> <u>Juan Albanell</u></a>, has taken us from where we were two years ago, when we had shown the concept at a very prototype stage of this delivery experience and had done the first-order math on the architecture and things like that, through the iterations in test, to actually make sure we had a drone that could fly in all these weather conditions with all the robustness and tolerance required to go to the global scale that Platform 2 is targeting.
</p><p>
<strong>Cass: </strong>Well, that’s fantastic. Well, I think there’s a lot more to talk about to come up in the future, and we look forward to talking with Zipline again. But for today, I’m afraid we’re going to have to leave it there. But it was really great to have you on the show, Keenan. Thank you so much.
</p><p>
<strong>Wyrobek: </strong>Cool. Absolutely, Stephen. It was a pleasure to speak with you.
</p><p>
<strong>Cass: </strong>So today on <em><em>Fixing the Future</em></em>, we were talking with Zipline’s Keenan Wyrobek about the progress of commercial drone deliveries. For <em><em>IEEE Spectrum</em></em>, I’m Stephen Cass, and I hope you’ll join us next time.
</p>]]></description><pubDate>Wed, 17 Apr 2024 15:10:22 +0000</pubDate><guid>https://spectrum.ieee.org/us-drone-delivery-comes-closer</guid><category>Type-podcast</category><category>Fixing-the-future</category><category>Zipline</category><category>Drone-delivery</category><category>Drones</category><dc:creator>Stephen Cass</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/52004785/origin.webp"/></item><item><title>Heat Pumps Go North</title><link>https://spectrum.ieee.org/heat-pump-cold-climate-tech</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/fixing-the-future-podcast-logo.jpg?id=46469653&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/845e76f4" width="100%"></iframe><p>
<strong>Stephen Cass: </strong>Hello and welcome to <em>Fixing the Future</em>, an <em>IEEE Spectrum</em> podcast where we look at concrete solutions to tough problems. I’m your host,<a href="https://spectrum.ieee.org/u/stephen-cass" target="_self"> Stephen Cass</a>, a senior editor at <em>IEEE Spectrum</em>. And before we start, I just want to tell you that you can get the latest coverage from some of <em>Spectrum</em>‘s most important beats, including AI, climate change, and robotics, by signing up for one of our newsletters. These are free, and you just have to go to<a href="https://spectrum.ieee.org/newsletters/" target="_self"> spectrum.ieee.org/newsletters</a> to subscribe.
</p><p>
<a href="https://spectrum.ieee.org/heat-pumps-explained" target="_self">Heat pumps</a> don’t have a reputation as being a particularly glamorous technology. They sort of act like a combination of an air conditioner and a refrigerator, pumping heat out of a home in the summer and pumping it back inside during the winter. But governments around the world increasingly see heat pumps as a chance to make some big improvements in energy efficiency, and some recent technological developments could expand dramatically the number of homes that could employ heat pumps. Here to talk about those developments today, we have Spectrum’s new power and energy editor,<a href="https://spectrum.ieee.org/u/emily_waltz" target="_self"> Emily Waltz</a>, who recently joined the staff after many years contributing to <em>Spectrum</em> as a freelance writer. So Emily, welcome to <em>Spectrum</em> and welcome to <em>Fixing the Future</em>.
</p><p>
<strong>Emily Waltz: </strong>Thanks. I’m glad to be here.
</p><p>
<strong>Cass: </strong>So first off, when we talk about heat pumps, I think one of two pictures form in people’s minds. One is a<a href="https://www.energy.gov/energysaver/geothermal-heat-pumps" rel="noopener noreferrer" target="_blank"> geothermal system</a> in which pipes are buried in the ground outside a home and the ground acts as a heat reservoir where you can dump heat during the summer and then you extract it during the winter. But today we’re going to be focusing on the other type of heat pump, the<a href="https://www.energy.gov/energysaver/air-source-heat-pumps" rel="noopener noreferrer" target="_blank"> air source heat pump</a>. Can you sketch out how that works?
</p><p>
<strong>Waltz: </strong>Yeah. So what’s great about heat pumps is that they transfer heat rather than generate it. And that’s part of what makes them more energy-efficient than other sources of heating. They can both heat and cool a home. And I’ll describe how they work in heating mode. So in heating mode, what they do is they pull ambient heat from outside air and compress it and then release it into the home. And there’s an outdoor unit, which from the exterior looks like a big box with a fan. And then there’s some connection lines and then an indoor unit. And so what happens is the air gets drawn into the system in the outdoor unit. It passes over a heat exchanger, which contains a refrigerant that has a very low boiling point. So the most common refrigerant is called<a href="https://en.wikipedia.org/wiki/R-410A" rel="noopener noreferrer" target="_blank"> R410A</a>, and it has a boiling point at about negative 48 degrees Celsius. So it may be 0 degrees outside, but when that air passes over the refrigerant, the refrigerant boils. So the refrigerant boils, and then it evaporates into a vapor. And then the compressor increases the temperature and pressure so that it becomes this superheated vapor. And the superheated vapor moves to an indoor unit and goes over through a set of coils. And there a fan blows across it, and it moves the heat into the home. So the heat is distributed through the home, usually through ductwork, but there are ways to do it without ductwork too. And then in the summer, the system works in reverse. It pulls warm air out of the home and moves in cooler air.
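</p><p>
The reason transferring heat beats generating it shows up in a heat pump’s coefficient of performance (COP): units of heat delivered per unit of electricity consumed. The thermodynamic ceiling is the Carnot limit, which a short Python sketch can compute (the indoor and outdoor temperatures here are illustrative; real units achieve only a fraction of the ideal figure):
</p>

```python
def carnot_cop_heating(t_indoor_c: float, t_outdoor_c: float) -> float:
    """Ideal (Carnot) heating COP between two temperatures given in Celsius."""
    t_hot = t_indoor_c + 273.15    # indoor temperature in kelvin
    t_cold = t_outdoor_c + 273.15  # outdoor temperature in kelvin
    return t_hot / (t_hot - t_cold)

# Mild winter day: 21 C indoors, 0 C outdoors.
print(round(carnot_cop_heating(21, 0), 1))    # -> 14.0

# Deep cold: 21 C indoors, -15 C outdoors. The ceiling drops sharply,
# which is one reason cold-climate operation is hard.
print(round(carnot_cop_heating(21, -15), 1))  # -> 8.2
```

<p>
Even at a small fraction of these ideal numbers, a heat pump delivers several units of heat per unit of electricity, whereas resistance heating or burning fuel tops out at about one.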
</p><p>
<strong>Cass:</strong> And so what kind of homes are suitable for heat pumps? I mean, obviously, you need some land for a geothermal heat pump because we talk about burying things, but this seems to be able to work on a much smaller footprint in homes in denser areas.
</p><p>
<strong>Waltz: </strong>Yes, that’s right. So as you mentioned, the ground source or geothermal heat pumps, they do require quite a bit of land. But for the air source heat pumps, just a small outdoor space is needed. These can be installed at standalone homes, obviously, but also at townhomes, apartment buildings, and even high-rises. There are ways to make it work. I know that the outdoor units are frequently installed on roofs and on balconies.
</p><p>
<strong>Cass: </strong>So what kind of energy savings can a typical homeowner gain from installing a heat pump?
</p><p>
<strong>Waltz:</strong> Yeah. There was a<a href="https://www.cell.com/joule/abstract/S2542-4351(24)00049-7" rel="noopener noreferrer" target="_blank"> good study published on this last month</a> in the journal <em>Joule</em>. They looked at 550,000 homes that are representative of the entire housing stock in the US. And they looked at both energy use and then energy bills. And the study found that if every home in the United States switched to a heat pump, home energy use, that is the residential sector, would drop by 31 to 47 percent on average. And that national carbon dioxide emissions would fall by 5 to 9 percent overall. So that’s pretty good. But the reductions depend on what kind of heating system is being replaced, how well the home is sealed up and insulated, and whether the home’s electricity comes from renewable sources. So they found that emissions reductions are highest when replacing a fuel oil heating system. But whether that will translate into lowering a home’s heating bill is widely variable. And it depends a lot on what kind of heat pump is installed, so whether it’s a high-efficiency heat pump or a low-efficiency one, so a newer one or an older one, and then what kind of heat’s being replaced and whether the home had previously had air conditioning. But bottom line, what they found is that for homes replacing fuel oil or propane, 87 to 100 percent would see a reduction in their energy bill. That percentage is smaller for natural gas and electric resistance heating.
</p><p>
<strong>Cass: </strong>Wow. That’s still considerable. And this idea, how many homes can this be used in? And this is where I want to turn out to recent developments. So you recently published this terrific story for us, which will be linked to in the show notes, titled “<a href="https://spectrum.ieee.org/cold-climate-heat-pump" target="_self">Heat Pumps Take On Cold Climates</a>”. Can you tell us why heat pumps up to now haven’t fared well in cold climates? And what’s the key new advance that’s changing that?
</p><p>
<strong>Waltz:</strong> Yeah. Yeah. So most air source heat pumps on the market currently work pretty well until the outdoor temperature gets to about 4 degrees Celsius, which is 40 degrees Fahrenheit. Colder than that, they still work, but they’re often operating at less than full capacity. So when the temperature gets down to about negative 15 degrees Celsius, which is 5 degrees Fahrenheit, they stop doing their job. And they switch over to emergency heating mode, which is an all-electric resistance heating. But that’s what’s currently available, and that’s changing. And one of the key advances has been in optimizing how the compressor works in concert with the rest of the system. So that includes controlling the compressor motor speed, improving the timing when the vapor is injected into the compressor. So heat pump manufacturers have been playing with these cycles to optimize them. And it sounds like they finally got it sorted. One manufacturer I spoke with,<a href="https://www.tranetechnologies.com/en/index.html" rel="noopener noreferrer" target="_blank"> Trane Technologies</a>, they found that if they inject refrigerant at just the right time, right when the system begins to lose its capacity to heat, it gives it the boost it needs. So that’s been the main advancement. And there’s also technology that improves the way that the indoor and outdoor units communicate with each other and with a thermostat that optimizes the system.
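</p><p>
To put numbers on that behavior, here is a toy Python model of the operating regimes just described: full capacity above about 4 °C, reduced capacity down to about negative 15 °C, then a switch to electric-resistance emergency heat. The linear ramp between the two thresholds is a simplifying assumption for illustration, not any manufacturer’s actual control curve:
</p>

```python
def heating_mode(outdoor_c: float) -> tuple[str, float]:
    """Return (mode, fraction of rated heat output) for a conventional
    air-source heat pump. Thresholds follow the discussion above; the
    linear ramp between them is an assumption made for illustration."""
    FULL_CAPACITY_C = 4.0   # ~40 F: full capacity at or above this
    CUTOFF_C = -15.0        # ~5 F: resistance backup at or below this
    if outdoor_c >= FULL_CAPACITY_C:
        return ("heat_pump", 1.0)
    if outdoor_c > CUTOFF_C:
        # Degraded output between the two thresholds.
        frac = (outdoor_c - CUTOFF_C) / (FULL_CAPACITY_C - CUTOFF_C)
        return ("heat_pump_reduced", round(frac, 2))
    return ("resistance_backup", 1.0)

print(heating_mode(10))   # -> ('heat_pump', 1.0)
print(heating_mode(-5))   # -> ('heat_pump_reduced', 0.53)
print(heating_mode(-20))  # -> ('resistance_backup', 1.0)
```

<p>
The cold-climate units under development aim to keep the first branch, full heat-pump capacity, all the way down to negative 15 °C and lower, rather than falling through to resistance backup.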
</p><p>
<strong>Cass: </strong>And this was kind of demonstrated in a big test recently, wasn’t it?
</p><p>
<strong>Waltz: </strong>It was. The Department of Energy has set up this challenge. The goal is to get cold climate heat pumps working efficiently at full capacity at negative 15 degrees Celsius and even down as low as negative 26 degrees Celsius. So the agency launched a challenge to inspire companies to achieve that. There are eight companies competing in it, and they’re in the middle of field testing right now.
</p><p>
<strong>Cass: </strong>And where are those field tests taking place? Do you know?
</p><p>
<strong>Waltz:</strong> Yes. They are in several US states, mostly northern states, and in a couple of Canadian provinces.
</p><p>
<strong>Cass: </strong>So how long before we might see these cold weather pumps hit the market?
</p><p>
<strong>Waltz:</strong> Yeah. It depends partly on how you define cold-climate heat pumps. The ones we’re talking about that are in this DOE challenge, I think we’ll see them next year. Both the Department of Energy and Trane representatives I spoke to said, “We should see this in the market by next year.” But it’s important to remember that there is a big upfront cost to installing these. So widespread adoption will probably require government incentives and some good marketing.
</p><p>
<strong>Cass:</strong> You know, with all these great results coming out from these DOE trials and so on, what kind of incentives is the US putting toward heat pumps?
</p><p>
<strong>Waltz: </strong>Right. So the US is putting some pretty good incentives toward it. The federal government offers tax credits, and states will be rolling out rebates to offset the cost of installation, which is very, very high. In the systems I’ve seen, it’s 10 to 20 thousand dollars to install these things. We’ve also seen nine US states pledge last month to accelerate heat pump sales, and 25 governors have vowed to quadruple heat pump sales. So there is an all-out effort in the US to make this happen, and it seems to be working so far, because heat pumps outsold gas furnaces for the second year in a row last year.
</p><p>
<strong>Cass: </strong>So you mentioned some pretty impressive figures there for things like reducing climate emissions and so on. And yes, it depends on what you’re switching from. But why are they so much better than conventional HVAC systems? Is this related to the electrification of everything?
</p><p>
<strong>Waltz:</strong> Yeah. So it’s partly because they run on electricity rather than fossil fuels. But it’s also because they transfer heat rather than generate it. So I mean, there is all electric heating, but heat pumps are different. So with electric resistance heating an electric current passes through conductive materials and releases heat. But with heat pumps, they’re powered by electricity. They’re plugged in. But the electricity powers equipment that enables it to transfer and concentrate heat. So they’re more efficient than all-electric. So it’s a combination of those things and the fact that it’s not relying on fossil fuels.
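</p><p>
That difference is easy to quantify. Resistance heating delivers at most one unit of heat per unit of electricity (a COP of 1), while a typical heat pump delivers around three. A back-of-envelope Python comparison (the seasonal heat demand and the COP value are illustrative assumptions):
</p>

```python
def electricity_needed_kwh(heat_demand_kwh: float, cop: float) -> float:
    """Electricity required to deliver a given amount of heat."""
    return heat_demand_kwh / cop

season_heat_kwh = 10_000  # heat needed over a winter (illustrative)

resistance_kwh = electricity_needed_kwh(season_heat_kwh, cop=1.0)
heat_pump_kwh = electricity_needed_kwh(season_heat_kwh, cop=3.0)

print(resistance_kwh)        # -> 10000.0
print(round(heat_pump_kwh))  # roughly 3333 kWh: about a third of the electricity
```

<p>
The same arithmetic is why a heat pump can still cut emissions on a fossil-heavy grid: even after generation and transmission losses, moving three units of heat with one unit of electricity usually beats burning the fuel on-site.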
</p><p>
<strong>Cass: </strong>But is there a danger that all the advantages we could gain from heat pumps will be wiped out depending on how the electricity is generated? Does this really have to go hand-in-glove with renewables to see these advantages? Or is this something that even if you aren’t changing your generation profile, you’re still going to see some advantages?
</p><p>
<strong>Waltz: </strong>Right. I think you’ll still see advantages. I mean, if electricity comes from renewable energy, then that’s a bonus. But these are so much more energy efficient that even if they don’t come-- even if you’re not powered by renewables, it’s still an advantage.
</p><p>
<strong>Cass: </strong>And Europe seems to be very interested in heat pumps as well. Why is that?
</p><p>
<strong>Waltz:</strong> Yeah. So Russia’s gas exports to Europe have fallen sharply because of the tensions over Ukraine over the last couple of years. And so Europe is pushing pretty hard for people to replace their gas heating systems with heat pumps.<a href="https://energy.ec.europa.eu/topics/energy-efficiency/heat-pumps_en" rel="noopener noreferrer" target="_blank"> The European Commission has called for expedited deployment of heat pumps</a>, and they also recommended that member states phase out the use of fossil fuel heating systems in all buildings by 2035. And so we’re seeing many European countries subsidizing residential heat pump installation and offering grants to homeowners. Yeah. So we’re seeing a pretty hard push in Europe.
</p><p>
<strong>Cass: </strong>I just want to come back to geothermal heat pumps. It’s still the case, though, that if you have the ground, the geothermal system is more efficient than these air source heat pumps, in an ideal kind of world.
</p><p>
<strong>Waltz: </strong>Yes. Especially if you live in a very cold climate because underground is going to maintain a more consistent temperature. And so the source of the heat that’s coming in is already warmer. So yes, they can be more efficient. They just require a lot of land. I was looking at one commercial developer and they were sketching out what that might look like in a home. And it looked like it was almost probably a quarter of an acre that it took up. And they have to dig up trenches. And I mean, your yard, your garden is all dug up. But I love the idea of it. I do have some land and I was thinking about doing it myself.
</p><p>
<strong>Cass:</strong> Well, you’ll have to let us know how that goes and maybe give us a peek into how your bills have been going. Well, that is all fascinating, but I’m afraid we’ll have to leave it there. But thanks very much, Emily, for coming on and making your first appearance on <em>Fixing the Future</em>.
</p><p>
<strong>Waltz:</strong> Well, thank you. I enjoyed it.
</p><p>
<strong>Cass: </strong>So today we were talking with Emily Waltz about cold climate heat pumps. For <em>IEEE Spectrum</em>, I’m Stephen Cass, and I hope you’ll join us next time.
</p>]]></description><pubDate>Wed, 03 Apr 2024 13:39:12 +0000</pubDate><guid>https://spectrum.ieee.org/heat-pump-cold-climate-tech</guid><category>Heat-pumps</category><category>Type-podcast</category><category>Fixing-the-future</category><category>Climate</category><dc:creator>Stephen Cass</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/46469653/origin.jpg"/></item><item><title>Exploding Chips, Meta's AR Hardware, and More</title><link>https://spectrum.ieee.org/secure-integrated-circuits-meta-ar</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/fixing-the-future-podcast-logo.jpg?id=46469653&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/a1decc64" width="100%">
</iframe><p><strong>Stephen Cass:</strong> Hello and welcome to <em>Fixing the Future</em>, an <em>IEEE Spectrum </em>podcast where we look at concrete solutions to some big problems. I’m your host <a href="https://spectrum.ieee.org/u/stephen-cass" target="_self">Stephen Cass</a>, a senior editor at <em>IEEE Spectrum</em>. And before we start, I just want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. Today we’re going to be talking with <a href="https://spectrum.ieee.org/u/samuel-k-moore" target="_self">Samuel K. Moore</a>, who follows a semiconductor beat for us like a charge carrier in an electric field. Sam, welcome to the show.</p><p><strong>Samuel K. Moore: </strong>Thanks, Stephen. Good to be here.</p><p><strong>Cass:</strong> Sam, you recently attended the Big Kahuna Conference of the semiconductor research world, <a href="https://www.isscc.org/" rel="noopener noreferrer" target="_blank">ISSCC</a>. What exactly is that, and why is it so important?</p><p><strong>Moore:</strong> Well, besides being a difficult-to-say acronym, it actually stands for the IEEE International Solid State Circuits Conference. And this is really one of the big three of the semiconductor research world. It’s been going on for more than 70 years, which means it’s technically older than the IEEE in some ways. We’re not going to get into that. And it really is sort of the crème de la crème if you are doing circuits research. So there is another conference for inventing new kinds of transistors and other sorts of devices. This is the conference that’s about the circuits you can make from them. And as such, it’s got all kinds of cool stuff. I mean, we’re talking about like 200 or so talks about processors, memories, radio circuits, power circuits, brain-computer interfaces. 
There’s kind of really something for everybody.</p><p><strong>Cass:</strong> So while we’re there, we send you this monster thing and ask you to fish out— They’re not all going to be— Let’s be honest. They’re not all going to be gangbusters. What were the ones that really caught your eye?</p><p><strong>Moore:</strong> All right. So I’m going to tell you actually about a few things. First off, there’s a potential revolution in analog circuits that’s brewing. Just saw the beginnings of it. There’s a cool upcoming chip that does AI super efficiently by mixing its memory and computing resources. We had a peek at Meta’s future AR glasses or the chip for them anyways. And finally, there was a bunch of very cool security stuff, including a circuit that self-destructs.</p><p><strong>Cass:</strong> Oh, that sounds cool. Well, let’s start off with the analog stuff because you were saying this is like really a way of kind of almost saying bye-bye to some electronic analog stuff. So this is fascinating.</p><p><strong>Moore: </strong>Yeah. So this really kind of kicked the conference off with a bang because it was one of the plenary sessions. It was literally one of the first things that was said. And it had to come from the right person, and it kind of did. It was IEEE fellow and sort of analog establishment figure from the Netherlands <a href="https://people.utwente.nl/b.nauta" rel="noopener noreferrer" target="_blank">Bram Nauta</a>. And it was a kind of a real, like, “We’re doing it all wrong kind of moment,” but it was important because the stakes are pretty high. Basically, Moore’s Law has been really good for digital circuits, the stuff that you use to make the processing parts of CPUs and in its own way for memory but not so much for analog. Basically, you kind of look down the road and you are really not getting any better transistors and processes for analog going forward. 
And you’re starting to see this in places, even in high-end processors, the parts that kind of do the I/O. They’re just not advancing. They’re using super cutting-edge processes for the compute part and using the same I/O chiplet for like four or five generations.</p><p><strong>Cass: </strong>So this is like when you’re trying to see things from the outside world. So like your smartphone, it needs these converters to digitize your voice but also to handle the radio signal and so on.</p><p><strong>Moore:</strong> Exactly. Exactly. As they say, the world is analog. You have to make it digital to do the computing on it. So what you’re saying about a radio circuit is actually a great example because you’ve got the antenna and then you have to amplify, you have to mix in the carrier signal and stuff, but you have to amplify it. You have to amplify it really nicely quite linearly and everything like that. And then you feed it to your analog to digital converter. What Nauta is pointing out is that we’re not really going to get any better with this amplifier. It’s going to continue to burn tens or hundreds of times more power than any of the digital circuits. And so his idea is let’s get rid of it. No more linear amplifiers. Forget it. Instead, what he’s proposing is that we invent an analog-to-digital converter that doesn’t need one. So literally--</p><p><strong>Cass:</strong> Well, why haven’t we done this before? It sounds very obvious. You don’t like a component. You throw it out. But obviously, it was doing <em>something</em>. And how do you make up that difference with the pure analog-to-digital converter?</p><p><strong>Moore:</strong> Well, I can’t tell you completely how it’s done, especially because he’s still working on it. But his math basically checks out. And this is really a question— this is really a question of Moore’s Law. 
It’s not so much, “Well, what are we doing now?” It’s, “What can we do in the future?” If we can’t get any better with our analog parts in the future, let’s make everything out of digital, digitize immediately. And let’s not worry about any of the amplification part.</p><p><strong>Cass: </strong>But is there some kind of trade-off being made here?</p><p><strong>Moore: </strong>There is. So right now, you’ve got your linear amplifier consuming milliwatts and your analog to digital converter, which is a thing that can take advantage of Moore’s Law going forward because it’s mostly just comparators and capacitors and stuff that you can deal with. And that consumes only microwatts. So what he’s saying is, “We’ll make the analog-to-digital converter a little bit worse. It’s going to consume a little more power. But the overall system is going to consume less if you take the whole system as a piece.” And that has been part of the problem is that the figures of merit, the things that you measure how good is your linear amplifier, is really just about the linear amplifier rather than worrying about like, “Well, what’s the whole system consuming?” And this looks like, if you care about the whole system, which is kind of what you have to, then this no longer really makes sense.</p><p><strong>Cass: </strong>This also sounds like it gets closer to the dream of the pure <a href="https://spectrum.ieee.org/tag/software-defined-radio" target="_self">software-defined radio</a>, which is you take basically an idea where you take your CPU, you connect one pin to an antenna, and then almost from DC to daylight, you’re able to handle everything in software-defined functions.</p><p><strong>Moore: </strong>That’s right. That’s right. Digital can take advantage of Moore’s Law. Moore’s Law is continuing. It’s slowing, but it’s continuing. And so that’s just sort of how things have been creeping along. And now it’s finally getting kind of to the edge, to that first amplifier. 
So anyways, he was kind of apprehensive about giving this talk because it is poo-pooing on quite a lot of things actually at this conference. So he told me he was actually pretty nervous about it. But it had some interest. I mean, there were some engineers from Apple and others that approached him that said, “Yeah, this kind of makes sense. And maybe we’ll take a look at this.”</p><p><strong>Cass:</strong> So fascinating. So it appears to be solving this linear amplifier efficiency bottleneck. But there was another bottleneck that you mentioned, which is the memory wall.</p><p><strong>Moore: </strong>Yes.</p><p><strong>Cass:</strong> It’s a memory wall.</p><p><strong>Moore:</strong> Right. So the memory wall is this sort of longstanding issue in computing. Particularly, it started off in high-performance computing, but it’s kind of in all computing now, where the amount of time and energy needed to move a bit from memory to the CPU or the GPU is so much bigger than the amount of time and energy needed to move a bit from one part of the GPU or CPU to another part of the GPU or CPU, staying on the silicon, essentially.
You see it in that giant chip. If you remember, <a href="https://spectrum.ieee.org/cerebras-chip-cs3" target="_self">Cerebras has a wafer-size chip</a>. It’s as big as your face. And that is—</p><p><strong>Cass: </strong>Oh, that sounds like an incredible chip. And we’ll definitely put the link to that in the show notes for this because there’s a great picture. It has to be kind of seen to be believed, I think. There’s a great picture of this monster, monster thing. But sorry.</p><p><strong>Moore: </strong>Yeah, and that is an extreme solution to the memory wall problem. But there’s all sorts of other cool research in this. And one of the best is sort of to bring the compute to the memory so that your bits just don’t have to move very far. There’s a bunch of different— well, a whole mess of different ways to do this. There were like nine talks or something on this when I was there, and there are even very cool ways that we’ve written about in Spectrum, where you can actually do sort of AI calculations in memory using analog, where the--
Trying to do anything in analog is always [crosstalk].</p><p><strong>Cass: </strong>So before digital computers, like right up into the ‘70s, <a href="https://spectrum.ieee.org/try-this-new-analog-computer" target="_self">analog computers</a> were actually quite competitive, whereby you set up your problem using <a href="https://en.wikipedia.org/wiki/Operational_amplifier" rel="noopener noreferrer" target="_blank">operational amplifiers</a>, which is why op amps are called op amps. And you set your equation all up, and then you produce results. And this is basically like taking one of those analog operations where the behavior of the components models a particular mathematical equation. And you’re taking a little bit of analog computing, and you’re putting it in because it matches with one particular calculation that’s used in AI.</p><p><strong>Moore: </strong>Exactly, yeah, yeah. So it’s a very fruitful field, and people are still chugging along at it. I met a guy at ISSCC. His name is <a href="https://www.axelera.ai/our-team/evangelos-eleftheriou" rel="noopener noreferrer" target="_blank">Evangelos Eleftheriou</a>. He is the CTO of a company called <a href="https://www.axelera.ai/" rel="noopener noreferrer" target="_blank">Axelera</a>, and he is a veteran of one of these projects that was doing analog AI at IBM. And he came to the conclusion that it was just not ready for prime time. So instead, he found himself a digital way of doing the AI compute in memory. And it hinges on basically interleaving the compute so tightly with the cache memory that they’re kind of a part of each other. That required, of course, coming up with a sort of new kind of SRAM, which he was very hush-hush about, and also kind of doing things in integer math instead of floating point math. Most of what you see in the AI world, like NVIDIA and stuff like that, their primary calculations are in floating point numbers. 
Now, those floating point numbers are getting smaller and smaller. They’re doing more and more in just 8-bit floating point, but it’s still floating point. This depends on integers instead just because the architecture depends on it.</p><p><strong>Cass: </strong>Yeah, no, I like integer math, actually, because I do a lot of this retrocomputing. A lot of that is in this where you actually end up doing a lot of <a href="https://www.forth.com/starting-forth/5-fixed-point-arithmetic/" rel="noopener noreferrer" target="_blank">integer math</a>. And the truth is that you realize, oh, the Forth programming language also is famously very [integer]-based. And for a lot of real-world problems, you can find a perfectly acceptable scale factor that lets you use integers with no appreciable difference in precision. Floating points are kind of more general purpose. But this really had some impressive trade-offs in the benchmarks.</p><p><strong>Moore: </strong>Yeah, whatever they managed, despite any trade-offs they might have had to make for the math, they actually did very well. Now this is for— their aim is what’s called an edge computer. So it’s the kind of thing that would be running a bunch of cameras in sort of a traffic management situation or things like that. It was very machine-vision-oriented, but it’s like a computer or a card that you’d stick into a server that’s going to sit on-premises and do its thing. And when they ran a typical machine vision benchmark, they were able to do 2,500 frames per second. So that’s a lot of cameras potentially, especially when you consider most of these cameras are like— they’re not going 240.</p><p><strong>Cass: </strong>Even if you take it at a standard frame rate of, say, 20 frames per second, that’s over 100 cameras that you’re processing simultaneously.</p><p><strong>Moore: </strong>Yeah, yeah. And they were able to actually do this at like 353 frames per watt, which is a very good figure. 
And it’s performance per watt that really is kind of driving everything at the edge. If you ever want this sort of thing to go in a car or any kind of moving vehicle, everybody’s counting the watts. So that’s the thing. Anyways, I would really keep my eyes out for them. They are taping out this year. Should have some silicon later. Could be very cool.</p><p><strong>Cass: </strong>So speaking of that, getting into the chips and making a difference, you can make changes sort of on the plane of the chip. But you and I have found some interesting stuff on 3D chip technology, which I know has been a thread of your coverage in recent years.</p><p><strong>Moore: </strong>Yeah, I’m all about the 3D chip technology. You’re finding 3D chip technology pretty much all the time in advanced processors. If you look at what Intel’s doing with its AI accelerators for supercomputers, if you look at what AMD is doing for basically all of its stuff now, they’re really taking advantage of being able to stack one chip on top of another. And this is, again, Moore’s Law slowing down, not getting as much in the two-dimensional shrinking as we used to. And we really can’t expect to get that much. And so if you want more transistors per square millimeter, which really is how you get more compute, you’ve got to start putting one slice of silicon on top of the other slice of silicon.</p><p><strong>Cass: </strong>So as we’re getting towards—instead of transistors per square millimeter, it’s going to be per cubic millimeter in the future.</p><p><strong>Moore: </strong>You could measure it that way. Thankfully, these things are so slim and sort of—</p><p><strong>Cass: </strong>Right. So it looks like a—</p><p><strong>Moore: </strong>Yeah, it looks basically the same form factor as a regular chip. So this 3D tech, the most advanced of it anyways, is powered by something called hybrid bonding, which I’m afraid I have failed to understand where the word hybrid comes in at all. 
But really it is kind of making a cold weld between the copper pads on top of one chip and the copper pads on another one.</p><p><strong>Cass: </strong>Just explain what a cold weld is, because I have heard about cold welds, but actually, when it comes to— it’s a problem when you’re <a href="https://ntrs.nasa.gov/api/citations/19670009180/downloads/19670009180.pdf" rel="noopener noreferrer" target="_blank">building things in outer space.</a></p><p><strong>Moore: </strong>Oh, oh, that. Exactly that. So how it works here is— so picture you build your transistors on the plane of the silicon and then you’ve got layer upon layer of interconnects. And those terminate in a set of sort of pads at the top, okay? You’ve got the same thing on your other chip. And what you do is you put them face-to-face, and there’s going to be like a little bit of gap between the copper on one and the copper on the other, but the insulation around them will just stick together. Then you heat them up just a little bit and the copper expands and just kind of jams itself together and sticks.</p><p><strong>Cass: </strong>Oh, it’s almost like brazing, actually.</p><p><strong>Moore: </strong>I’ll take your word for it. I genuinely don’t know what that is.</p><p><strong>Cass: </strong>I could be wrong. I’m sure a nice metallurgist out there will correct me. But yes, I see where you’re going with this. You just need a little bit of whoosh. And then everything kind of sticks together. You don’t have to go in with your soldering iron and do the heavy—</p><p><strong>Moore: </strong>There’s no solder involved. And that is actually really, really key because it means almost like an order of magnitude increase in the density at which you can have these connections. We’re talking about like having one connection every few microns. So that adds up to like 200,000 connections per square millimeter if my math is right. It’s actually quite a lot. 
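Moore’s math does check out; a quick sketch, with an assumed pitch of 2.2 microns standing in for his "one connection every few microns":

```python
# Rough check of the hybrid-bonding density figure. The pitch is an
# assumed value standing in for "one connection every few microns".
pitch_um = 2.2                   # assumed bond pitch in microns
per_mm = 1000 / pitch_um         # connections along one millimeter
density_per_mm2 = per_mm ** 2    # connections per square millimeter

print(round(density_per_mm2))    # roughly 200,000
```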
And it’s really enough to make the distances from one part of one piece of silicon to a part of another the same kind as if they were all just built on one piece of silicon. It’s like Cerebras did it all big in two dimensions. This is folding it up and getting essentially the same kind of connectivity, the same low energy per bit, the same low latency per bit.</p><p><strong>Cass: </strong>And this is where Meta came in.</p><p><strong>Moore:</strong> Yeah. So Meta has been showing up at this conference and other conferences. I’ve noticed them on panels sort of talking about what they would want from chip technology for the ideal pair of augmented reality glasses. The talk they gave today was like— the point was you really just don’t want a shoebox walking around on your face. That’s just not how—</p><p><strong>Cass: </strong>That sounds like a very <a href="https://variety.com/2024/digital/news/zuckerberg-apple-vision-pro-meta-quest-better-product-1235910851/" rel="noopener noreferrer" target="_blank">pointed jab</a> at the moment, perhaps.</p><p><strong>Moore: </strong>Right, it does. Anyways, it turns out what they want is 3D technology because it allows them to pack in more performance, more silicon performance, in an area that might actually fit into something that looks like a pair of glasses that you might actually want to wear. And again, not flinging the bits around as far would probably reduce the power consumption of said chip, which is very important because you don’t want it to be really hot. You don’t want a really hot shoebox on your face. And you want it to last a long time. You don’t want to have to keep charging it. So <a href="https://spectrum.ieee.org/meta-ar-glasses" target="_self">what they showed for the first time</a>, as far as I can tell, is sort of the silicon that they’ve been working on for this. This is a custom machine learning chip. 
It’s meant to do the kind of neural network stuff that you just absolutely need for augmented reality. And what they had was a roughly four millimeter by four millimeter chip that’s actually made up of two chips that are hybrid bonded together.</p><p><strong>Cass:</strong> And you need this stuff because you need the chip to be able to do all this computer vision processing to process what’s going on in the environment and produce some sort of semantic understanding that you can overlay things on. This is why machine learning is so, so important to these applications, or AI in general. Yeah.</p><p><strong>Moore:</strong> Exactly, yeah. And you need that AI to be right there in your glasses as opposed to out in the cloud or even in a nearby server. Anything other than actually in the device is going to give you too much latency. Anyway, so this chip was actually two 3D stacked chips. And what was very cool about this is they really made the 3D point because they had a version that was just the 2D, just like they had half of it. They tested the combined one, and they tested the half one. So the 3D stacked one was amazingly better. It wasn’t just twice as good. Basically, in their test, they tracked two hands, which is very important, obviously, for augmented reality. It has to know where your hands are. So that was the thing they tested. So the 3D chip was able to track two hands, and it used less energy than the ordinary 2D chip did when it was only tracking one hand. So 3D is a win for Meta clearly. We’ll see what the final product is like and whether anybody actually wants to use it. But it’s clear that this is the technology that’s going to get them there if they’re ever going to get there.</p><p><strong>Cass: </strong>So jumping to another track, you mentioned security at the top. 
And I love security because there seems to be no limit to how paranoid you can be and yet still not always be able to keep up with the real world. <em>Spectrum</em> has long covered the history of electronic intelligence spying. We had this great piece on <a href="https://spectrum.ieee.org/the-crazy-story-of-how-soviet-russia-bugged-an-american-embassys-typewriters" target="_self">how the Russians spied on American typewriters</a> by embedding circuitry directly into the covers of the typewriters. It’s a crazy story, but you attended the chip security track. And I’m really eager to hear about <a href="https://spectrum.ieee.org/hardwired-to-self-destruct" target="_self">the crazy ideas you heard there</a>— or as it turns out, not so crazy ideas.</p><p><strong>Moore:</strong> Right. You’re not paranoid if they’re really out to get you. So yeah, no, this was some real <em>Mission: Impossible</em> stuff. I mean, you could kind of envision Ving Rhames and Simon Pegg hunched over a circuit board while <a href="https://youtu.be/U8Q2MgdMskQ?si=hLj7Pg1mWrw73SRd" rel="noopener noreferrer" target="_blank">Tom Cruise was running</a> in the background. It was very cool. So I want to start with that vision of like somebody hunched over a circuit board that they’ve stolen and they’re trying to crack an encryption code or whatever and they’ve got a little probe on one exposed piece of copper. <a href="https://www.engineering.columbia.edu/faculty/mingoo-seok" rel="noopener noreferrer" target="_blank">A group at Columbia</a> and Intel came up with countermeasures for that. They invented a circuit that would reside basically on each pin going out of a processor, or you could have it on the memory side if you wanted, and that can actually detect even the most advanced probe. So when you touch these probes to the line, there’s like a very, very slight change in capacitance. 
I mean, if you’re using a really high-end probe, it’s very, very slight. Larger probes, it’s huge. [laughter] You never think that the CPU is actually paying attention when you’re doing this. With this circuit, it can. It will know that there’s a probe on a line, and it can take countermeasures like, “Oh, I’m just going to scramble everything. You’re never going to find any secrets from this.” So again, the countermeasures, what it triggers, they left up to you. But the circuit was very cool because now your CPU can know when someone’s trying to hack it.</p><p><strong>Cass: </strong>My CPU always knows I’m trying to hack it. It’s evil. But yes, I’m just trying to debug it, nothing else. But that’s actually pretty cool. And then there was another one where, yeah, again, you were going after another— the <a href="https://www.utexas.edu/" rel="noopener noreferrer" target="_blank">University of Texas at Austin</a> was also doing this thing where it could go after even non-physical probes, I think.</p><p><strong>Moore: </strong>So you don’t have to— you don’t always have to touch things. You can use the electromagnetic emissions from a chip as sort of what’s called a side channel attack. Just sort of changes in the emissions from the chip when it’s doing particular things can leak information. So what the UT Austin team did was basically they made the circuitry that kind of does the encryption, the sort of key encryption circuitry. They modified it in a way so that the signature was just sort of a blur. And it still worked well. It did its job in a timely manner and stuff like that. But if you hold your EM sniffer up to it, you’re never going to figure out what the encryption key is.</p><p><strong>Cass: </strong>But I think you said you had one that was your absolute favorite.</p><p><strong>Moore: </strong>Yes. It’s totally my favorite. I mean, come on. How could I not like this? They invented a circuit that self-destructs. 
I got to tell you what the circuit is first because this is also a cool and—</p><p><strong>Cass: </strong>This is a different group.</p><p><strong>Moore: </strong>This is a group at <a href="https://www.uvm.edu/" rel="noopener noreferrer" target="_blank">University of Vermont</a> and <a href="https://www.marvell.com/" rel="noopener noreferrer" target="_blank">Marvell Technology</a>. And what they came up with was a physical unclonable function circuit that—</p><p><strong>Cass: </strong>You’re going to have to unpack that.</p><p><strong>Moore: </strong>Yeah, let me start with that. A physical unclonable function is basically— there are always going to be very, very slight differences in each device on a chip, such that if you were to sort of measure those differences, every chip would be different. Every chip would have sort of its unique fingerprint. So people have invented these physical unclonable function circuits. And they work great in some ways, but they’re actually very hard to make consistent. You don’t want to use this chip fingerprint as your security key if that fingerprint changes with temperature or as the chip ages. [laughter] So those are problems that different groups have come up with different solutions to solve. The Vermont group had their own solution. It was cool. But what I loved the most was that if the key is compromised or in danger of being compromised, for instance, somebody’s got a probe on it, [laughter] the circuit will actually destroy itself, literally destroy itself. Not in a sparks and smoke kind of way.</p><p><strong>Cass:</strong> Boo.</p><p><strong>Moore:</strong> I know. But at the micro level, it’s kind of like that. Basically, they just jammed the voltage up so high that there’s enough current in the long lines that copper atoms will actually be blown out of position. It will literally create voids and open circuits. 
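As a toy sketch of the physical unclonable function idea, and emphatically not the Vermont and Marvell design: fingerprint bits come from comparing simulated device-to-device variations, and wiping out those variations, as the self-destruct does physically, makes the fingerprint unrecoverable.

```python
import hashlib

# Toy model of a PUF: each chip has tiny random manufacturing
# variations (here, simulated per-element values). Comparing pairs
# of elements yields a bit string unique to that chip, which we
# hash down into a compact fingerprint.
def fingerprint(variations):
    bits = "".join(
        "1" if variations[i] > variations[i + 1] else "0"
        for i in range(0, len(variations) - 1, 2)
    )
    return hashlib.sha256(bits.encode()).hexdigest()[:16]

chip_a = [1.02, 0.98, 1.05, 1.01, 0.97, 1.03, 1.00, 0.99]
chip_b = [0.99, 1.04, 1.00, 1.02, 1.06, 0.95, 0.98, 1.01]

print(fingerprint(chip_a) == fingerprint(chip_b))  # False: each chip is unique

# "Self-destruct": physically scrambling the elements (voids, blown
# lines) changes the variations, so the original key is gone for good.
damaged = [0.0 for _ in chip_a]
print(fingerprint(damaged) == fingerprint(chip_a))  # False
```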
At the same time, the voltage is again so high that the insulation in the transistors will start to get compromised, which is an ordinary aging effect, but they’re accelerating it greatly. And so you wind up basically with gobbledygook. Your fingerprint is gone. You could never countermeasure— sorry, you could never counterfeit this chip. You couldn’t say, well, “I got this,” because it’ll have a different fingerprint. It’s definitely not like— it won’t register as the same chip.</p><p><strong>Cass:</strong> So not only will it not work, but it’s not like blowing fuses. There are memory protection systems where, because you don’t want someone downloading your firmware, you send a little pulse through that blows a fuse. But if you really want to, you could crack it open. You could decap that chip and see what’s going on. This is scorched earth internally.</p><p><strong>Moore:</strong> Right, right. At least for the part that makes the physical unclonable function, that is essentially destroyed. And so if you encounter that chip and it doesn’t have the right fingerprint, which it won’t, you know it’s been compromised.</p><p><strong>Cass:</strong> Wow. Well, that is fascinating and very cool. But I’m afraid that’s all we have time for today. So thanks so much for coming on and talking about IISSCC.</p><p><strong>Moore: </strong>ISSCC. Oh, yeah. Thanks, Stephen. It was a great time.</p><p><strong>Cass: </strong>So today on <em>Fixing the Future</em>, we were talking with Samuel K. Moore about the latest developments in semiconductor technology. 
For <em>IEEE Spectrum</em>‘s <em>Fixing the Future</em>, I’m Stephen Cass, and I hope you’ll join us next time.</p>]]></description><pubDate>Wed, 20 Mar 2024 09:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/secure-integrated-circuits-meta-ar</guid><category>Integrated-circuits</category><category>Meta</category><category>Semiconductors</category><category>Type-podcast</category><category>Fixing-the-future</category><dc:creator>Samuel K. Moore</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/46469653/origin.jpg"/><enclosure length="10879774" type="application/pdf" url="https://ntrs.nasa.gov/api/citations/19670009180/downloads/19670009180.pdf"/><itunes:explicit/><itunes:subtitle>Stephen Cass: Hello and welcome to Fixing the Future, an IEEE Spectrum podcast where we look at concrete solutions to some big problems. I’m your host Stephen Cass, a senior editor at IEEE Spectrum. And before we start, I just want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. Today we’re going to be talking with Samuel K. Moore, who follows a semiconductor beat for us like a charge carrier in an electric field. Sam, welcome to the show. Samuel K. Moore: Thanks, Stephen. Good to be here. Cass: Sam, you recently attended the Big Kahuna Conference of the semiconductor research world, ISSCC. What exactly is that, and why is it so important? Moore: Well, besides being a difficult-to-say acronym, it actually stands for the IEEE International Solid State Circuits Conference. And this is really one of the big three of the semiconductor research world. It’s been going on for more than 70 years, which means it’s technically older than the IEEE in some ways. We’re not going to get into that. 
And it really is sort of the crème de la crème if you are doing circuits research. So there is another conference for inventing new kinds of transistors and other sorts of devices. This is the conference that’s about the circuits you can make from them. And as such, it’s got all kinds of cool stuff. I mean, we’re talking about like 200 or so talks about processors, memories, radio circuits, power circuits, brain-computer interfaces. There’s kind of really something for everybody. Cass: So while we’re there, we send you this monster thing and ask you to fish out— They’re not all going to be— Let’s be honest. They’re not all going to be gangbusters. What were the ones that really caught your eye? Moore: All right. So I’m going to tell you actually about a few things. First off, there’s a potential revolution in analog circuits that’s brewing. Just saw the beginnings of it. There’s a cool upcoming chip that does AI super efficiently by mixing its memory and computing resources. We had a peek at Meta’s future AR glasses or the chip for them anyways. And finally, there was a bunch of very cool security stuff, including a circuit that self-destructs. Cass: Oh, that sounds cool. Well, let’s start off with the analog stuff because you were saying this is like really a way of kind of almost saying bye-bye to some electronic analog stuff. So this is fascinating. Moore: Yeah. So this really kind of kicked the conference off with a bang because it was one of the plenary sessions. It was literally one of the first things that was said. And it had to come from the right person, and it kind of did. It was IEEE fellow and sort of analog establishment figure from the Netherlands Bram Nauta. And it was a kind of a real, like, “We’re doing it all wrong kind of moment,” but it was important because the stakes are pretty high. 
Basically, Moore’s Law has been really good for digital circuits, the stuff that you use to make the processing parts of CPUs and in its own way for memory but not so much for analog. Basically, you kind of look down the road and you are really not getting any better transistors and processes for analog going forward. And you’re starting to see this in places, even in high-end processors, the parts that kind of do the I/O. They’re just not advancing. They’re using super cutting-edge processes for the compute part and using the same I/O chiplet for like four or five generations. Cass: So this is like when you’re trying to see things from the outside world. So like your smartphone, it needs these converters to digitize your voice but also to handle the radio signal and so on. Moore: Exactly. Exactly. As they say, the world is analog. You have to make it digital to do the computing on it. So what you’re saying about a radio circuit is actually a great example because you’ve got the antenna and then you have to amplify, you have to mix in the carrier signal and stuff, but you have to amplify it. You have to amplify it really nicely quite linearly and everything like that. And then you feed it to your analog to digital converter. What Nauta is pointing out is that we’re not really going to get any better with this amplifier. It’s going to continue to burn tens or hundreds of times more power than any of the digital circuits. And so his idea is let’s get rid of it. No more linear amplifiers. Forget it. Instead, what he’s proposing is that we invent an analog-to-digital converter that doesn’t need one. So literally-- Cass: Well, why haven’t we done this before? It sounds very obvious. You don’t like a component. You throw it out. But obviously, it was doing something. And how do you make up that difference with the pure analog-to-digital converter? Moore: Well, I can’t tell you completely how it’s done, especially because he’s still working on it. 
But his math basically checks out. And this is really a question— this is really a question of Moore’s Law. It’s not so much, “Well, what are we doing now?” It’s, “What can we do in the future?” If we can’t get any better with our analog parts in the future, let’s make everything out of digital, digitize immediately. And let’s not worry about any of the amplification part. Cass: But is there some kind of trade-off being made here? Moore: There is. So right now, you’ve got your linear amplifier consuming milliwatts and your analog to digital converter, which is a thing that can take advantage of Moore’s Law going forward because it’s mostly just comparators and capacitors and stuff that you can deal with. And that consumes only microwatts. So what he’s saying is, “We’ll make the analog-to-digital converter a little bit worse. It’s going to consume a little more power. But the overall system is going to consume less if you take the whole system as a piece.” And that has been part of the problem is that the figures of merit, the things that you measure how good is your linear amplifier, is really just about the linear amplifier rather than worrying about like, “Well, what’s the whole system consuming?” And this looks like, if you care about the whole system, which is kind of what you have to, then this no longer really makes sense. Cass: This also sounds like it gets closer to the dream of the pure software-defined radio, which is you take basically an idea where you take your CPU, you connect one pin to an antenna, and then almost from DC to daylight, you’re able to handle everything in software-defined functions. Moore: That’s right. That’s right. Digital can take advantage of Moore’s Law. Moore’s Law is continuing. It’s slowing, but it’s continuing. And so that’s just sort of how things have been creeping along. And now it’s finally getting kind of to the edge, to that first amplifier. 
So anyways, he was kind of apprehensive about giving this talk because it is poo-pooing on quite a lot of things actually at this conference. So he told me he was actually pretty nervous about it. But it had some interest. I mean, there were some engineers from Apple and others that approached him that said, “Yeah, this kind of makes sense. And maybe we’ll take a look at this.” Cass: So fascinating. So it appears to be solving these bottlenecks and linear amplifier efficiencies of bottleneck. But there was another bottleneck that you mentioned, which is the memory wall. Moore: Yes. Cass: It’s a memory wall. Moore: Right. So the memory wall is this sort of longstanding issue in computing. Particularly, it started off in high-performance computing, but it’s kind of in all computing now, where the amount of time and energy needed to move a bit from memory to the CPU or the GPU is so much bigger than the amount of time and energy needed to move a bit from one part of the GPU or CPU to another part of the GPU or CPU, staying on the silicon, essentially. Cass: Going off silicon has a penalty. Moore: That’s a huge penalty. Cass: And this is why, in traditional CPUs, you have these like caches, L1. You hear these words, L1 cache, L2 cache, L3 cache. But this goes much further. What you’re talking about is much further than just having a little blob of memory near the CPU. Moore: Yes, yes. So the general memory wall is this problem. And people have been trying to solve this in all kinds of ways. And you just sort of see it in the latest NVIDIA GPUs basically has all of its DRAM is right on the same— is on like a silicon interposer with the GPU. They couldn’t be connected any more closely. You see it in that giant chip. If you remember, Cerebras has a wafer size chip. It’s as big as your face. And that is— Cass: Oh, that sounds an incredible chip. And we’ll definitely put the link to that in the show notes for this because there’s a great picture. 
It has to be kind of seen to be believed, I think. There’s a great picture of this monster, monster thing. But sorry. Moore: Yeah, and that is an extreme solution to the memory wall problem. But there’s all sorts of other cool research in this. And one of the best is sort of to bring the compute to the memory so that your bits just don’t have to move very far. There’s a bunch of different— well, a whole mess of different ways to do this. There were like nine talks or something on this when I was there, and there are even very cool ways that we’ve written about in Spectrum, where you can actually do you can do sort of AI calculations in memory using analog, where the-- Cass: Oh, so now we’re back to analog! Let’s creep it back in. Moore: Yeah, no, it’s cool. I mean, it’s cool that sort of coincidentally, the multiply and accumulate task, which is sort of the fundamental crux of all the matrix stuff that runs AI you can do in basically Ohm’s Law and Kirchhoff’s Law. They just kind of dovetail into this wonderful thing. But it’s very fiddly. Trying to do anything in analog is always [crosstalk]. Cass: So before digital computers, like right up into the ‘70s, analog computers were actually quite competitive, whereby you set up your problem using operational amplifiers, which is why they’re called operational amplifiers. Op amps are called op amps. And you set it all your equation all up, and then you produce results. And this is basically like taking one of those analog operations where the behavior of the components models a particular mathematical equation. And you’re taking a little bit of analog computing, and you’re putting it in because it matches with one particular calculation that’s used in AI. Moore: Exactly, yeah, yeah. So it’s a very fruitful field, and people are still chugging along at it. I met a guy at ISSCC. His name is Evangelos Eleftheriou. 
He is the CTO of a company called Axelera, and he is a veteran of one of these projects that was doing analog AI at IBM. And he came to the conclusion that it was just not ready for prime time. So instead, he found himself a digital way of doing the AI compute in memory. And it hinges on basically interleaving the compute so tightly with the cache memory that they’re kind of a part of each other. That required, of course, coming up with a sort of new kind of SRAM, which he was very hush-hush about, and also kind of doing things in integer math instead of floating point math. Most of what you see in the AI world, like NVIDIA and stuff like that, their primary calculations are in floating point numbers. Now, those floating point numbers are getting smaller and smaller. They’re doing more and more in just 8-bit floating point, but it’s still floating point. This depends on integers instead just because of the architecture depends on it. Cass: Yeah, no, I like integer math, actually, because I do a lot of this retrocomputing. A lot of that is in this where you actually end up doing a lot of integer math. And the truth is that you realize, oh, the Forth programming language also is famously very [integer]-based. And for a lot of real-world problems, you can find a perfectly acceptable scale factor that lets you use integers with no appreciable difference in precision. Floating points are kind of more general purpose. But this really had some impressive trade-offs in the benchmarks. Moore: Yeah, whatever they managed, despite any trade-offs they might have had to make for the math, they actually did very well. Now this is for— their aim is what’s called an edge computer. So it’s the kind of thing that would be running a bunch of cameras in sort of a traffic management situation or things like that. It was very machine-vision-oriented, but it’s like a computer or a card that you’d stick into a server that’s going to sit on-premises and do its thing. 
And when they ran a typical machine vision benchmark, they were able to do 2,500 frames per second. So that’s a lot of cameras potentially, especially when you consider most of these cameras are like— they’re not going 240. Cass: Even if you take it at a standard frame rate of, say, 20 frames per frame per second, that’s 100 cameras that you’re processing simultaneously. Moore: Yeah, yeah. And they were able to actually do this at like 353 frames per watt, which is a very good figure. And it’s performance per watt that really is kind of driving everything at the edge. If you ever want this sort of thing to go in a car or any kind of moving vehicle, everybody’s counting the watts. So that’s the thing. Anyways, I would really look, keep my eyes out for them. They are taping out this year. Should have some silicon later. Could be very cool. Cass: So speaking of that, getting into the chips and making differences, you can make changes sort of on the plane of the chips. But you and I have found some interesting stuff on 3D chip technology, which I know has been a thread of your coverage in recent years. Moore: Yeah, I’m all about the 3D chip technology. You’re finding 3D chip technology all the time pretty much in advanced processors. If you look at what Intel’s doing with its AI accelerators for supercomputers, if you look at what AMD is doing for basically all of its stuff now, they’re really taking advantage of being able to stack one chip on top of another. And this is, again, Moore’s Law slowing down, not getting as much in the two-dimensional shrinking as we used to. And we really can’t expect to get that much. And so if you want more transistors per square millimeter, which really is how you get more compute, you’ve got to start putting one slice of silicon on top of the other slice of silicon. Cass: So as we’re getting towards—instead of transistors per square millimeter, it’s going to be per cubic millimeter in the future. Moore: You could measure it that way. 
Thankfully, these things are so slim and sort of— Cass: Right. So it looks like a— Moore: Yeah, it looks basically the same form factor as a regular chip. So this 3D tech is powered by the most advanced part anyways is powered by something called hybrid bonding, which I’m afraid I have failed to understand where the word hybrid comes in at all. But really it is kind of making a cold weld between the copper pads on top of one chip and the copper pads on another one. Cass: Just explain what a cold well is because I have heard about a cold well is, but actually, when it comes to— it’s a problem when you’re building things in outer space. Moore: Oh, oh, that. Exactly that. So how it works here is— so picture you build your transistors on the plane of the silicon and then you’ve got layer upon layer of interconnects. And those terminate in a set of sort of pads at the top, okay? You’ve got the same thing on your other chip. And what you do is you put them face-to-face, and there’s going to be like a little bit of gap between the copper on one and the copper on the other, but the insulation around them will just stick together. Then you heat them up just a little bit and the copper expands and just kind of jams itself together and sticks. Cass: Oh, it’s almost like brazing, actually. Moore: I’ll take your word for it. I genuinely don’t know what that is. Cass: I could be wrong. I’m sure a nice metallurgist out there will correct me. But yes, but I see what you’re being with the magnet. You just need a little bit of whoosh. And then everything kind of sticks together. You don’t have to go into your soldering iron and do the heavy— Moore: There’s no solder involved. And that is actually really, really key because it means almost like an order of magnitude increase in the density you can have these connections. We’re talking about like having one connection every few microns. So that adds up to like 200,000 connections per square millimeter if my math is right. 
It’s actually quite a lot. And it’s really enough to make the distances from one part of one piece of silicon to a part of another the same as if they were all just built on one piece of silicon. It’s like Cerebras did it all big in two dimensions. This is folding it up and getting essentially the same kind of connectivity, the same low energy per bit, the same low latency per bit. Cass: And this is where Meta came in. Moore: Yeah. So Meta has been showing up at this conference and other conferences. I’ve noticed them on panels sort of talking about what they would want from chip technology for the ideal pair of augmented reality glasses. The point of the talk they gave today was that you really just don’t want a shoebox walking around on your face. That’s just not how— Cass: That sounds like a very pointed jab at the moment, perhaps. Moore: Right, it does. Anyways, it turns out what they want is 3D technology, because it allows them to pack more silicon performance into an area that might actually fit into something that looks like a pair of glasses that you might actually want to wear. And again, by not flinging the bits around so far, it would probably reduce the power consumption of said chip, which is very important because you don’t want it to be really hot. You don’t want a really hot shoebox on your face. And you want it to last a long time, so you don’t have to keep charging it. So what they showed for the first time, as far as I can tell, is sort of the silicon that they’ve been working on for this. This is a custom machine learning chip. It’s meant to do the kind of neural network stuff that you just absolutely need for augmented reality. And what they had was a roughly four millimeter by four millimeter chip that’s actually made up of two chips that are hybrid bonded together.
Cass: And you need this stuff because you need the chip to be able to do all this computer vision processing, to process what’s going on in the environment and reduce it to some sort of semantic stuff that you can overlay things on. This is why machine learning is so important to these applications, or AI in general. Moore: Exactly, yeah. And you need that AI to be right there in your glasses, as opposed to out in the cloud or even in a nearby server. Anything other than actually in the device is going to give you too much latency. Anyway, so this chip was actually two 3D stacked chips. And what was very cool about this is they really made the 3D point, because they had a version that was just 2D, like they had half of it. They tested the combined one, and they tested the half one. And the 3D stacked one was amazingly better. It wasn’t just twice as good. Basically, in their test, they tracked two hands, which is very important, obviously, for augmented reality. It has to know where your hands are. So that was the thing they tested. The 3D chip was able to track two hands, and it used less energy than the ordinary 2D chip did when it was only tracking one hand. So 3D is a win for Meta clearly. We’ll see what the final product is like and whether anybody actually wants to use it. But it’s clear that this is the technology that’s going to get them there, if they’re ever going to get there. Cass: So jumping to another track, you mentioned security at the top. And I love security, because there seems to be no limit to how paranoid you can be and yet still not always be able to keep up with the real world. Spectrum has had long coverage of the history of electronic intelligence spying.
We had this great piece on how the Russians spied on American typewriters by embedding circuitry directly into the covers of the typewriters. It’s a crazy story. But you entered the chip security track, and I’m really eager to hear about the crazy ideas you heard there— or, as it turns out, not-so-crazy ideas. Moore: Right. You’re not paranoid if they’re really out to get you. So yeah, this was some real Mission Impossible stuff. I mean, you could kind of envision Ving Rhames and Simon Pegg hunched over a circuit board while Tom Cruise was running in the background. It was very cool. So I want to start with that vision of somebody hunched over a circuit board that they’ve stolen, and they’re trying to crack an encryption code or whatever, and they’ve got a little probe on one exposed piece of copper. A group at Columbia and Intel came up with countermeasures for that. They invented a circuit that would reside basically on each pin going out of a processor, or you could have it on the memory side if you wanted, that can actually detect even the most advanced probe. So when you touch these probes to the line, there’s a very, very slight change in capacitance. I mean, if you’re using a really high-end probe, it’s very, very slight. Larger probes, it’s huge. [laughter] You never think that the CPU is actually paying attention when you’re doing this. With this circuit, it could. It will know that there’s a probe on a line, and it can take countermeasures like, “Oh, I’m just going to scramble everything. You’re never going to find any secrets from this.” So the countermeasures, what it triggers, they left up to you. But the circuit was very cool, because now your CPU can know when someone’s trying to hack it. Cass: My CPU always knows I’m trying to hack it. It’s evil. But yes, I’m just trying to debug it, not anything else. But that’s actually pretty cool.
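(A toy model of the probe-detection idea described above: the guard circuit knows the expected capacitance of its line and flags any excess; what it does then is left to the designer. All values and names here are illustrative, not the actual Columbia/Intel design.)

```python
# Hypothetical sketch of on-pin probe detection via capacitance sensing.
BASELINE_CAP_FF = 500.0   # nominal parasitic line capacitance, femtofarads
THRESHOLD_FF = 5.0        # smallest excess the detector can resolve

def probe_detected(measured_cap_ff: float) -> bool:
    """A probe adds capacitance; flag any rise beyond the threshold."""
    return (measured_cap_ff - BASELINE_CAP_FF) > THRESHOLD_FF

def respond(measured_cap_ff: float, data: bytes) -> bytes:
    """Countermeasure is up to you; here we just scramble the traffic."""
    if probe_detected(measured_cap_ff):
        return bytes(b ^ 0xFF for b in data)   # garble everything
    return data
```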
And then there was another one where, again, you were going after another kind of probe. The University of Texas at Austin was doing this thing where it could go after even non-physical probes, I think. Moore: So you don’t always have to touch things. You can use the electromagnetic emissions from a chip in what’s called a side-channel attack. Just the sort of changes in the emissions from the chip when it’s doing particular things can leak information. So what the UT Austin team did was basically take the circuitry that does the encryption, the sort of key encryption circuitry, and modify it in a way so that its signature was just sort of a blur. And it still worked well. It did its job in a timely manner and stuff like that. But if you hold your EM sniffer up to it, you’re never going to figure out what the encryption key is. Cass: But I think you said you had one that was your absolute favorite. Moore: Yes. It’s totally my favorite. I mean, come on. How could I not like this? They invented a circuit that self-destructs. I’ve got to tell you what the circuit is first, because this is also cool. Cass: This is a different group. Moore: This is a group at the University of Vermont and Marvell Technology. And what they came up with was a physical unclonable function circuit that— Cass: You’re going to have to unpack that. Moore: Yeah, let me start with that. A physical unclonable function is basically this: there are always going to be very, very slight differences in each device on a chip, such that if you were to sort of measure those differences, every chip would be different. Every chip would have its own unique fingerprint. So people have invented these physical unclonable function circuits. And they work great in some ways, but they’re actually very hard to make consistent.
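(A toy illustration of the physical-unclonable-function idea just described: per-device manufacturing mismatch, stood in for here by a seeded random generator, yields a fingerprint unique to each chip. Entirely illustrative, not the Vermont/Marvell design.)

```python
import random

def puf_fingerprint(chip_seed: int, n_bits: int = 64) -> int:
    """Derive an n-bit fingerprint from a chip's device mismatch.

    The seeded RNG stands in for fixed manufacturing variation: each
    "cell" has a slight mismatch, and its sign contributes one bit.
    """
    rng = random.Random(chip_seed)
    bits = 0
    for _ in range(n_bits):
        mismatch = rng.uniform(-1.0, 1.0)
        bits = (bits << 1) | (1 if mismatch > 0 else 0)
    return bits

# The same chip reproduces its own fingerprint; two different chips give
# different ones. The hard part in real silicon, per the episode, is
# keeping that reproducibility across temperature and aging.
```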
You don’t want to use this chip fingerprint as your security key if that fingerprint changes with temperature or as the chip ages. [laughter] So those are problems that different groups have come up with different solutions for. The Vermont group had their own solution. It was cool. But what I loved the most was that if the key is compromised or in danger of being compromised, for instance if somebody’s got a probe on it, [laughter] the circuit will actually destroy itself, literally destroy itself. Not in a sparks and smoke kind of way. Cass: Boo. Moore: I know. But at the micro level, it’s kind of like that. Basically, they just jam the voltage up so high that there’s enough current in the long lines that copper atoms will actually be blown out of position. It will literally create voids and open circuits. At the same time, the voltage is again so high that the insulation in the transistors will start to get compromised, which is an ordinary aging effect, but they’re accelerating it greatly. And so you wind up basically with gobbledygook. Your fingerprint is gone. You could never counterfeit this chip. You couldn’t say, “Well, I got this,” because it’ll have a different fingerprint. It won’t register as the same chip. Cass: So not only will it not work, but it’s not like blowing fuses. There are memory protection systems where, because you don’t want someone downloading your firmware, you send a little pulse through that blows a fuse. But if you really want to, you could crack that chip open. You could decap it and see what’s going on. This is scorched earth internally. Moore: Right, right. At least the part that makes the physical unclonable function, that is essentially destroyed. And so if you encounter that chip and it doesn’t have the right fingerprint, which it won’t, you know it’s been compromised. Cass: Wow.
Well, that is fascinating and very cool. But I’m afraid that’s all we have time for today. So thanks so much for coming on and talking about IISSCC. Moore: ISSCC. Oh, yeah. Thanks, Stephen. It was a great time. Cass: So today on Fixing the Future, we were talking with Samuel K. Moore about the latest developments in semiconductor technology. For IEEE Spectrum’s Fixing the Future, I’m Stephen Cass, and I hope you’ll join us next time.</itunes:subtitle><itunes:summary>Stephen Cass: Hello and welcome to Fixing the Future, an IEEE Spectrum podcast where we look at concrete solutions to some big problems. I’m your host Stephen Cass, a senior editor at IEEE Spectrum. And before we start, I just want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. Today we’re going to be talking with Samuel K. Moore, who follows the semiconductor beat for us like a charge carrier in an electric field. Sam, welcome to the show. Samuel K. Moore: Thanks, Stephen. Good to be here. Cass: Sam, you recently attended the big kahuna conference of the semiconductor research world, ISSCC. What exactly is that, and why is it so important? Moore: Well, besides being a difficult-to-say acronym, it actually stands for the IEEE International Solid-State Circuits Conference. And this is really one of the big three of the semiconductor research world. It’s been going on for more than 70 years, which means it’s technically older than the IEEE in some ways. We’re not going to get into that. And it really is sort of the crème de la crème if you are doing circuits research. So there is another conference for inventing new kinds of transistors and other sorts of devices. This is the conference that’s about the circuits you can make from them. And as such, it’s got all kinds of cool stuff.
I mean, we’re talking about like 200 or so talks about processors, memories, radio circuits, power circuits, brain-computer interfaces. There’s really kind of something for everybody. Cass: So when you’re there, we send you to this monster thing and ask you to fish out the highlights. And let’s be honest, they’re not all going to be gangbusters. What were the ones that really caught your eye? Moore: All right. So I’m going to tell you actually about a few things. First off, there’s a potential revolution in analog circuits that’s brewing. We just saw the beginnings of it. There’s a cool upcoming chip that does AI super efficiently by mixing its memory and computing resources. We had a peek at Meta’s future AR glasses, or the chip for them anyways. And finally, there was a bunch of very cool security stuff, including a circuit that self-destructs. Cass: Oh, that sounds cool. Well, let’s start off with the analog stuff, because you were saying this is really a way of kind of almost saying bye-bye to some analog electronics. So this is fascinating. Moore: Yeah. So this really kind of kicked the conference off with a bang, because it was one of the plenary sessions. It was literally one of the first things that was said. And it had to come from the right person, and it kind of did. It was IEEE Fellow and sort of analog establishment figure Bram Nauta, from the Netherlands. And it was a real “we’re doing it all wrong” kind of moment, but it was important because the stakes are pretty high. Basically, Moore’s Law has been really good for digital circuits, the stuff that you use to make the processing parts of CPUs, and in its own way for memory, but not so much for analog. Basically, you kind of look down the road and you are really not getting any better transistors and processes for analog going forward. And you’re starting to see this in places, even in high-end processors, in the parts that kind of do the I/O. They’re just not advancing.
They’re using super cutting-edge processes for the compute part and using the same I/O chiplet for like four or five generations. Cass: So this is for when you’re trying to sense things from the outside world. Like your smartphone: it needs these converters to digitize your voice, but also to handle the radio signal and so on. Moore: Exactly. Exactly. As they say, the world is analog. You have to make it digital to do the computing on it. So what you’re saying about a radio circuit is actually a great example, because you’ve got the antenna, and then you have to amplify, you have to mix in the carrier signal and stuff, but you have to amplify it. You have to amplify it really nicely, quite linearly and everything like that. And then you feed it to your analog-to-digital converter. What Nauta is pointing out is that we’re not really going to get any better with this amplifier. It’s going to continue to burn tens or hundreds of times more power than any of the digital circuits. And so his idea is, let’s get rid of it. No more linear amplifiers. Forget it. Instead, what he’s proposing is that we invent an analog-to-digital converter that doesn’t need one. So literally— Cass: Well, why haven’t we done this before? It sounds very obvious. You don’t like a component, you throw it out. But obviously, it was doing something. And how do you make up that difference with a pure analog-to-digital converter? Moore: Well, I can’t tell you completely how it’s done, especially because he’s still working on it. But his math basically checks out. And this is really a question of Moore’s Law. It’s not so much, “Well, what are we doing now?” It’s, “What can we do in the future?” If we can’t get any better with our analog parts in the future, let’s make everything out of digital, digitize immediately. And let’s not worry about any of the amplification part. Cass: But is there some kind of trade-off being made here? Moore: There is.
So right now, you’ve got your linear amplifier consuming milliwatts, and your analog-to-digital converter, which is a thing that can take advantage of Moore’s Law going forward because it’s mostly just comparators and capacitors and stuff that you can deal with. And that consumes only microwatts. So what he’s saying is, “We’ll make the analog-to-digital converter a little bit worse. It’s going to consume a little more power. But the overall system is going to consume less if you take the whole system as a piece.” And that has been part of the problem: the figures of merit, the things by which you measure how good your linear amplifier is, are really just about the linear amplifier, rather than worrying about, “Well, what’s the whole system consuming?” And it looks like, if you care about the whole system, which is kind of what you have to do, then this no longer really makes sense. Cass: This also sounds like it gets closer to the dream of the pure software-defined radio, which is an idea where you basically take your CPU, you connect one pin to an antenna, and then, almost from DC to daylight, you’re able to handle everything in software-defined functions. Moore: That’s right. That’s right. Digital can take advantage of Moore’s Law. Moore’s Law is continuing. It’s slowing, but it’s continuing. And so that’s just sort of how things have been creeping along. And now it’s finally getting kind of to the edge, to that first amplifier. So anyways, he was kind of apprehensive about giving this talk, because it is pooh-poohing quite a lot of things actually at this conference. He told me he was actually pretty nervous about it. But it had some interest. I mean, there were some engineers from Apple and others who approached him and said, “Yeah, this kind of makes sense. And maybe we’ll take a look at this.” Cass: Fascinating. So it appears to be solving the linear-amplifier efficiency bottleneck.
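(A sketch of the system-level power argument just described. The orders of magnitude follow the episode, a milliwatt-class amplifier feeding a microwatt-class converter, but the specific numbers are illustrative, not Nauta’s figures.)

```python
# Power budget for a conventional receive chain vs. an amplifierless one.
def conventional_chain_mw(amp_mw: float = 10.0, adc_uw: float = 100.0) -> float:
    """Linear amplifier (milliwatts) plus a modest ADC (microwatts)."""
    return amp_mw + adc_uw / 1000.0

def amplifierless_chain_mw(beefier_adc_uw: float = 1000.0) -> float:
    """ADC made 'a little bit worse' (10x the power here) but no amp at all."""
    return beefier_adc_uw / 1000.0

# Even a 10x hungrier converter wins once the milliwatt amplifier is gone,
# which is why a system-level figure of merit tells a different story than
# an amplifier-only one.
```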
But there was another bottleneck that you mentioned, which is the memory wall. Moore: Yes. Cass: The memory wall. Moore: Right. So the memory wall is this sort of longstanding issue in computing. It started off in high-performance computing particularly, but it’s kind of in all computing now, where the amount of time and energy needed to move a bit from memory to the CPU or the GPU is so much bigger than the amount of time and energy needed to move a bit from one part of the GPU or CPU to another part, staying on the silicon, essentially. Cass: Going off silicon has a penalty. Moore: That’s a huge penalty. Cass: And this is why, in traditional CPUs, you have these caches. You hear these words: L1 cache, L2 cache, L3 cache. But this goes much further. What you’re talking about is much further than just having a little blob of memory near the CPU. Moore: Yes, yes. So the general memory wall is this problem. And people have been trying to solve this in all kinds of ways. You just sort of see it in the latest NVIDIA GPUs: basically all of the DRAM is right on a silicon interposer with the GPU. They couldn’t be connected any more closely. You see it in that giant chip. If you remember, Cerebras has a wafer-size chip. It’s as big as your face. And that is— Cass: Oh, that is an incredible chip. And we’ll definitely put the link to that in the show notes, because there’s a great picture. It has to be kind of seen to be believed, I think. There’s a great picture of this monster, monster thing. But sorry. Moore: Yeah, and that is an extreme solution to the memory wall problem. But there’s all sorts of other cool research in this. And one of the best is to bring the compute to the memory, so that your bits just don’t have to move very far. There’s a whole mess of different ways to do this.
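(The memory wall in one toy calculation: moving a bit off-chip costs far more energy than moving it on-chip. The per-bit energies below are illustrative ballpark figures, not numbers from the talk.)

```python
# Illustrative energy model for the memory wall.
ON_CHIP_PJ_PER_BIT = 0.1    # staying on the silicon, picojoules/bit
OFF_CHIP_PJ_PER_BIT = 20.0  # out to external DRAM and back, picojoules/bit

def transfer_energy_pj(n_bits: int, off_chip: bool) -> float:
    """Energy to move n_bits, depending on whether the trip leaves the die."""
    per_bit = OFF_CHIP_PJ_PER_BIT if off_chip else ON_CHIP_PJ_PER_BIT
    return n_bits * per_bit

# With these ballpark numbers, every off-chip byte costs two orders of
# magnitude more energy than an on-chip one, which is why interposers,
# wafer-scale chips, and compute-in-memory all try to shorten the trip.
```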
There were like nine talks or something on this when I was there. And there are even very cool ways that we’ve written about in Spectrum where you can actually do sort of AI calculations in memory using analog, where the— Cass: Oh, so now we’re back to analog! Let’s creep it back in. Moore: Yeah, no, it’s cool. It’s cool that, sort of coincidentally, the multiply-and-accumulate task, which is sort of the fundamental crux of all the matrix stuff that runs AI, you can do in basically Ohm’s Law and Kirchhoff’s Law. They just kind of dovetail into this wonderful thing. But it’s very fiddly. Trying to do anything in analog is always [crosstalk]. Cass: So before digital computers, right up into the ‘70s, analog computers were actually quite competitive, whereby you set up your problem using operational amplifiers, which is why op amps are called operational amplifiers. You set your equation all up, and then it produces results. And this is basically like taking one of those analog operations, where the behavior of the components models a particular mathematical equation, and putting in a little bit of analog computing because it matches one particular calculation that’s used in AI. Moore: Exactly, yeah, yeah. So it’s a very fruitful field, and people are still chugging along at it. I met a guy at ISSCC. His name is Evangelos Eleftheriou. He is the CTO of a company called Axelera, and he is a veteran of one of these projects that was doing analog AI at IBM. And he came to the conclusion that it was just not ready for prime time. So instead, he found himself a digital way of doing the AI compute in memory. And it hinges on basically interleaving the compute so tightly with the cache memory that they’re kind of a part of each other.
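(The Ohm’s-Law-plus-Kirchhoff’s-Law trick mentioned above, written out in plain Python: Ohm’s Law does each multiply, I = G × V, and Kirchhoff’s current law does the accumulate, since currents on a shared wire sum. A conceptual sketch, not any particular chip’s design.)

```python
# Analog in-memory multiply-accumulate, modeled digitally.
def analog_mac(conductances: list[float], voltages: list[float]) -> float:
    """Column current = sum of G_i * V_i, i.e. a dot product.

    In an analog array, the weights are stored as conductances and the
    activations applied as voltages; the physics does the math.
    """
    assert len(conductances) == len(voltages)
    return sum(g * v for g, v in zip(conductances, voltages))

# analog_mac([1.0, 2.0, 3.0], [0.5, 0.5, 0.5]) -> 3.0
```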
That required, of course, coming up with a sort of new kind of SRAM, which he was very hush-hush about, and also kind of doing things in integer math instead of floating point math. Most of what you see in the AI world, like NVIDIA and stuff like that, their primary calculations are in floating point numbers. Now, those floating point numbers are getting smaller and smaller. They’re doing more and more in just 8-bit floating point, but it’s still floating point. This one depends on integers instead, just because the architecture depends on it. Cass: Yeah, I like integer math, actually, because I do a lot of this retrocomputing, where you actually end up doing a lot of integer math. The Forth programming language is also famously very integer-based. And for a lot of real-world problems, you can find a perfectly acceptable scale factor that lets you use integers with no appreciable difference in precision. Floating point is kind of more general purpose. But this really had some impressive trade-offs in the benchmarks. Moore: Yeah, whatever trade-offs they might have had to make for the math, they actually did very well. Now, their aim is what’s called an edge computer. So it’s the kind of thing that would be running a bunch of cameras in sort of a traffic management situation or things like that. It was very machine-vision-oriented, but it’s like a computer or a card that you’d stick into a server that’s going to sit on-premises and do its thing.
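(The scale-factor trick described above, sketched as minimal fixed-point arithmetic: pick a scale so that real-world values fit in integers with acceptable precision. An illustrative 8-fractional-bit scheme, not Axelera’s actual number format.)

```python
# Minimal fixed-point arithmetic with an integer scale factor.
SCALE = 256  # 8 fractional bits

def to_fixed(x: float) -> int:
    """Quantize a real value to a scaled integer."""
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    """Multiply two scaled values; the product carries SCALE**2, so renormalize."""
    return (a * b) // SCALE

def to_float(a: int) -> float:
    """Recover the approximate real value."""
    return a / SCALE

# to_float(fixed_mul(to_fixed(1.5), to_fixed(2.0))) -> 3.0
```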
And they were able to actually do this at like 353 frames per watt, which is a very good figure. And it’s performance per watt that really is kind of driving everything at the edge. If you ever want this sort of thing to go in a car or any kind of moving vehicle, everybody’s counting the watts. So that’s the thing. Anyways, I would really look, keep my eyes out for them. They are taping out this year. Should have some silicon later. Could be very cool. Cass: So speaking of that, getting into the chips and making differences, you can make changes sort of on the plane of the chips. But you and I have found some interesting stuff on 3D chip technology, which I know has been a thread of your coverage in recent years. Moore: Yeah, I’m all about the 3D chip technology. You’re finding 3D chip technology all the time pretty much in advanced processors. If you look at what Intel’s doing with its AI accelerators for supercomputers, if you look at what AMD is doing for basically all of its stuff now, they’re really taking advantage of being able to stack one chip on top of another. And this is, again, Moore’s Law slowing down, not getting as much in the two-dimensional shrinking as we used to. And we really can’t expect to get that much. And so if you want more transistors per square millimeter, which really is how you get more compute, you’ve got to start putting one slice of silicon on top of the other slice of silicon. Cass: So as we’re getting towards—instead of transistors per square millimeter, it’s going to be per cubic millimeter in the future. Moore: You could measure it that way. Thankfully, these things are so slim and sort of— Cass: Right. So it looks like a— Moore: Yeah, it looks basically the same form factor as a regular chip. So this 3D tech is powered by the most advanced part anyways is powered by something called hybrid bonding, which I’m afraid I have failed to understand where the word hybrid comes in at all. 
But really it is kind of making a cold weld between the copper pads on top of one chip and the copper pads on another one. Cass: Just explain what a cold well is because I have heard about a cold well is, but actually, when it comes to— it’s a problem when you’re building things in outer space. Moore: Oh, oh, that. Exactly that. So how it works here is— so picture you build your transistors on the plane of the silicon and then you’ve got layer upon layer of interconnects. And those terminate in a set of sort of pads at the top, okay? You’ve got the same thing on your other chip. And what you do is you put them face-to-face, and there’s going to be like a little bit of gap between the copper on one and the copper on the other, but the insulation around them will just stick together. Then you heat them up just a little bit and the copper expands and just kind of jams itself together and sticks. Cass: Oh, it’s almost like brazing, actually. Moore: I’ll take your word for it. I genuinely don’t know what that is. Cass: I could be wrong. I’m sure a nice metallurgist out there will correct me. But yes, but I see what you’re being with the magnet. You just need a little bit of whoosh. And then everything kind of sticks together. You don’t have to go into your soldering iron and do the heavy— Moore: There’s no solder involved. And that is actually really, really key because it means almost like an order of magnitude increase in the density you can have these connections. We’re talking about like having one connection every few microns. So that adds up to like 200,000 connections per square millimeter if my math is right. It’s actually quite a lot. And it’s really enough to make the distances between from one part of one piece of silicon to one part of another. The same kind of as if they were all just built on one piece of silicon. It’s like Cerebras did it all big in two dimensions. 
This is folding it up and getting essentially the same kind of connectivity, the same low energy per bit, the same low latency per bit. Cass: And this is where Meta came in. Moore: Yeah. So Meta has been showing up at this conference and other conferences sort of. I’ve noticed them on panels sort of talking about what they would want from chip technology for the ideal pair of augmented reality glasses. The talk they gave today was like— the point was you really just don’t want a shoebox walking around on your face. That’s just not how— Cass: That sounds like a very pointed jab at the moment, perhaps. Moore: Right, it does. Anyways, it turns out what they want is 3D technology because it allows them to pack in more performance, more silicon performance in an area that might actually fit into something that looks like a pair of glasses that you might actually want to wear. And again, flinging the bits around, it would probably reduce the power consumption of said chip, which is very important because you don’t want it to be really hot. You don’t want a really hot shoebox on your face. And you want it to last a long time. You don’t have to keep charging it. So what they showed for the first time, as far as I can tell, is sort of the silicon that they’ve been working on for this. This is a custom machine learning chip. It’s meant to do the kind of neural network stuff that you just absolutely need for augmented reality. And what they had was a four millimeter by four millimeter roughly chip that’s actually made up of two chips that are hybrid bonded together. Cass: And you need this stuff because you need the chip to be able to do all this computer vision processing to process what’s going on in the environment and reduce some sort of semantic stuff that you can overlay things on. This is why learning is so, so important. Machine learning is so important to these applications or AI in general. Yeah. Moore: Exactly, yeah. 
And you need that AI to be right there in your glasses as opposed to out in the cloud or even in a nearby server. Anything other than actually in the device is not going to give you enough latency and such, or it’s going to give you too much latency, excuse me. Anyway, so this chip was actually two 3D stacked chips. And what was very cool about this is they really made the 3D point because they had a version that was just the 2D, just like they had half of it. They tested the combined one, and they tested the half one. So the 3D stacked one was amazingly better. It wasn’t just twice as good. Basically, in their test, they tracked two hands, which is very important, obviously, for augmented reality. It has to know where your hands are. So that was the thing they tested. So the 3D chip was able to track two hands, and it used less energy than the ordinary 2D chip did when it was only tracking one hand. So 3D is a win for Meta clearly. We’ll see what the final project is like and whether anybody actually wants to use it. But it’s clear that this is the technology that’s going to get them there if they’re ever going to get there. Cass: So jumping to another track, you talked about you mentioned security at the top. And I love the security because there seems to be no limit to how paranoid you can be and yet still not always be able to keep up with the real world. Spectrum has had a long coverage of the history of electronic intelligence spying. We had this great piece on the Russian typewriter and how the Russians spied on American typewriters by putting this embedding circuitry directly into the covers of the typewriters. It’s a crazy story, but you entered the chip security track. And as I’m really eager to hear about the crazy ideas you heard there— or as it turns out, not so crazy ideas. Moore: Right. You’re not paranoid if they’re really trying to— they’re really out to get to you. So yeah, no, this was some real Mission Impossible stuff. 
I mean, you could kind of envision Ving Rhames and Simon Pegg hunched over a circuit board while Tom Cruise was running in the background. It was very cool. So I want to start with that vision of like somebody hunched over a circuit board that they’ve stolen and they’re trying to crack an encryption code or whatever and they’ve got a little probe on one exposed piece of copper. A group at Columbia and Intel came up with countermeasures for that. They invented a circuit that would reside basically on each pin going out of a processor, or you could have it on the memory side if you wanted. That can actually detect even the most advanced probe. So when you touch these probes to the line, there’s like a very, very slight change in capacitance. I mean, if you’re using a really high-end probe, it’s very, very slight. Larger probes, it’s huge. [laughter] You never think that the CPU is actually paying attention when you’re doing this. With this circuit, it could. It will know that you are actuall— that there’s a probe on a line, and it can take countermeasures like, “Oh, I’m just going to scramble everything. You’re never going to find any secrets from this.” So again, the countermeasures, what it triggers, they left up to you. But the circuit was very cool because now your CPU can know when someone’s trying to hack it. Cass: My CPU always knows I’m trying to hack it. It’s evil. But yes, I’m just trying to debug it, not everything else. But that’s actually pretty cool. And then there was another one where, yeah, again, you were going after another— University of Austin, Texas, were also doing this thing where even non-physical probes, I think, it could go after. Moore: So you don’t have to— you don’t always have to touch things. You can use the electromagnetic emissions from a chip as sort of what’s called a side channel attack. So it just sort of changes in the emissions from the chip when it’s doing particular things can leak information. 
So what the UT Austin team did was basically they made the circuitry that kind of does the encryption, the sort of key encryption circuitry. They modified it in a way so that the signature was just sort of a blur. And it still worked well. It did its job in a timely manner and stuff like that. But if you hold your EM sniffer up to it, you’re never going to figure out what the encryption key is. Cass: But I think you said you had one that was your absolute favorite. Moore: Yes. It’s totally my favorite. I mean, come on. How could I not like this? They invented a circuit that self-destructs. I’ve got to tell you what the circuit is first because this is also cool and— Cass: This is a different group. Moore: This is a group at the University of Vermont and Marvell Technology. And what they came up with was a physical unclonable function circuit that— Cass: You’re going to have to unpack that. Moore: Yeah, let me start with that. A physical unclonable function is basically this: there are always going to be very, very slight differences in each device on a chip, such that if you were to measure those differences, every chip would be different. Every chip would have its own unique fingerprint. So these people have invented these physical unclonable function circuits. And they work great in some ways, but they’re actually very hard to make consistent. You don’t want to use this chip fingerprint as your security key if that fingerprint changes with temperature or as the chip ages. [laughter] So those are problems that different groups have come up with different solutions to. The Vermont group had their own solution. It was cool. But what I loved the most was that if the key is compromised or in danger of being compromised, for instance because somebody’s got a probe on it, [laughter] the circuit will actually destroy itself, literally destroy itself. Not in a sparks and smoke kind of way. Cass: Boo. Moore: I know. 
But at the micro level, it’s kind of like that. Basically, they just jam the voltage up so high that there’s enough current in the long lines that copper atoms will actually be blown out of position. It will literally create voids and open circuits. At the same time, the voltage is again so high that the insulation in the transistors will start to get compromised, which is an ordinary aging effect, but they’re accelerating it greatly. And so you wind up basically with gobbledygook. Your fingerprint is gone. You could never countermeasure— sorry, you could never counterfeit this chip. You couldn’t say, well, “I got this,” because it’ll have a different fingerprint. It’s definitely not like— it won’t register as the same chip. Cass: So not only will it not work, but it’s not like blowing fuses. There are memory protection systems where, because you don’t want someone downloading your firmware, you send a little pulse through that blows a fuse. But if you really want to, you could crack it open. You could decap that chip and see what’s going on. This is scorched earth internally. Moore: Right, right. At least the part that makes the physical unclonable function is essentially destroyed. And so if you encounter that chip and it doesn’t have the right fingerprint, which it won’t, you know it’s been compromised. Cass: Wow. Well, that is fascinating and very cool. But I’m afraid that’s all we have time for today. So thanks so much for coming on and talking about IISSCC. Moore: ISSCC. Oh, yeah. Thanks, Stephen. It was a great time. Cass: So today on Fixing the Future, we were talking with Samuel K. Moore about the latest developments in semiconductor technology. 
For IEEE Spectrum’s Fixing the Future, I’m Stephen Cass, and I hope you’ll join us next time.</itunes:summary><itunes:keywords>Integrated-circuits, Meta, Semiconductors, Type-podcast, Fixing-the-future</itunes:keywords></item><item><title>Lean Software, Power Electronics, and the Return of Optical Storage</title><link>https://spectrum.ieee.org/lean-software-power-electronics</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/fixing-the-future-podcast-logo.jpg?id=46469653&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/18a89222" width="100%">
</iframe><p><strong>Stephen Cass:</strong> Hi. I’m <a href="https://spectrum.ieee.org/u/stephen-cass" target="_self">Stephen Cass</a>, a senior editor at <em>IEEE Spectrum</em>. And welcome to <em>Fixing The Future</em>, our bi-weekly podcast that focuses on concrete solutions to hard problems. Before we start, I want to tell you that you can get the latest coverage from some of <em>Spectrum</em>’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe.</p><p>Today on <em>Fixing The Future</em>, we’re doing something a little different. Normally, we deep-dive into one topic, but that does mean that some really interesting things get left out of the podcast simply because they wouldn’t take up a whole episode. So here today to talk about some of those interesting things, I have <em>Spectrum</em>’s Editor in Chief <a href="https://spectrum.ieee.org/u/harry-goldstein" target="_self">Harry Goldstein</a>. Hi, boss. Welcome to the show.</p><p><strong>Harry Goldstein:</strong> Hi there, Stephen. Happy to be here.</p><p><strong>Cass:</strong> You look thrilled.</p><p><strong>Goldstein:</strong> I mean, I am thrilled. I’m always excited to talk about <em>Spectrum</em> stories.</p><p><strong>Cass:</strong> No, we’ve tied you down and made you agree to this, but I think it’ll be fun. So first up, I’d like to talk about this guest post we had from Bert Hubert, which seemed to really strike a chord with readers. It was called <a href="https://spectrum.ieee.org/lean-software-development" target="_self">Why Bloat Is Still Software’s Biggest Vulnerability</a>: A 2024 plea for lean software. Why do you think this one resonated with readers, and why is it so important?</p><p><strong>Goldstein:</strong> I think it resonated with readers because software is everywhere. It’s ubiquitous. The entire world is essentially run on software. 
A few days ago, even, there was a good example of the AT&T network going down, likely because of some kind of software misconfiguration. This happens constantly. In fact, software systems going down is kind of like bad weather. You just come to expect it, and we all live with it. But why we live with it, and why we’re forced to live with it, is something that people are interested in finding out more about, I guess.</p><p><strong>Cass:</strong> So I think, in the past, when we thought of giant bloated software, we associated it with large projects: these <a href="https://spectrum.ieee.org/who-killed-the-virtual-case-file" target="_self">big government projects</a>, these big airlines, big, big, big projects. And we’ve written about that a lot at <em>Spectrum</em> before, haven’t we?</p><p><strong>Goldstein:</strong> We certainly have. And <a href="https://spectrum.ieee.org/u/robert-n-charette" target="_self">Bob Charette</a>, our longtime contributing editor, who is actually the father of lean software development, back in the early ’90s took the Toyota Total Quality Management program and applied it to software development. And so it was pretty interesting to see Hubert’s piece on this more than 30 years later, where the problems have just proliferated. And think about your average car these days. It’s approaching a couple hundred million lines of code. A glitch in any of those could cause some kind of safety problem. Recalls are pretty common. I think Toyota had one a few months ago. So the problem is everywhere, and it’s just going to get worse.</p><p><strong>Cass:</strong> Yeah. One of the things that struck me was that Bert’s making the argument that you don’t actually need an army of programmers to create bloated software— to get all those millions of lines of code. You could be just writing code to open a garage door. This is a trivial program. 
Because of the way you’re writing it on frameworks, and those are pulling in dependencies and so on, you’re pulling in just millions of lines of other people’s code. You might not even know you’re doing it. And you kind of don’t notice unless, at the end of the day, you look at your final program file and you’re like, “Oh, why is that megabytes upon megabytes?”—which represents endless lines of source code. Why is that so big? Because this is how you do software. You just pull these things together. You glue stuff. You focus on the business logic because that’s your value add, but you’re not paying attention to this enormous sort of—I don’t know; what would you call it?—invisible dark matter that surrounds your software.</p><p><strong>Goldstein:</strong> Right. It’s kind of like dark matter. Yeah, that’s kind of true. I mean, it actually started making me think. All of these large language models that are being applied to software development. Co-piloting, I guess they call it, right, where the coder is sitting with an AI, trying to write better code. Do you think that might solve the problem or get us closer?</p><p><strong>Cass:</strong> No, because I think those systems, if you look at them, they reflect modern programming usage. And modern programming usage is often to use the frameworks that are available. It’s not about really getting in and writing something that’s a little bit leaner. Actually, I think the AIs—it’s not their fault—they just do what we do. And we write bloaty software. So I think that’s not going to get any better necessarily with this AI stuff, because the point of lean software is it does take extra time to make, and there are no incentives to make lean software. And Bert talks about, “Maybe we’re going to have to impose some of this legis— legislatively.”—I speak good. I editor. 
You hire wise.—But some of these things are going to have to be mandated through standards and regulations, and specifically through the lens of these cybersecurity requirements and knowing what’s going into your software. And that may help with all just getting a little bit leaner. But I did actually want to— another news story that came up this week was Apple closing down its EV division. And you mentioned Bob Charette there. And he wrote this great thing for us recently about why EV cars are one thing and <a href="https://spectrum.ieee.org/the-ev-transition-explained-2659602311" target="_self">EV infrastructure is an even bigger problem</a> and why EVs are proving to be really quite tough. And maybe the problem— again, it’s a dark matter problem, not so much the car at the center, but this sort of infrastructure— just talk a little bit about Bob’s book, which is, by the way, free to download, and we’ll have the link in the show notes.</p><p><strong>Goldstein:</strong> Everything you need to know about <a href="https://spectrum.ieee.org/collections/the-ev-transition-explained/" target="_self">the EV transition</a> can be yours for the low, low price of free. But, yeah. And I think we’re starting to see-- I mean, even if you mandate things, you’re going to-- you were talking about legislation to regulate software bloat.</p><p><strong>Cass:</strong> Well, it’s kind of indirect. If you want to have good security, then you’re going to have to do certain things. The White House just came out with this paper, I think yesterday or the day before, saying, “Okay, you need to start using memory-safe languages.” And it’s not quite saying, “You are forbidden from using C, and you must use Rust,” but it’s kind of close to that for certain applications. They exempted certain areas. But you can see, that is the government really coming in and, actually, what has often been a very personal decision of programmers, like, “What language do I use?” and, “I know how to use C. 
I know how to do garbage collection,” the government kind of saying, “Yeah, we don’t care how great a programmer you think you are. These languages lead to this class of bugs, and we’d really prefer if you used one of these memory-safe languages.” And that’s, I guess, a push into sort of the private lives of programmers that I think we’re going to see more of as time goes by.</p><p><strong>Goldstein:</strong> Oh, that’s interesting because the—I mean, where I was going with that connection to legislation is that—I think what Bob found in the EV transition is that the knowledge base of the people who are charged with making decisions about regulations is pretty small. They don’t really understand the technology. They certainly don’t understand the interdependencies, which are very similar to the software development processes you were just referring to. It’s very similar to the infrastructure for electric cars, because the idea, ultimately, for electric cars is that you also are revamping your grid to facilitate, whatchamacallit, intermittent renewable energy sources, like wind and solar, because having an electric car that runs off a coal-fired power plant is defeating the purpose, essentially. In fact, Ozzie Zehner wrote an article for us way back in the mid-teens about how <a href="https://spectrum.ieee.org/unclean-at-any-speed" target="_self">the dirty secret behind your electric car is the coal</a> that fuels it. And—</p><p><strong>Cass:</strong> Oh, that was <a href="https://spectrum.ieee.org/a-rebuttal-evs-are-clean-at-every-speed" target="_self">quite controversial</a>. Yeah. I think maybe because the cover was a car perched at the top of a giant mountain of coal. I think that—</p><p><strong>Goldstein:</strong> But it’s true. 
I mean, in China, they have one of the biggest electric car industries in the world, if not the biggest, and one of the biggest markets that has not been totally saturated by personal vehicles, and all their cars are going to be running on coal. And they’re the world’s largest emitter, ahead of the US. But just circling back to the legislative angle and the state of the electric vehicle industry-- well, actually, are we just getting way off topic with the electric vehicles?</p><p><strong>Cass:</strong> No, it is this idea of interdependence and these systems that are all coupled in all kinds of ways we don’t expect. And with that EV story— so last time I was home in Ireland, one of the stories was— they had bought this fleet of electric double-decker buses for Dublin, to replace the existing double-deckers and help Ireland hit its carbon targets. So this was an official government goal. They bought the buses, at great expense, and then <a href="https://www.rte.ie/news/primetime/2023/1107/1415097-dozens-of-electric-buses-not-in-use-due-to-lack-of-charge-points/" rel="noopener noreferrer" target="_blank">they couldn’t charge the buses</a> because they hadn’t done the planning permission to get the charging stations added into the bus depot. It was this staggering level of disconnect where, on one hand, the national government is very— “Yes, meeting our target goals. We’re getting these green buses in. Fantastic advance. Very proud of it,” la la la la, and you can’t plug the things in because the basic work on the ground, dealing with the local government to put in the charging stations, just hasn’t been done. All of these little disconnects add up. And the bigger, the more complex system you have, the more these things add up, which I think does come back to lean software. Because it’s not so much, “Okay. Yeah, your software is bloaty.” Okay, you don’t win the Turing Award. Boo-hoo. Okay. 
But the problem is that you are pulling in all of these dependencies that you just do not know, and you get all these places where things break— or the problem of libraries getting hijacked.</p><p>So we have to retain the capacity on some level— and this actually is a personal thing with me, is that I believe in <a href="https://www.foundsf.org/index.php?title=Birthplace_of_Personal_Computing" rel="noopener noreferrer" target="_blank">the concept of personal computing</a>. And this was the thing back in the 1970s when personal computers first came out: it was very explicitly part of the culture that you would free yourself from the utilities and the centralized systems, and you could have a computer on your desk that would let you do stuff, that you didn’t have to go through, at that stage, university administrators and paperwork. It was a personal computer revolution. It was very much front and center. And nowadays it’s kind of come back full circle, because now we’re increasingly finding things don’t work if they’re not network connected. So I believe it should be possible to have machines that operate independently, truly personal machines. I believe it should be possible to write software to do even complicated things without relying on network servers or vast downloads— or, again, the situation where you want it to run independently, okay, but you’ve got to download these <a href="https://spectrum.ieee.org/raspberry-pi-cluster-computer" target="_self">Docker</a> images that are 350 megabytes or something, because an entire operating system has to be bundled into them, because it is impossible to otherwise replicate the correct environment in which the software is running, which also undercuts the whole point of open source software. The point of open source is, if I don’t like something, I can change it. 
But if it’s so hard for me to change something because I have to replicate the exact environment and toolchains that people on a particular project are using, it really limits my ability to come in and maybe— maybe I just want to make some small changes, or I just want to modify something, or I want to pull it into my project. That I have to bring this whole trail of dependencies with me is really tough. Sorry, that’s my rant.</p><p><strong>Goldstein:</strong> Right. Yeah. Yeah. Actually, one of the things I learned the most about from the Hubert piece was Docker and the idea that you have to put your program in a container that carries with it an entire operating system or whatever. Can you tell me more about containers?</p><p><strong>Cass: </strong>Yeah. Yeah. Yeah. I mean, you can put whatever you want into a container, and some containers are very small. You can get very lean containers that are just basically the program and its install. But it basically replaces the old idea of installing software, where you’d— and that was a problem, because every time you installed a bit of software, it scarred your system in some way. There was always scar tissue, because it made changes. It nestled in. If nothing else, it put files onto your disk. And so over time, one of the problems was that this meant your computer would accumulate random files. It was very hard to really uninstall something completely, because it’d always put little hooks in and would register itself in different places in the operating system— again, because now it’s interoperating with a whole bunch of stuff. Programs are not completely standalone. At the very least, they’re talking to an operating system. You want them to talk nicely to other programs in the operating system. And this led to all these kinds of direct-install problems.</p><p>And so the idea was, “Oh, we will sandbox this out. 
We’ll have these little Docker images, basically, to do it,” but that does give you the freedom to build these huge images, which are essentially entire virtual machines. So, again, it relieves you of having to figure out your install and your configuration, which is one thing he was talking about. When you had to do these installers, it did really make you clarify your thinking very sharply on configuration and so on. So again, containers are great. All these cloud technologies, being able to use libraries, being able to automatically pull in dependencies, they’re all terrific in moderation. They all solve very real problems. I don’t want to be a Luddite and go, “We should go back to <a href="https://users.cs.utah.edu/~elb/folklore/mel.html" rel="noopener noreferrer" target="_blank">writing assembler code as God intended</a>.” That’s not what I’m saying, but it does sometimes enable bad habits. It can incentivize bad habits. And you have to really think very deliberately about how to combat those problems as they pop up.</p><p><strong>Goldstein:</strong> But from the beginning, right? I mean, it seems to me like you have to commit to a lean methodology at the start of any project. It’s not something that the AI is going to come in and magically solve and slim down at the end.</p><p><strong>Cass: </strong>No, I agree. Yeah, you have to commit to it, or you have to commit to frameworks where— I’m not necessarily going to use these frameworks. I’m going to go and try to do some of this myself, or I’m going to be very careful in how I look at my frameworks, like what libraries I’m going to use. Maybe I’m going to use a library that doesn’t pull in other dependencies. This guy maybe wrote this library which has got 80 percent of what I need it to do, but it doesn’t pull in other libraries, unlike the bells-and-whistles thing which actually does 400 percent of what I need it to do. 
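The kind of library audit Cass describes can be made concrete. Below is a minimal, hypothetical sketch (an editor’s illustration, not something from the episode) that walks the transitive dependency tree of an installed Python package using only the standard library; "requests" is just an example package name.

```python
# A minimal sketch of auditing a package's "dark matter": walk the
# transitive dependency tree of an installed Python package using only
# the standard library. "requests" below is just an illustrative name.
import re
from importlib.metadata import PackageNotFoundError, requires

def req_name(requirement):
    """Extract the bare package name from a requirement string,
    e.g. 'urllib3 (<3,>=1.21.1); extra == "socks"' -> 'urllib3'."""
    return re.split(r"[ (;<>=!~\[]", requirement.strip(), maxsplit=1)[0]

def transitive_deps(package, seen=None):
    """Collect the names of all transitive, non-optional dependencies."""
    if seen is None:
        seen = set()
    try:
        reqs = requires(package) or []
    except PackageNotFoundError:
        return seen  # not installed locally, so stop descending
    for req in reqs:
        if "extra ==" in req:  # skip optional extras
            continue
        name = req_name(req)
        if name and name not in seen:
            seen.add(name)
            transitive_deps(name, seen)
    return seen

if __name__ == "__main__":
    deps = transitive_deps("requests")
    print(f"requests pulls in {len(deps)} other packages: {sorted(deps)}")
```

Running something like this against a real project’s top-level dependencies is one cheap way to see how much code you are actually shipping.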
And maybe I might write that extra 20 percent. And again, it requires skill and it requires time. And it’s like anything else. There are just incentives in the world that really tend to sort of militate against having the time to do that, which, again, is where we start coming back into some of these regulatory regimes where it becomes a compliance requirement. And I think a lot of people listening will know that time when things get done is when things become compliance requirements, and then it’s mandatory. And that has its own set of issues with it in terms of losing a certain amount of flexibility and so on, but that sometimes seems to be the only way to get things done in commercial environments certainly. Not in terms of personal projects, but certainly for commercial environments.</p><p><strong>Goldstein:</strong> So what are the consequences, in a commercial environment, of bloat, besides— are there things beyond security? Here’s why I’m asking, because the idea that you’re going to legislate lean software into the world as opposed to having it come from the bottom up where people are recognizing the need because it’s costing them something—so what are the commercial costs to bloated software?</p><p><strong>Cass: </strong>Well, apparently, absolutely none. That really is the issue. Really, none, because software often isn’t maintained. People just really want to get their products out. They want to move very quickly. We see this when it comes to— they like to abandon old software very quickly. Some companies like to abandon old products as soon as the new one comes out. There really is no commercial downside to using this big software because you can always say, “Well, it’s industry standard. Everybody is doing it.” Because everybody’s doing it. You’re not necessarily losing out to your competitor. We see these massive security breaches. And again, the legislating for lean software is through demanding better security. 
Because currently, we see these huge security breaches, and there are very minimal consequences. Occasionally, yes, a company screws up so badly that it goes down. But even so, sometimes they’ll reemerge in a different form, or they’ll get gobbled up by someone else.</p><p>There really does not, at the moment, seem to be any commercial downside for this big software. There are a lot of weird incentives in the system, and this certainly is one of them, where, actually, the incentive is, “Just use all the frameworks. Bolt everything together. Use JS Electron. Use all the libraries. Doesn’t matter, because the end user is not really going to notice very much if their program is 10 megabytes versus 350 megabytes,” especially now, when people are completely immune to the size of their software. Back in the days when software came on floppy disks, if you had a piece of software that <a href="https://www.youtube.com/watch?v=zWuJxKtF3gk" rel="noopener noreferrer" target="_blank">came on 100 floppy disks</a>, that would be considered impractical. But nowadays, people are downloading gigabytes of data just to watch a movie or something like that. If a program is 1 gigabyte versus 100 megabytes, they don’t really notice. I mean, the only time people notice is with, say, video games— a really big video game. And then you see people going, “Well, it took me three hours to download the 70 gigabytes for this AAA game that I wanted to play.” That’s about the only time you see people complaining about the actual storage size of software anymore. Everybody else just doesn’t care. Yeah, it’s just invisible to them now.</p><p><strong>Goldstein:</strong> And that’s a good thing. 
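For scale, the size comparisons in that exchange are easy to check as back-of-the-envelope arithmetic, taking the standard 1.44-megabyte capacity of a 3.5-inch high-density floppy disk:

```python
# Back-of-the-envelope check on the sizes mentioned above:
# a 3.5-inch high-density floppy disk holds 1.44 megabytes.
FLOPPY_MB = 1.44

print(round(100 * FLOPPY_MB))      # the "impractical" 100-disk box: 144 MB
print(round(350 / FLOPPY_MB))      # a 350 MB application: 243 floppies
print(round(70_000 / FLOPPY_MB))   # a 70 GB AAA game: 48611 floppies
```

So yesterday’s impractical 100-disk install is smaller than a single "bloated" app today, and a modern AAA download would have filled tens of thousands of disks.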
I think <a href="https://spectrum.ieee.org/u/charles-q-choi" target="_self">Charles Choi</a> had a piece for us on-- we’ll have <a href="https://spectrum.ieee.org/data-storage-petabit-optical-disc" target="_self">endless storage</a>, right, on disks, apparently.</p><p><strong>Cass:</strong> Oh, I love this story, because it’s another story of a technology that looks like it’s headed off into the sunset. “We’ll see you in the museum.” And this is optical disk technology. I love this story. We had laser disks. We had CDs. We had CD-ROMs. We had DVDs. We had Blu-ray. And Blu-ray really seemed to be, in many ways, the end of the line for optical disks: after that, we’re just going to use solid-state storage devices, and we’ll store all our data in those tiny little memory cells. And now we have these researchers coming back. And now my brain has frozen for a second on where they’re from. I think they’re from Shanghai. Is it the Shanghai Institute?</p><p><strong>Goldstein: </strong>Yes, I think so.</p><p><strong>Cass:</strong> Yes, Shanghai. There we go. There we go. Very nice subtle check of the website there. And it might let us squeeze a data center into something the size of a room. And this is optical disk technology where you can make a disk that’s about the size of a regular DVD, and you can squeeze an enormous amount of data onto it. I think he’s talking about petabits in a—</p><p><strong>Goldstein: </strong>Yeah, like 1.6 petabits on--</p><p><strong>Cass:</strong> Petabits on this optical surface. And the magic key is, as always, a new material. I mean, we do love new materials, because they’re always the wellspring from which so much springs. And we have at <em>Spectrum</em> many times chased down materials that have not necessarily fulfilled their promise. 
We have a long history— and sometimes materials go away and they come back, like—</p><p><strong>Goldstein: </strong>They come back, like <a href="https://spectrum.ieee.org/graphene-semiconductor" target="_self">graphene</a>. It’s gone away. It’s come back.</p><p><strong>Cass:</strong> —graphene and stuff like this. We’re always looking for the new magic material. But this new magic material, which has this—</p><p><strong>Goldstein:</strong> Oh, yeah. Oh, I looked this one up, Stephen.</p><p><strong>Cass: </strong>What is it? What is it? What is it? It is called--</p><p><strong>Goldstein:</strong> Actually, our story did not even bother to include the translation because it’s so botched. But it is A-I-E, dash, D-D-P-R, AIE-DDPR, or aggregation-induced emission dye-doped photoresist.</p><p><strong>Cass: </strong>Okay. Well, let’s just call it magic new dye-doped photoresist. And the point about this is that this material works at basically four wavelengths. And why do you want a material that responds at four different wavelengths? Because of the limit on optical technologies— and I’m also stretching here into the boundaries on either side of optical. The standard rule is you can’t really do anything that’s smaller than the wavelength of the light you’re using to read or write. So the wavelength of your laser sets the density of data on your disk. And what these clever clogs have done is they’ve worked out that by using basically two lasers at once, you can, in a very clever way, write a blob that is smaller than the wavelength of light, and you can do it in multiple layers. So usually, your standard Blu-ray disks are very limited in the number of layers they have on them, like CDs originally, one layer.</p><p>So you have multiple layers on this disk that you can write to, and you can write at resolutions that you wouldn’t think you could do from your high school physics or whatever. 
So you write it using these two lasers of two wavelengths, and then you read it back using another two lasers at two different wavelengths. And this all localizes and makes it work. And suddenly, as I say, you can hopefully squeeze racks and racks and racks of solid-state storage down to something that is very small. And what’s also interesting is that they’re actually closer to commercialization than you normally see with these early material stories. And they also think you could write one of these disks in six minutes, which is pretty impressive. As someone who has sat watching the progress bars on a lot of DVD-ROM burns over the years back in the day, six minutes to burn these—that’s probably for commercial mass production—is still pretty impressive. And so you could solve this problem of some of these large data transfers, where currently you do have to ship servers from one side of the world to the other because it actually is too slow to copy things over the internet. And so this would increase the bandwidth of the global sneakernet, or station wagon net, quite dramatically as well.</p><p><strong>Goldstein: </strong>Yeah. They are super interested in seeing them deployed in big data centers. And in order to do that, they still have to get the writing speed up and the energy consumption down. So the real engineering is just beginning for this. Well, speaking of new materials, there’s a new use for aluminum nitride, according to our colleague <a href="https://spectrum.ieee.org/u/glenn-zorpette" target="_self">Glenn Zorpette</a>, who wrote about the use of the material in <a href="https://spectrum.ieee.org/aluminum-nitride" target="_self">power transistors</a>. And apparently, it has a much wider band gap, and if you can properly dope it, it’ll be able to handle much higher voltages. So what does this mean for the grid, Stephen?</p><p><strong>Cass: </strong>Yeah. 
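To put the 1.6-petabit figure from the optical-disk discussion in perspective, here is a quick back-of-the-envelope calculation in decimal units, comparing against the standard 25-gigabyte capacity of a single-layer Blu-ray disc:

```python
# Rough scale of a 1.6-petabit optical disc, in decimal units,
# compared against a standard 25 GB single-layer Blu-ray disc.
DISC_BITS = 1.6e15        # 1.6 petabits
BLURAY_BYTES = 25e9       # single-layer Blu-ray capacity, in bytes

disc_bytes = DISC_BITS / 8           # bits -> bytes
print(disc_bytes / 1e12)             # 200.0 terabytes per disc
print(disc_bytes / BLURAY_BYTES)     # 8000.0 single-layer Blu-rays' worth
```

That is, one DVD-sized disc holding roughly 200 terabytes, the equivalent of about 8,000 single-layer Blu-rays, which is why the data-center and sneakernet angles are plausible.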
So I actually find <a href="https://spectrum.ieee.org/search/?q=power+electronics" target="_self">power electronics</a> really fascinating, because most of the history of transistors, right, is about making them use ever smaller amounts of electricity—5-volt logic used to be pretty common; now 3.3 volts is pretty common, and even 1.1 volts is common—and really sipping microamps of power through these circuits. And power electronics kind of gets you back to the origins of electrical engineering, which is when you’re really talking about power and energy, and you are humping around thousands of volts, and you’re humping around huge currents. And power electronics is an attempt to bring some of that smartness that transistors give you into these much higher voltages. And we’ve seen some of this with, say, gallium nitride, which is a material we had talked about in <em>Spectrum</em> for years, speaking of materials that had been floating around for years, and then really, in the last five years, you’ve seen it be a real commercial success. So all those wall warts we have have gotten dramatically smaller and better, which is why you can have a USB-C charger where you can drive your laptop and a bunch of ancillary peripherals all off one little wall wart without worrying about it bringing down the house, because it’s just so efficient and so small. And most of those now are these new gallium-nitride-based devices, which is one example where a material really is making some progress.</p><p>And so aluminum nitride is another step along that path: being able to handle even higher voltages, being able to handle bigger currents. So we’re not yet up to the level where you could put these directly on massive high-voltage transmission lines, but there’s a rising tide of where you can put this kind of electronics into your systems. First off, it means things get more efficient. 
As I say, these power adapters that convert AC to DC, they get more efficient. Your power supplies in your computer get more efficient, and the power supplies in your data center get more efficient. We’ve talked about how much power data centers use today. And it all adds up. And the whole point of this is that you do want <a href="https://spectrum.ieee.org/smart-transformers-will-make-the-grid-cleaner-and-more-flexible" target="_self">a grid that is as smart as possible</a>. You need something that will be able to handle very intermittent power sources, fluctuating power sources. The current grid is really built around very, very stable power supplies, very constant power supplies, very stable frequency timings. So the frequency of the grid is the key to stability. Everything’s got to be on that 60 hertz in the US, 50 hertz in other places. Every power station has got to be synchronized very precisely with the others. So stability is a problem, and being able to handle fluctuations quickly is the key to both grid stability and to being able to handle some of these intermittent sources where the power varies as the wind blows stronger or weaker, as the day turns, as clouds move in front of your solar farm. So it’s very exciting from that point of view to see these very esoteric technologies. We’re talking about things like band gaps and how do you stick the right doping molecule in the matrix, but it does bubble up into these very-large-scale impacts when we’re talking about the future of electrical engineering and that old-school power and energy, keeping the lights on and the motors churning, kind of way.</p><p><strong>Goldstein: </strong>Right. And the electrification of everything is just going to put bigger demands on the grid, like you were saying, for alternative energy sources. “Alternative.” They’re all price competitive now, the solar and wind. 
But—</p><p><strong>Cass: </strong>Yeah, not just at the generate— this idea that you have distributed power and power can be generated locally, and also being able to switch power. So you have these smart transformers so that if you are generating surplus power on your solar panels, you can send that to maybe your neighbor next door who’s charging their electric vehicle, without it having to be mediated at all by going up to the power company. Maybe your local transformer is making some of these local grid-scale balancing decisions that are much closer to where the power is being used.</p><p><strong>Goldstein: </strong>Oh, yeah. Stephen, that reminds me of this other piece we had this week, actually, on utilities and the profit motive on their part <a href="https://spectrum.ieee.org/transmission-expansion" target="_self">hampering US grid expansion</a>. It’s by a Harvard scholar named Ari Peskoe, and his first line is, “The United States is not building enough transmission lines to connect regional power networks. The deficit is driving up electricity prices, reducing grid reliability, and hobbling renewable-energy deployment.” And basically, what he does a good job of explaining is not only how these new projects might impact utilities’ bottom lines but also all of the industry alliances that they’ve established over the years, which become these embedded interests that need to be disrupted.</p><p><strong>Cass: </strong>Yeah, the truth is there is a list of things we could do. Not magic things. There are pretty obvious things we could do that would make the US grid— even if you don’t care much about renewables, you probably do care about your grid resilience and reliability and being able to move power around. The US grid is not great. It is creaky. We know there are things that could be done. As a byproduct of doing those things, you also would actually make it much more renewable friendly. So it is this issue of— there are political problems. 
Depending on which administration is in power, there is more or less of an appetite to deal with some of these interests. And then, yeah, these utilities often have incentives to kind of keep things the way they are. They don’t necessarily want a grid where it’s easier to get cheaper electricity or more green electricity from one place to a different market. Everybody loves a captive monopoly market they can sell into. I mean, that’s wonderful if you can do that. And then there are many places with anti-competition rules. But grids are a real— it’s really difficult to break down those barriers.</p><p><strong>Goldstein: </strong>It is. And if you’re in Texas in a bad winter and the grid goes down and you <a href="https://spectrum.ieee.org/what-texas-freeze-fiasco-tells-us-about-future-of-the-grid" target="_self">need power from outside</a> but you’re an island unto yourself and you can’t import that power, it becomes something that is disruptive to people’s lives, right? And people pay attention to it during a disaster, but we have a slow-rolling disaster called climate change, and if we don’t start overturning some of the barriers to electrification and alternative energy sources, we’re kind of digging our own grave.</p><p><strong>Cass: </strong>It is very tricky because we do then get into these issues where you build these transmission lines, and there are questions about who ends up paying for those transmission lines, about whose lands they get built over, the local impacts of those. And it’s hard sometimes to tell. Is this a group that is really genuinely feeling that there is a sort of justice gap here— that they’re being asked to pay for the sins of higher carbon producers, or is this astroturfing? And sometimes it’s very difficult to tell whether these organizations are being underwritten by people who are invested in the status quo, and it does become a knotty problem. 
And we are going to, I think, as things get more and more difficult, be forced into making some difficult choices. And I am not quite sure how that’s going to play out, but I do know that we will keep tracking it as best we can. And I think maybe, yeah, you just have to come back and see how we keep covering the grid in the pages of <em>Spectrum</em>.</p><p><strong>Goldstein: </strong>Excellent. Well—</p><p><strong>Cass: </strong>And so that’s probably a good point where— I think we’re going to have to wrap this round up here. But thank you so much for coming on the show.</p><p><strong>Goldstein: </strong>Excellent. Thank you, Stephen. Much fun.</p><p><strong>Cass: </strong>So today on <em>Fixing The Future</em>, I was talking with <em>Spectrum</em>’s Editor in Chief Harry Goldstein, and we talked about electric vehicles, we talked about software bloat, and we talked about new materials. I’m Stephen Cass, and I hope you join us next time.</p>]]></description><pubDate>Wed, 06 Mar 2024 10:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/lean-software-power-electronics</guid><category>Fixing-the-future</category><category>Power-electronics</category><category>Type-podcast</category><category>Lean-software</category><dc:creator>Harry Goldstein</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/46469653/origin.jpg"/></item><item><title>Let Robots Do Your Lab Work</title><link>https://spectrum.ieee.org/air-force-research-ares-os</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=51522099&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/a06d86c9" width="100%">
</iframe><p><strong>Dina Genkina:</strong> Hi. I’m <a href="https://spectrum.ieee.org/u/dina_genkina" target="_self">Dina Genkina</a> for <em>IEEE Spectrum</em>’s <em>Fixing the Future</em>. Before we start, I want to tell you that you can get the latest coverage from some of <em>Spectrum</em>’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to <a href="https://spectrum.ieee.org/newsletters/" target="_self">spectrum.ieee.org/newsletters</a> to subscribe. Today my guest is Dr. <a href="https://www.linkedin.com/in/benji-maruyama-563b57b8/" rel="noopener noreferrer" target="_blank">Benji Maruyama</a>, a Principal Materials Research Engineer at the <a href="https://www.afrl.af.mil/" rel="noopener noreferrer" target="_blank">Air Force Research Laboratory</a>, or AFRL. Dr. Maruyama is a materials scientist, and his research focuses on carbon nanotubes and making research go faster. But he’s also a man with a dream, a dream of a world where science isn’t something done by a select few locked away in an ivory tower, but something most people can participate in. He hopes to start what he calls the billion scientist movement by building AI-enabled research robots that are accessible to all. Benji, thank you for coming on the show.</p><p>Benji Maruyama: Thanks, Dina. Great to be with you. I appreciate the invitation.</p><p>Genkina: Yeah. So let’s set the scene a little bit for our listeners. So you advocate for this billion scientist movement. If everything works amazingly, what would this look like? Paint us a picture of how AI will help us get there.</p><p>Maruyama: Right, great. Thanks. Yeah. So one of the things as you set the scene there is right now, to be a scientist, most people need to have access to a big lab with very expensive equipment. So I think top universities, government labs, industry folks, lots of equipment. It’s like a million dollars, right, to get one of them. 
And frankly, just not that many of us have access to those kinds of instruments. But at the same time, there’s probably a lot of us who want to do science, right? And so how do we make it so that anyone who wants to do science can try, can have access to instruments so that they can contribute to it? So that’s the basics behind citizen science or the democratization of science, so that everyone can do it. And one way to think of it is what happened with <a href="https://spectrum.ieee.org/tag/3d-printing" target="_self">3D printing</a>. It used to be that in order to make something, you had to have access to a machine shop or maybe get fancy tools and dies that could cost tens of thousands of dollars a pop. Or if you wanted to do electronics, you had to have access to very expensive equipment or services. But when 3D printers came along and became very inexpensive, all of a sudden now, anyone with access to a 3D printer, so maybe in a school or a library or a makerspace, could print something out. And it could be something fun, like a game piece, but it could also be something that got you to an invention, something that was maybe useful to the community, either a prototype or an actual working device.</p><p>And so really, 3D printing democratized manufacturing, right? It made it so that many more of us could do things that before only a select few could. And so that’s where we’re trying to go with science now, is that instead of only those of us who have access to big labs, we’re building research robots. And when I say we, we’re doing it, but now there are a lot of others who are doing it as well, and I’ll get into that. But the example that we have is that we took a 3D printer that you can buy off the internet for less than $300. Plus a couple of extra parts, a webcam, a Raspberry Pi board, and a tripod really, so only four components. You can get them all for $300. 
Load them with open-source software that was developed by AFIT, the Air Force Institute of Technology. So Burt Peterson and Greg Captain [inaudible]. We worked together to build this fully autonomous 3D printing robot that taught itself how to print to better than manufacturer’s specifications. So that was a really fun advance for us, and now we’re trying to take that same idea and broaden it. So I’ll turn it back over to you.</p><p>Genkina: Yeah, okay. So maybe let’s talk a little bit about this automated research robot that you’ve made. So right now, it works with a 3D printer, but is the big picture that one day it’s going to give people access to that million-dollar lab? What would that look like?</p><p>Maruyama: Right, so there are different models out there. One, we just did a workshop at the University of— sorry, North Carolina State University about that very problem, right? So there are two models. One is to get low-cost scientific tools like the 3D printer. There are a couple of different chemistry robots, one out of the University of Maryland and NIST, one out of the University of Washington, that are in the sort of $300 to $1,000 range, which makes them accessible. The other part is kind of the user facility model. So in the US, the Department of Energy National Labs have many user facilities where you can apply to get time on very expensive instruments. Now we’re talking tens of millions. For example, Brookhaven has a synchrotron light source where you can sign up, and it doesn’t cost you any money to use the facility. And you can get days on that facility. And so that’s already there, but now the advance is that by using autonomous, closed-loop experimentation, the work that you do will be much faster and much more productive. 
So, for example, on ARES, our Autonomous Research System at AFRL, we actually were able to do experiments so fast that a professor who came into my lab just took me aside and said, “Hey, Benji, in a week’s worth of time, I did a dissertation’s worth of research.” So maybe five years’ worth of research in a week. So imagine if you keep doing that week after week after week, how fast research goes. So it’s very exciting.</p><p>Genkina: Yeah, so tell us a little bit about how that works. So what’s this system that has sped up five years of research into a week and made graduate students obsolete? Not yet, not yet. How does that work? Is that the 3D printer system or is that a—</p><p>Maruyama: So we started with our system to grow carbon nanotubes. And I’ll say, actually, when we first thought about it, your comment about graduate students being absolute— obsolete, sorry, is interesting and important because, when we first built our system that worked 100 times faster than normal, I thought that might be the case. We called it sort of graduate student out of the loop. But when I started talking with people who specialize in autonomy, it’s actually the opposite, right? It’s actually empowering graduate students to go faster and also to do the work that they want to do, right? And so just to digress a little bit, if you think about farmers before the Industrial Revolution, what were they doing? They were plowing fields with oxen and beasts of burden and hand plows. And it was hard work. And now, of course, you wouldn’t ask a farmer today to give up their tractor or their combine harvester, right? They would say, of course not. So very soon, we expect it to be the same for researchers, that if you ask a graduate student to give up their autonomous research robot five years from now, they’ll say, “Are you crazy? 
This is how I get my work done.”</p><p>But for our original ARES system, it worked on the synthesis of <a href="https://spectrum.ieee.org/tag/carbon-nanotubes" target="_self">carbon nanotubes</a>. So that meant that what we’re doing is trying to take this system that’s been pretty well studied, but we haven’t figured out how to make it at scale. So at hundreds of millions of tons per year, sort of like polyethylene production. And part of that is because it’s slow, right? One experiment takes a day, but also because there are just so many different ways to do a reaction, so many different combinations of temperature and pressure and a dozen different gases and half the periodic table as far as the catalyst goes. It’s just too much to just brute force your way through. So even though we went from one experiment a day to 100 experiments a day, that combinatorial space vastly overwhelmed our ability to do it, even with many research robots or many graduate students. So the idea of having artificial intelligence algorithms that drive the research is key. And so that ability to do an experiment, see what happened, and then analyze it, iterate, and constantly be able to choose the optimal next best experiment to do is where ARES really shines. And so that’s what we did. ARES taught itself how to grow carbon nanotubes at controlled rates. And we were the first ones to do that for materials science in our 2016 publication.</p><p>Genkina: That’s very exciting. So maybe we can peer under the hood a little bit of this AI model. How does the magic work? How does it pick the next best point to try, and why is it better than what you could do as a graduate student or researcher?</p><p>Maruyama: Yeah, and so I think it’s interesting, right? In science, a lot of times we’re taught to hold everything constant, change one variable at a time, search over that entire space, see what happened, and then go back and try something else, right? 
So we reduce it to one variable at a time. It’s a reductionist approach. And that’s worked really well, but a lot of the problems that we want to go after are simply too complex for that reductionist approach. And so the benefit of being able to use artificial intelligence is that high dimensionality is no problem, right? Searching over a very complex, high-dimensional parameter space, tens of dimensions, which is overwhelming to humans, is just basically bread and butter for AI. The other part to it is the iterative part. The beauty of doing autonomous experimentation is that you’re constantly iterating. You’re constantly learning over what just happened. You might also say, well, not only do I know what happened experimentally, but I have other sources of prior knowledge, right? So for example, the ideal gas law says that this should happen, right? Or the Gibbs phase rule might say, this can happen or this can’t happen. So you can use that prior knowledge to say, “Okay, I’m not going to do those experiments because that’s not going to work. I’m going to try here because this has the best chance of working.”</p><p>And within that, there are many different machine learning or artificial intelligence algorithms. Bayesian optimization is a popular one to help you choose what experiment is best. There’s also new AI that people are trying to develop to get better search.</p><p>Genkina: Cool. And so the software part of this autonomous robot is available for anyone to download, which is also really exciting. So what would someone need to do to be able to use that? Do they need to get a 3D printer and a Raspberry Pi and set it up? And what would they be able to do with it? Can they just build carbon nanotubes or can they do more stuff?</p><p>Maruyama: Right. 
So what we did, we built ARES OS, which is our open-source software, and we’ll make sure to get you <a href="https://github.com/AFRL-ARES/ARES_OS" rel="noopener noreferrer" target="_blank">the GitHub link</a> so that anyone can download it. And the idea behind ARES OS is that it provides a software framework for anyone to build their own autonomous research robot. And so the 3D printing example will be out there soon. But it’s the starting point. Of course, if you want to build your own new kind of robot, you still have to do the software development, for example, to link the ARES framework, the core, if you will, to your particular hardware, maybe your particular camera or 3D printer, or pipetting robot, or spectrometer, whatever that is. We have examples out there, and we’re hoping to get to a point where it becomes much more user-friendly. So having direct Python connections, so that you don’t— currently, it’s programmed in C#. But to make it more accessible, we’d like it to be set up so that if you can do Python, you can probably have good success in building your own research robot.</p><p>Genkina: Cool. And you’re also working on an educational version of this, I understand. So what’s the status of that, and what’s different about that version?</p><p>Maruyama: Yeah, right. So the educational version is going to be sort of a combination of hardware and software. So what we’re starting with is a low-cost 3D printer. And we’re collaborating now with the <a href="https://engineering.buffalo.edu/materials-design-innovation.html" rel="noopener noreferrer" target="_blank">University at Buffalo, Materials Design Innovation Department</a>. And we’re hoping to build up a robot based on a 3D printer. And we’ll see how it goes. It’s still evolving. But for example, it could be based on this very inexpensive $200 3D printer. 
It’s an <a href="https://store.creality.com/collections/ender-series-3d-printer" rel="noopener noreferrer" target="_blank">Ender 3D printer</a>. There’s another printer out there that’s based on the <a href="https://jubilee3d.com/index.php?title=Main_Page" rel="noopener noreferrer" target="_blank">University of Washington’s Jubilee printer</a>. And that’s a very exciting development as well. So Professors <a href="https://mse.washington.edu/facultyfinder/lilo-pozzo" rel="noopener noreferrer" target="_blank">Lilo Pozzo</a> and <a href="https://www.hcde.washington.edu/peek" rel="noopener noreferrer" target="_blank">Nadya Peek</a> at the University of Washington built this Jubilee robot with that idea of accessibility in mind. And so combining our ARES OS software with their Jubilee robot hardware is something that I’m very excited about and hope to be able to move forward on.</p><p>Genkina: What’s this Jubilee 3D printer? How is it different from a regular 3D printer?</p><p>Maruyama: It’s very open source. Not all 3D printers are open source, and it’s based on a gantry system with interchangeable heads. So for example, you can get not just a 3D printing head, but other heads that might do things like indentation, to see how stiff something is, or maybe carry a camera that can move around. And so it’s the flexibility of being able to pick different heads dynamically that I think makes it super useful. For the software, right, we have to have a good, accessible, user-friendly graphical user interface, a GUI. That takes time and effort, so we want to work on that. But again, that’s just the hardware and software. Really, to make ARES a good educational platform, we need to make it so that a teacher who’s interested can have the lowest activation barrier possible, right? 
We want her or him to be able to pull a lesson plan off the internet, have supporting YouTube videos, and actually have material that is a fully developed curriculum mapped against state standards.</p><p>So that, right now, if you’re a teacher who— let’s face it, teachers are already overwhelmed with all that they have to do, and putting something like this into their curriculum can be a lot of work, especially if you have to think about, well, I’m going to take all this time, but I also have to meet all of my teaching standards, all the state curriculum standards. And so if we build that out so that it’s a matter of just looking at the curriculum and just checking off the boxes of what state standards it maps to, then that makes it that much easier for the teacher to teach.</p><p>Genkina: Great. And what do you think is the timeline? Do you expect to be able to do this sometime in the coming year?</p><p>Maruyama: That’s right. These things always take longer than hoped for or expected, but we’re hoping to do it within this calendar year, and we’re very excited to get it going. And I would say for your listeners, if you’re interested in working together, please let me know. We’re very excited about trying to involve as many people as we can.</p><p>Genkina: Great. Okay, so you have the educational version, and you have the more research-geared version, and you’re working on making this educational version more accessible. Is there something with the research version that you’re working on next, how you’re hoping to upgrade it, or is there something you’re using it for right now that you’re excited about?</p><p>Maruyama: There are a number of things. We are very excited about the possibility of carbon nanotubes being produced at very large scale. So right now, people may remember carbon nanotubes as that great material that sort of never made it and was very overhyped. 
But there’s a core group of us who are still working on it because of the important promise of that material. So it’s a material that is super strong, stiff, lightweight, electrically conductive. Much better than silicon as a digital electronics compute material. All of those great things, except we’re not making it at large enough scale. It’s actually used pretty significantly in lithium-ion batteries. It’s an important application. But other than that, it’s sort of like, where’s my flying car? It’s never panned out. But there’s, as I said, a group of us who are working to really produce carbon nanotubes at much larger scale. So large scale for nanotubes now is sort of in the kilogram or ton scale. But what we need to get to is hundreds of millions of tons per year production rates. And why is that? Well, there’s a great effort that came out of <a href="https://spectrum.ieee.org/tag/arpa-e" target="_self">ARPA-E</a>. So the Department of Energy Advanced Research Projects Agency, and the E is for Energy in that case.</p><p>So they funded a collaboration between Shell Oil and Rice University to pyrolyze methane, so natural gas, into hydrogen for the hydrogen economy. So now that’s a clean-burning fuel, plus carbon. And instead of burning the carbon to CO2, which is what we now do, right? Right now we just take natural gas and feed it through a turbine and generate electric power, and that, by the way, generates so much CO2 that it’s causing global climate change. So if we can do that pyrolysis at scale, at hundreds of millions of tons per year, it’s literally a save-the-world proposition, meaning that we can avoid so much CO2 that we can reduce global CO2 emissions by 20 to 40 percent. And that is the save-the-world proposition. It’s a huge undertaking, right? That’s a big problem to tackle, starting with the science. We still don’t have the science to efficiently and effectively make carbon nanotubes at that scale. 
And then, of course, we have to take the material and turn it into useful products. So batteries are the first example, but think about replacing copper for electrical wire, replacing steel for structural materials, aluminum, all those kinds of applications. But we can’t do it. We can’t even get to that kind of development because we haven’t been able to make the carbon nanotubes at sufficient scale.</p><p>So I would say that’s something that I’m working on now that I’m very excited about and trying to get there, but it’s going to take some good developments in our research robots and some very smart people to get us there.</p><p>Genkina: Yeah, it seems so counterintuitive that making everything out of carbon is good for lowering carbon emissions, but I guess that’s the break.</p><p>Maruyama: Yeah, it is interesting, right? So people talk about carbon emissions, but really, the molecule that’s causing global warming is carbon dioxide, CO2, which you get from burning carbon. And so if you take that methane and pyrolyze it to carbon nanotubes, that carbon is now sequestered, right? It’s not going off as CO2. It’s staying in the solid state. And not only is it just not going up into the atmosphere, but now we’re using it to replace steel, for example. And by the way, steel, aluminum, and copper production all emit lots of CO2, right? They’re energy-intensive materials to produce. So it’s kind of ironic.</p><p>Genkina: Okay, and are there any other research robots that you’re excited about that you think are also contributing to this democratization of science process?</p><p>Maruyama: Yeah, so we talked about Jubilee. There’s also the NIST robot, which is from Professor Ichiro Takeuchi at Maryland and Gilad Kusne at NIST, the National Institute of Standards and Technology. Theirs is fun too. It’s actually based on a LEGO robotics platform, so it’s an actual chemistry robot built out of LEGO bricks. So I think that’s fun as well. 
And you can imagine, just like we have LEGO robot competitions, we can have autonomous research robot competitions, where we try and do research through these robots, or competitions where everybody sort of starts with the same robot, just like with LEGO robotics. So that’s fun as well. But I would say there’s a growing number of people doing these kinds of, first of all, low-cost science, accessible science, but in particular low-cost autonomous experimentation.</p><p>Genkina: So how far are we from a world where a high school student has an idea and they can just go and carry it out on some autonomous research system at some high-end lab?</p><p>Maruyama: That’s a really good question. I hope that it’s going to be in 5 to 10 years that it becomes reasonably commonplace. But it’s still going to take some significant investment to get this going. And so we’ll see how that goes. But I don’t think there are any scientific impediments to getting this done. There is a significant amount of engineering to be done. And sometimes we hear, “Oh, it’s just engineering.” But the engineering is a significant problem. And it’s work to get some of these things accessible and low cost. But there are lots of great efforts. There are people who have used CDs, compact discs, to make spectrometers. There are lots of good examples of citizen science out there. But it’s, I think, at this point going to take investment in software and hardware to make it accessible, and then, importantly, getting students really up to speed on what AI is and how it works and how it can help them. And so I think it’s actually really important. So again, that’s the democratization of science: if we can make it available to everyone and accessible, then that helps everyone contribute to science. And I do believe that there are important contributions to be made by ordinary citizens, by people who aren’t, you know, PhDs working in a lab.</p><p>And I think there’s a lot of science out there to be done. 
If you ask working scientists, almost no one has run out of ideas or things they want to work on. There are many more scientific problems to work on than we have the time or funding to work on. And so if we make science cheaper to do, then all of a sudden, more people can do science. And so those questions start to be resolved. And so I think that’s super important. And now, instead of just those of us who work in big labs, you have millions, tens of millions, up to a billion people, that’s the billion scientist idea, who are contributing to the scientific community. And that, to me, is so powerful, that many more of us can contribute than just the few of us who do it right now.</p><p>Genkina: Okay, that’s a great place to end on, I think. So, today we spoke to Dr. Benji Maruyama, a materials scientist at AFRL, about his efforts to democratize scientific discovery through automated research robots. For <em>IEEE Spectrum</em>, I’m Dina Genkina, and I hope you’ll join us next time on <em>Fixing the Future</em>.</p>]]></description><pubDate>Wed, 21 Feb 2024 17:33:43 +0000</pubDate><guid>https://spectrum.ieee.org/air-force-research-ares-os</guid><category>Fixing-the-future</category><category>Type-podcast</category><category>Robots</category><category>Automation</category><category>Ares-os</category><dc:creator>Dina Genkina</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/51522099/origin.webp"/></item><item><title>Figuring Out Semiconductor Manufacturing's Climate Footprint</title><link>https://spectrum.ieee.org/semiconductor-manufacturing-climate-footprint</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/fixing-the-future-podcast-logo.jpg?id=46469653&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/cbdf9ee1" width="100%">
</iframe><p><strong>Samuel K. Moore:</strong> Hi. I’m <a href="https://spectrum.ieee.org/u/samuel-k-moore" target="_self">Samuel K. Moore</a> for <em>IEEE Spectrum</em>’s <em>Fixing the Future</em> podcast. Before we start, I want to tell you that you can get the latest coverage from some of <em>Spectrum</em>’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. The semiconductor industry is in the midst of a major expansion driven by the seemingly insatiable demands of AI, the addition of more intelligence in transportation, and national security concerns, among many other things. Governments and the industry itself are starting to worry what this expansion might mean for chip-making’s carbon footprint and its sustainability generally. Can we make everything in our world smarter without worsening climate change? I’m here with someone who’s helping figure out the answer. <a href="https://www.linkedin.com/in/lizzie-boakes-965ba917b/" rel="noopener noreferrer" target="_blank">Lizzie Boakes</a> is a life cycle analyst in the Sustainable Semiconductor Technologies and Systems Program at <a href="https://www.imec-int.com/en" rel="noopener noreferrer" target="_blank">IMEC</a>, the Belgium-based nanotech research organization. Welcome, Lizzie.</p><p><strong>Lizzie Boakes:</strong> Hello.</p><p><strong>Moore:</strong> Thanks very much for coming to talk with us.</p><p><strong>Boakes:</strong> You’re welcome. Pleasure to be here.</p><p><strong>Moore: </strong>So let’s start with, just how big is the carbon footprint of the semiconductor industry? And is it really big enough for us to worry about?</p><p><strong>Boakes:</strong> Yeah. So quantifying the carbon footprint of the semiconductor industry is not an easy task at all, and that’s because semiconductors are now embedded in so many industries.
So the most obvious industry is the ICT industry, which is estimated to be approximately 3 percent of global emissions. However, semiconductors can also be found in so many other industries, and their embedded nature is increasing dramatically. So they’re embedded in automotive applications, they’re embedded in healthcare applications, and in aerospace and defense applications too. So the expansion and adoption of semiconductors in all of these different industries just makes it very hard to quantify.</p><p>And the global impact of the semiconductor chip manufacturing itself is expected to increase as well because of the fact that we need more and more of these chips. So the global chip market is projected to have a 7 percent compound annual growth rate in the coming years. And bear in mind that the manufacturing of the IC chips itself often accounts for the largest share of the life cycle climate impact, especially for consumer electronics, for instance. This increase in demand for so many chips, and the demand for the manufacturing of those chips, will significantly increase the climate impact of the semiconductor industry. So it’s really crucial that we focus on this and we identify the challenges and try to work towards reducing the impact if we’re to achieve any of our ambitions of reaching net zero before 2050.</p><p><strong>Moore:</strong> Okay. So the way you looked at this, it was a cradle-to-gate life cycle. Can you explain what that entails, what that really means?</p><p><strong>Boakes:</strong> Yeah. So cradle to gate here means that we quantify the climate impacts, not only of the IC manufacturing processes that occur inside the semiconductor fab, but also we quantify the embedded impact of all of the energy and material flows that are entering the fab that are necessary for the fab to operate. So in other words, we try to quantify the climate impact of the value chain upstream of the fab itself, and that’s where the cradle begins.
So the extraction of all of the materials that you need, all of the energy sources. For instance, the extraction of coal for electricity production. That’s the cradle. And the gate refers to the point where you stop the analysis, you stop the quantification of the impact. And in our case, that is the end of the processing of the silicon wafer for a specific technology node.</p><p><strong>Moore:</strong> Okay. So it stops basically when you’ve got the <a href="https://en.wikipedia.org/wiki/Die_(integrated_circuit)" rel="noopener noreferrer" target="_blank">die</a>, but it hasn’t been packaged and put in a computer.</p><p><strong>Boakes:</strong> Exactly.</p><p><strong>Moore: </strong>And so why do you feel like you have to look at all the upstream stuff that a chip-maker may not really have any control over, like coal and such?</p><p><strong>Boakes:</strong> So there is a big need to analyze your emissions by scope. In the Greenhouse Gas Protocol, you have three different scopes. Your scope one is your direct emissions. Your scope two is the emissions related to the production of the electricity that you have consumed in your operation. And scope three is basically everything else, and a lot of people start with scope three, all of their upstream materials. It’s obviously the largest scope because it’s everything else other than what you’re doing. And I think it’s necessary to coordinate your supply chain so that you make sure you’re choosing the most sustainable solution that you can. You have power in your purchasing; you have power over how you choose your supply chain. And if you can manipulate it in a way where you have reduced emissions, then that should be done. Often, scope three is the largest proportion of the total impact, A, because it’s one of the biggest groups, but B, because there are a lot of materials and things coming in.
So yeah, it’s necessary to have a look up there and see how you can best reduce your emissions. And yeah, you can have power in your influence over what you choose in the end, in terms of what you’re purchasing.</p><p><strong>Moore:</strong> All right. So in your analysis, what did you see as sort of the biggest contributors to the chip fab’s carbon output?</p><p><strong>Boakes: </strong>So without effective abatement, the process gases that are released as direct emissions, they would really dominate the total emissions of the IC chip manufacturing. And this is because the process gases that are often consumed in IC manufacturing, they have a very high <a href="https://www.epa.gov/ghgemissions/understanding-global-warming-potentials" rel="noopener noreferrer" target="_blank">GWP</a> value. So if you do not abate them and you do not destroy them in an abatement system, then their emissions and contribution to global warming are very large. However, you can drastically reduce those emissions already by deploying effective abatement on specific process areas, the high-impact process areas. And if you do that, then this distribution shifts.</p><p>So then you would see that the contribution of the direct emissions would reduce because you’ve reduced your direct emission output. But then the next-biggest contributor would be the electrical energy. So the scope two emissions that are related to the production of the electricity that you’re consuming. And as you can imagine, IC manufacturing is very energy-intensive. So there’s a lot of electricity coming in, so it’s necessary then to start to decarbonize your electricity supply or reduce the carbon intensity of the electricity that you’re purchasing.</p><p>And then once you do that step, you would also see that again the distribution changes, and your scope three, your upstream materials, would then be the largest contributors to the total impact.
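</p><p>This shifting distribution can be illustrated with a toy calculation. The sketch below is a minimal illustration only; the scope totals, abatement efficiency, and grid-decarbonization figures are invented placeholders, not values from IMEC’s model:</p>

```python
# Toy illustration of how a fab's emission distribution shifts as
# mitigations are applied. All numbers are invented placeholders.

def fab_footprint(direct_unabated, abatement_efficiency,
                  electricity, grid_decarbonization, upstream_materials):
    """Return (scope 1, scope 2, scope 3) in arbitrary kg-CO2-eq units."""
    scope1 = direct_unabated * (1 - abatement_efficiency)  # process-gas emissions
    scope2 = electricity * (1 - grid_decarbonization)      # purchased electricity
    scope3 = upstream_materials                            # upstream value chain
    return scope1, scope2, scope3

# No mitigation: direct (scope 1) emissions dominate.
print([round(x, 1) for x in fab_footprint(100.0, 0.0, 60.0, 0.0, 40.0)])
# Effective abatement deployed: electricity (scope 2) becomes the biggest share.
print([round(x, 1) for x in fab_footprint(100.0, 0.9, 60.0, 0.0, 40.0)])
# Abatement plus a half-decarbonized grid: upstream materials (scope 3) dominate.
print([round(x, 1) for x in fab_footprint(100.0, 0.9, 60.0, 0.5, 40.0)])
```

<p>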
And the materials that we’ve identified as being the largest contributors to that impact would be, for instance, the silicon wafers themselves, the raw wafers before you start processing, as well as wet chemicals. So these are chemicals that are very specific to the semiconductor industry. There’s a lot of consumption there, and they’re very specific and have a high embedded GWP value.</p><p><strong>Moore: </strong>Okay. So if we could unpack a few of those. First off, what are some of these chemicals, and are they generally abated well these days? Or is this sort of something that’s still a coming problem?</p><p><strong>Boakes: </strong>Yeah. So they could range from specific <a href="https://en.wikipedia.org/wiki/Photoresist" rel="noopener noreferrer" target="_blank">photoresists</a> to basic chemicals that are consumed very heavily for neutralization of wastewater, these types of things. So there’s a combination: a chemical can have a high embedded GWP value, which means that it has a very large impact to produce the chemical itself, or you just have a lot that you’re consuming of it. So it might have a low embedded impact, but you’re just using so much of it that, in the end, it’s the higher contributor anyway. So you have two kinds of buckets there. And yeah, it’s just a matter of multiplying the amounts by your embedded emissions to see which ones come out on top.
But yeah, we see that often, the wastewater treatment uses a lot of these chemicals just for neutralization and treatment of wastewater on site, as well as very specific chemicals for the semiconductor industry such as photoresists and <a href="https://en.wikipedia.org/wiki/Chemical-mechanical_polishing" rel="noopener noreferrer" target="_blank">CMP</a> cleans. Those types of very specific chemistries are difficult to quantify the embedded impact of because often they’re proprietary, you don’t exactly know what goes into them, and it’s very difficult to actually characterize those chemicals appropriately. So often we apply a proxy value to those. So something that we would really like to improve in the future would be having more communication with our supply chain and really understanding what the real embedded impact of those chemicals would be. This is something that we really would need to work on to identify the high-impact chemicals and try anything we can to reduce them.</p><p><strong>Moore:</strong> Okay. And what about those direct greenhouse gas emission chemicals? Are those generally abated, or is that something that’s still being worked on?</p><p><strong>Boakes: </strong>So there is quite, yeah, a substantial amount of work going into abatement systems. So we have the usual methane combustion of process gases. There’s also now development in plasma abatement systems. So there are different abatement systems being developed, and their effectiveness is quite high. However, we don’t have such good oversight at the moment of the amount of abatement that’s being deployed in high-volume manufacturing. This, again, is quite a sensitive topic to discuss from a research perspective when you don’t have insight into the fab itself. So asking particular questions about how much abatement is deployed on certain tools is not such easy data to come across.</p><p>So we often go with models.
So we apply the IPCC Tier 2c model where, basically, you calculate the direct emissions from how much you’ve used. So it’s a mathematical model based on how much you’ve consumed. There is a model that generates the amounts that would be emitted directly into the atmosphere. So this is the model that we’ve applied. And we see that, yeah, it does correlate sometimes with the top-down reporting that comes from the industry. So yeah, I think there is a good way forward where we can start comparing top-down reporting to these bottom-up models that we’ve been generating from a kind of research perspective. So yeah, there’s still a lot of work to do to match those.</p><p><strong>Moore:</strong> Okay. Are there any particular nasties in terms of what those chemicals are? I don’t think people are familiar with really what comes out of the smokestack of a chip fab.</p><p><strong>Boakes:</strong> So one of the highest GWP gases, for instance, would be <a href="https://en.wikipedia.org/wiki/Sulfur_hexafluoride" rel="noopener noreferrer" target="_blank">sulfur hexafluoride</a>, so SF6. This has a GWP value of 25,200, meaning each kilogram released is equivalent to 25,200 kilograms of CO2. So it really is over 25,000 times more damaging to the climate than CO2. So this is extremely high. But there are also others, like NF3, that are over 1,000 times more damaging to the climate than CO2. However, they can be abated. So in these abatement systems, you can destroy them and they’re no longer being released.</p><p>There are also efforts going into replacing high GWP gases such as these that I’ve mentioned with alternatives which have a lower GWP value. However, this is going to take a lot of process development and a lot of effort to go into changing those process flows to adapt to these new alternatives. And this will then be a slow adoption into the high-volume fabs because, as we know, this industry is quite rigid to any changes that you suggest.
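</p><p>A consumption-based estimate of this kind can be sketched roughly as follows. This is a minimal illustration, not IMEC’s actual model: the utilization and abatement fractions are invented, the SF6 GWP of 25,200 is the figure quoted above, and the other GWP values are approximate:</p>

```python
# Rough sketch of a consumption-based direct-emission estimate for
# process gases: gas purchased, minus what the process itself consumes,
# minus what abatement destroys, weighted by global warming potential.
# All fractions below are illustrative assumptions.

GWP = {"SF6": 25_200, "NF3": 17_400, "CF4": 7_380}  # kg CO2-eq per kg of gas

def direct_emissions_co2eq(gas, kg_used, fraction_destroyed_in_process,
                           fraction_abated, abatement_efficiency):
    """kg CO2-eq released to the atmosphere for one process gas."""
    survives_process = kg_used * (1 - fraction_destroyed_in_process)
    escapes_abatement = survives_process * (1 - fraction_abated * abatement_efficiency)
    return escapes_abatement * GWP[gas]

# 1 kg of SF6, 20% consumed in the plasma, no abatement installed:
print(direct_emissions_co2eq("SF6", 1.0, 0.2, 0.0, 0.0))
# Same usage with a 95%-efficient abatement unit on the full exhaust stream:
print(direct_emissions_co2eq("SF6", 1.0, 0.2, 1.0, 0.95))
```

<p>The real IPCC guidance also includes terms this sketch omits, such as by-product gases formed in the chamber.</p><p>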
So yeah, it will be a slow adoption if there are any alternatives. And in the meantime, effective abatement can destroy quite a lot. But it really requires deploying those abatement systems on the high-impact process areas.</p><p><strong>Moore: </strong>As Moore’s Law continues, each step or manufacturing node might have a different carbon footprint. What were some of the big trends your research revealed regarding that?</p><p><strong>Boakes:</strong> So in our model, we’ve assumed constant fab operating conditions, and this means that we’ve assumed the same abatement systems, the same electrical carbon intensities, for all of the different technology nodes. So we see that there is a general increase in total emissions under these assumptions, and the total climate impact doubles as we evolve from the N28 to the A14 technology node. And this can be attributed to the increased process complexity as well as the increased number of process steps, as well as the different chemistries being used, different materials that are being embedded in the chips. This all contributes to it. So generally, there is an increase because of the process complexity that’s required to really reach those aggressive pitches in the more advanced technology nodes.</p><p><strong>Moore:</strong> I see. Okay. So as things are progressing, they’re also kind of getting worse in some ways. Is there anything—?</p><p><strong>Boakes:</strong> Yeah.</p><p><strong>Moore: </strong>Is this inevitable, or is there—?</p><p><strong>Boakes: </strong>[laughter] Yeah. If you make things more complicated, it will probably take more energy and more materials to do it.
Also, when you make things smaller, you need to change your processes and materials. For instance, with interconnect metals, we’ve really reached the physical limits sometimes, because things have gotten so small that the physical limits of traditional metals like copper or tungsten have been reached. And now they’re looking for new alternatives like ruthenium or platinum. And if it’s a platinum-group metal, of course it’s going to have a higher embedded impact. So when we hit those limits, physical limits or limits to the current technology, we need to change it in a way that makes it more complicated, more energy-intensive— again, the move to <a href="https://spectrum.ieee.org/high-na-euv" target="_self">EUV</a>. EUV is an extremely energy-intensive tool compared to DUV.</p><p>But an interesting point there on the EUV topic would be that it’s really important to keep this holistic view, because even though moving from a DUV tool to an EUV tool means a large jump in the power intensity of the tool, you’re able to reduce the total number of steps to achieve a certain deposition or etch. So you’re able to overall reduce your emissions, or you’re able to reduce the energy intensity of the process flow. So even though we make all these changes and we might think, “Oh, that’s a very powerful tool,” it could cut down on process steps in the holistic view. So it’s always good to keep a kind of life cycle perspective to be able to see, “Okay, if I implement this tool, it does have a higher power intensity, but I can reduce the number of steps by half to achieve the same result. So it’s overall better.” So it’s always good to keep that kind of holistic view when we’re doing any type of sustainability assessment.</p><p><strong>Moore: </strong>Oh, that’s interesting. That’s interesting.
So you also looked at what happens as the nodes get more advanced and processes get more complex. What did that do to water consumption?</p><p><strong>Boakes:</strong> So again, it’s the number of steps, in a similar sense. If you’re increasing your number of process steps, there would be an increase in the number of those wet clean steps as well, which are often the high-water-consumption steps. So if you have an increased number of those particular process steps, then you’re going to have a higher water consumption in the end. So it is just based on the number of steps and the complexity of the process as we advance into the more advanced technology nodes.</p><p><strong>Moore: </strong>Okay. So it sounds like complexity is kind of king in this field.</p><p><strong>Boakes: </strong>Yeah.</p><p><strong>Moore: </strong>What should the industry be focusing on most to achieve its carbon goals going forward?</p><p><strong>Boakes: </strong>Yeah. So I think to start off, you need to think of the largest contributors and prioritize those. So of course, if you’re looking at the total impact and we’re looking at a system that doesn’t have effective abatement, then of course, direct emissions would be the first thing that you want to focus on reducing, as they would be the largest contributors. However, once you start moving into a system which already has effective abatement, then your next objective would be to decarbonize your electricity production, go for a lower-carbon-intensity electricity provider, so you’re moving more towards green energy.</p><p>And at the same time, you would also want to try to target your high-impact value chain. So for the materials and energy that are coming into the fab, you need to look at the ones that are the most highly impacting and then try to find a provider that offers a kind of decarbonized version of the same material, or try to design a way where you don’t need that certain material.
Not that it necessarily has to be done in sequential order. Of course, you can do it all in parallel; that would be better. So it doesn’t have to be one, two, three, but the idea and the prioritizing comes from targeting the largest contributors. And that would be direct emissions, decarbonizing your electricity production, and then looking at your supply chain and looking into those high-impact materials.</p><p><strong>Moore:</strong> Okay. And as a researcher, I’m sure there’s data you would love to have that you probably don’t have. What could industry do better about providing that kind of data to make these models work?</p><p><strong>Boakes: </strong>So for a lot of our scope three, that upstream, cradle-to-fab impact, let’s call it, we’ve had to rely quite a lot on life cycle assessment literature or life cycle assessment databases, which are available through purchasing, or sometimes, if you’re lucky, you have a free database. And that’s also because my role in my research group is more about looking at that LCA and upstream materials and quantifying the environmental impact of that. So from my perspective, I really think that this industry needs to work on providing data through the supply chain which is standardized in a way that people can understand, which is product-specific so that we can really allocate embedded impact to a specific product and multiply that through then by our inventory, which we have data on. So for me, it’s really having a standardized way of communicating the sustainability impact of production, upstream production, throughout the supply chain. Not only tier one, but all the way up to the cradle, the beginning of the value chain. And I know it is evolving and it will be slow, and it does need a lot of cooperation. But I do think that that would be very, very useful for really making our work more realistic, more representative.
And then people can rely on it better when they start using our data in their product carbon footprints, for instance.</p><p><strong>Moore: </strong>Okay. And speaking of your work, can you tell me what <a href="https://netzero.imec-int.com/" rel="noopener noreferrer" target="_blank">imec.netzero</a> is and how that works?</p><p><strong>Boakes: </strong>Yeah. This is a web app that’s been developed in our program, the SSTS program at IMEC. And this web app is a way for people to interact with the model that we’ve been building, the LCA model. So it’s based on life cycle assessment, and it’s really what we’ve been talking about with this cradle-to-gate model of the IC-chip-manufacturing process. It tries to model a generic fab. So we don’t necessarily point to any specific fab or process flow from a certain company. But we try to make a very generic industry average that people can use to estimate and get a more realistic view on the modern IC chip. Because we noticed that, in literature and what’s available in LCA databases, the semiconductor data is extremely old, and we know that this industry moves very quickly. So there is a huge gap between what’s happening now, what is going into your phones and what’s going into the computers, and the LCA data that’s available to try to quantify that from a sustainability perspective. With imec.netzero, we have the benefit of being connected with the industry through our position at IMEC, and we have a view on those more advanced technology nodes.</p><p>So not only do we have models for the nodes that are being generated and produced today, but we also predict the future nodes. And we have models to predict what will happen in 5 years’ time, in 10 years’ time. So it’s a really powerful tool, and it’s available publicly. We have a public version, which has limited functionality in comparison to the program partner version.
So we work with our program partners who have access to a much more complicated and, yeah, deep way of using the web app, as well as the other work that we do in our program. And our program partners also contribute data to the model, and we’re constantly evolving the model to improve always. So that’s a bit of an overview.</p><p><strong>Moore: </strong>Cool. Cool. Thank you very much, Lizzie. I have been speaking to Lizzie Boakes, a life cycle analyst in the Sustainable Semiconductor Technologies and Systems Program at IMEC, the Belgium-based nanotech research organization. Thank you again, Lizzie. This has been fantastic.</p>]]></description><pubDate>Fri, 09 Feb 2024 16:44:14 +0000</pubDate><guid>https://spectrum.ieee.org/semiconductor-manufacturing-climate-footprint</guid><category>Type-podcast</category><category>Semiconductor-manufacturing</category><category>Fixing-the-future</category><dc:creator>Samuel K. Moore</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/46469653/origin.jpg"/></item><item><title>The Brain Implant That Sidesteps The Competition</title><link>https://spectrum.ieee.org/brain-implant-close-to-market</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=51160803&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/ca9f5468" width="100%"></iframe><p>
<strong>Eliza Strickland: </strong>Hi, I’m <a href="https://spectrum.ieee.org/u/eliza-strickland" target="_self">Eliza Strickland</a> for <em>IEEE Spectrum</em>’s <em>Fixing the Future</em> podcast. Before we start, I want to tell you that you can get the latest coverage from some of <em>Spectrum</em>’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to <a href="https://spectrum.ieee.org/newsletters/" target="_self">spectrum.ieee.org/newsletters</a> to subscribe. You’ve probably heard of <a href="https://neuralink.com/" rel="noopener noreferrer" target="_blank">Neuralink</a>, the buzzy neurotech company founded by Elon Musk that wants to put brain implants in humans this year. But you might not have heard of another company, <a href="https://synchron.com/" rel="noopener noreferrer" target="_blank">Synchron</a>, that’s way ahead of Neuralink. The company has already put 10 of its innovative brain implants into humans during its clinical trials, and it’s pushing ahead to regulatory approval of a commercial system. Synchron’s implant is a type of brain-computer interface, or BCI, that can allow severely paralyzed people to control communication software and other computer programs with their thoughts alone. <a href="https://profiles.mountsinai.org/thomas-j-oxley" rel="noopener noreferrer" target="_blank">Tom Oxley</a> is a practicing neurologist at Mount Sinai Hospital in New York City and the founder and CEO of Synchron. He joined us on <em>Fixing the Future</em> to tell us about the company’s technology and its progress. Tom, thank you so much for joining me on <em>Fixing the Future</em> today. So the enabling technology behind Synchron is something called the Stentrode. Can you explain to listeners how that works?
</p><p>
<strong>Tom Oxley: </strong>Yeah, so the concept of the Stentrode was that we can take an endovascular platform that’s been used in medicine for decades and build an electronics layer onto it. And I guess it addresses one of the challenges with implantable neurotechnology in the brain, which is that, well, firstly, it’s hard to get into the brain. And secondly, it’s hard to remain in the brain without having the brain launch a pretty sophisticated immune response at you. And the blood-brain barrier is a thing. And if you can stay on one side of that blood-brain barrier, then you do have a very predictable and contained immune response. That’s how tattoos work in the skin. The skin has an epithelial layer and the blood vessels have an endothelial layer, and they kind of behave the same way. So if you can convince the endothelial layer of the blood vessel to receive a package and not worry about it and just leave it be, then you’ve got a long-term solution for an electronics package that can use the natural highways to most regions within the brain.
</p><p>
<strong>Strickland: </strong>Right. So it’s called a Stentrode because it resembles a stent, right? It’s sort of like a mesh sleeve with electrodes embedded in it, and it’s inserted through the jugular. Is that correct?
</p><p>
<strong>Oxley:</strong> We actually called it a Stentrode because, in the early days, we were taking stents. And Nick Opie and Gil Rind and Steve as well were taking these stents that we basically took out of the rubbish bin and cleaned, and then by hand, were weaving electrodes onto the stent. So we just needed a name to call the devices that we were testing back in the early days. So Stentrode was a really organic term that we just started using within the group. And I think then in 2016 <em>Wired</em> ran a piece, calling it one of the new words. So we’re like, “Okay, this word seems to be sticking.” Yeah, it goes in the jugular vein. So in what we’re seeking to commercialize as the first product offering for our implantable BCI platform, we’re targeting a particular large blood vessel called the superior sagittal sinus. And yes, the entrance into the body is through the jugular vein to get there.
</p><p>
<strong>Strickland: </strong>Yeah, I’m curious about the early days. Can you tell me a little bit about how your team came up with this idea in the first place?
</p><p>
<strong>Oxley:</strong> The very early conceptualization of this was: I was going through medical school with my co-founder, <a href="https://profiles.stanford.edu/rahul-prakash-sharma" rel="noopener noreferrer" target="_blank">Rahul Sharma</a>, who’s a cardiologist. And he was very fixated on interventional cardiology, which is a very sexy field in medicine. And I was more obsessed with the brain. And it looked—and this was back around 2010—that intervention was going to become a thing in neurology. And it took until 2015 for a real breakthrough in neurointervention to emerge, which was for the treatment of stroke. And that was basically a stent going up into the brain to pull out a blood clot. But I was always less interested in the plumbing and more interested in how it could be that the electrical activity of the brain created not just health and disease but also wellness and consciousness. And that whole continuum of the brain and mind was why I went into medicine in the first place. But I thought the speed of technology growth in the interventional domain in medicine is incredible. Relative to the speed of expansion of other surgical domains, the interventional domain, now moving into robotics, is, I would say, the fastest-moving area in medicine. So I think I was excited about technology in neurointervention, but it was the electrophysiology of the brain that was so enticing. And the brain has remained this black box for a long period of time.
</p><p>
	When I started medicine, doing neurology was a joke to the other types of ambitious young medical people because, well, in neurology, you can diagnose everything, but you can’t treat anything. And now implantable neurotechnology is opening up access into the brain in a way which just wasn’t possible 10 or 15 years ago. So that was the early vision. The early vision was, can the blood vessels open up avenues to get to the brain to treat conditions that haven’t previously been treated? So that was the early conceptualization of the idea. And then I was bouncing this idea around in my head, and then I read about brain-computer interfaces, and I read about <a href="https://www.braingate.org/team/leigh-hochberg-m-d-ph-d/" rel="noopener noreferrer" target="_blank">Leigh Hochberg</a> and the <a href="https://www.braingate.org/" rel="noopener noreferrer" target="_blank">BrainGate</a> work. And then I thought, “Oh, well, maybe that’s the first application of functional neurointervention or electronics in neurointervention.” And the early funding came from US defense, from DARPA, but we spent four or five years in Melbourne, Australia, with Nick Opie hand-building these devices and then doing sheep experiments to prove that we could record brain activity in a way that was going to be meaningful from a signal-to-noise perspective, that we felt was going to be sufficient to drive a brain-computer interface for motor control.
</p><p>
<strong>Strickland: </strong>Right. So with the Stentrode, you’re recording electrical signals from the brain through the blood vessels. So I guess that’s at some remove. And the BrainGate Consortium that you referenced before, they’re one of many, many groups that have been implanting electrodes inside the brain tissue, where you can get up close to the neurons. So it feels like you have a very different approach. Have you ever doubted it along the way? Felt like, “Oh my gosh, the entire community of BCI is going in this other direction, and we’re going in this one”? Did it ever make you pause?
</p><p>
<strong>Oxley:</strong> I think clinical translation is very different to things that can be proven in an experimental setting. And so I think, yeah, there’s a data reduction that occurs if you stay on the surface of the brain, and particularly if you stay in a blood vessel that’s on the surface of the brain. But the things that are solved technically make clinical translation more of a reality. And so the way I think about it more is not, “Well, how does this compete with systems that have proven things out in an experimental domain versus what is required to achieve clinical translation and to solve a problem in a patient setting?” So they’re kind of different questions. So one is kind of getting obsessed with a technology race based upon technology-based metrics, and the other is, “Well, what is the clinical unmet need and what are particular ways that we can solve that?” And I’ll give an example of that, something that we’re learning now. So yeah, this first product is in a large blood vessel that only gives a constrained amount of access to the motor cortex. But there are reasons why we chose that.
</p><p>
	We know it’s safe. We know it can live in there. We know we can get there. We know we have a procedure that can do that. We know we have lots of people in the country that can do that procedure. And we understand roughly what the safety profile is. And we know that we can deliver enough data that can drive performance of the system. But what’s been interesting is there are advantages to using population-level <a href="https://en.wikipedia.org/wiki/Local_field_potential" rel="noopener noreferrer" target="_blank">LFP-type</a> brain recordings. And that is that they’re more stable. They’re quite robust. They’re easy to detect. They don’t need substantial training. And we have low power requirements, which means our power can go for a long time. And that really matters when you’re talking about helping people who are paralyzed or have motor impairment because you want there to be as little troubleshooting as possible. It has to be as easy to use as possible. It has to work immediately. You can’t spend weeks or months training. You can’t be troubleshooting. You can’t be having to press anything. It just should be working all the time. So these things have only become obvious to us most recently.
</p><p>
<strong>Strickland: </strong>So we’ve talked a little bit about hardware. I’m also curious about the software side of things. How has that evolved over the course of your research? The part of your system that looks at the electrical signals and translates them into some kind of meaningful action.
</p><p>
<strong>Oxley: </strong>Yeah. It’s been an awesome journey. I was visiting one of our patients just this week. And watching him go through the experience of trying out different features and having him explain to us— not all of our patients can talk. He can still talk, but he’s lost control of his hands, so he can’t use his iPhone anymore. And hearing what it feels like for him as we try out different levels of control, in this case with iPad use. And it’s interesting because it still feels very early, but this is not a science experiment. We’re trying to zero in and focus on features that we believe are going to work for everyone and be stable and that feel good in the use of the system. And you can’t really do that in the preclinical setting. You have to wait until you’re in the clinical setting to figure that out. And so it’s been interesting because what do we build? We could build any number of different iterations of control features that are useful, but we have to focus on particular control interaction models that are useful for the patient and which feel good for the patient and which we think can scale over a population. So it’s been a fascinating journey.
</p><p>
<strong>Strickland: </strong>Can you tell me a little bit about the people who have participated in your clinical trials so far and why they need this kind of assistive device?
</p><p>
<strong>Oxley: </strong>Yeah. So we’ve had a range of levels of disability. We’ve had people on the one end who have been completely locked in, and that’s from a range of different conditions. So locked-in syndrome is where you still may have some residual cranial nerve function, like eye movements or maybe some facial movements, but in whom you can’t move your upper or lower limbs, and often you can’t move your head. And then, on the other end of the spectrum, we’ve had some patients on the neurodegenerative side with ALS, in particular, where limb function has impaired their ability to utilize digital devices. And so really, the way I think about-- how we’re thinking about the problem is: the technology is for people who can’t use their hands to control personal digital devices. And why that matters is because they-- we’ve all become pretty dependent on digital devices for activities of daily living, and the things that matter from a clinically meaningful perspective are things like communication, texting, emailing, messaging, banking, shopping, healthcare access, environmental smart control, and then entertainment.
</p><p>
	And so even for the people who can still— we’ve got someone in our study who can still speak and who can actually still walk, but he can’t use a digital device. And he’s been telling us-- like you’d think, “Oh, well, what about Siri? What about Alexa?” And you realize that if you really remove the ability to press any button, it becomes very challenging to engage in even the technology that’s existing. Now, we still don’t know what the exact indication will be for our first application, but even in patients who can still talk, we’re finding that there are major gaps in their capacity to engage in digital devices that I believe BCI is going to solve. And it’s often very simple things. I’ll give you an example. If you try to answer the phone when Siri-- if you try to answer the phone with Siri, you can’t put it on speakerphone. So you can say, “Yes, Siri, answer the phone,” but then you can’t put on the speakerphone. So there are little things like that where you just need to hit a couple of buttons that make the difference to be able to give you that engagement.
</p><p>
<strong>Strickland:</strong> I’d like to hear about what the process has been like for these volunteers. Can you tell me about what the surgery was like and then how-- or if you had to calibrate the device to work with their particular brains?
</p><p>
<strong>Oxley:</strong> Yeah. So the surgery is in the cath lab in a hospital. It’s the same place you would go to have a stent or a pacemaker put in. So that involves: first, there are imaging studies to make sure that the brain is appropriate and that all the blood vessels leading up into the brain are appropriate. So we have our physicians identify a suitable patient and talk to the patient. And then, if they’re interested in the study, they join the study. And then we do brain imaging. The investigators make a determination that they can access that part of the brain. Then the procedure: you come in; it takes a few hours. You lie down; you have an X-ray above you. You’re using X-ray and dye inside the blood vessels to navigate to the right spot. We have a mechanism to make sure that you are in the exact spot you need to be. The Stentrode sort of opens up like a flower in that spot, and it’s got self-expanding capacity, so it stays put. And then the lead comes out of the skull through a natural blood vessel passage, and it gets plugged into an electronics package that sits on the chest under the skin. So the whole thing’s fully implanted. The patients have been resting for a day or so and then going home. And then, in the setting of this clinical study, we’re having our field clinical engineers go out to the home two to three times per week and practice with the system and with the new software versions that we keep releasing. And that’s how we’re building a product.
</p><p>
	By the time we get to the next stage of the clinical trial, the software is getting more and more automated. From a learning perspective, we have a philosophy that if there’s a substantial learning curve for this patient population, that’s not good. It’s not good for the patient. It’s not good for the caregiver. These patients who are suffering with severe paralysis or motor impairment may not have the capacity to train for weeks to months. So it needs to work straight away. And ideally, you don’t want it to be recalibrated every day. So we’ve had our system-- I mean, we’re going to publish all this, but we’ve been working and designing towards having the system working on day one, as soon as it’s turned on, with a level of functionality that immediately lets the user perform some of the critical activities of daily living, the tasks that I mentioned earlier. And then I think the vision is that we build a training program within the system that lets users build up to increasing levels of capability, but we’re much more focused on the lowest level of function that everyone can achieve and making it easy to do.
</p><p>
<strong>Strickland: </strong>For it to work right out of the box, how do you make that work? Is one person’s brain signals pretty much the same as another person’s?
</p><p>
<strong>Oxley: </strong>Yeah, so <a href="https://www.linkedin.com/in/petereliyoo/" rel="noopener noreferrer" target="_blank">Peter Yoo</a> is our superstar head of algorithms and neuroscience. He has pulled together this incredible team of neuroscientists and engineers. I think the team is about 10 people now. And these guys have been working around the clock over the last 12 months to build an automated decoder. And we’ve been talking about this internally recently as what we think is one of the biggest breakthroughs. We’ll publish it at the right time, but we’re really excited about this. We feel like we have built a decoder that does not need to be tuned individually at all and will just work out of the box, based upon what we’ve learned so far. And we expect that kind of design ethos to continue over time, but that’s going to be a critical part of the focus on making the system easy to use for our patients.
</p><p>
<strong>Strickland: </strong>When a user wants to click on something, what do they do? What’s the mental process that they go through?
</p><p>
<strong>Oxley:</strong> Yeah. So I’ve talked about the fact that we do population-level activation of motor cortical neurons. So what does your motor cortex do? Your motor cortex is about 10% of your brain, and you were born with it, and it was connected to all of these muscles in your body. And you learned how to walk. You learned how to run. My daughter just learned how to jump. She’s two and a little bit. And so you spend those early years of your life training your brain on how to utilize the motor cortex, but it’s connected to those certain physically tethered parts of your body. So one theory in BCI, which is what the kind of multi-unit decoding theory is, is that, “Let’s train the neurons to do a certain task.” And it’s often like training it to work within certain trajectories. I guess the way we think about it is, “Let’s not train it to do anything. Let’s activate the motor cortex in the way that the brain already knows how to activate it in really robust, stable ways at a population level.” So probably tens of thousands of neurons, maybe hundreds of thousands of neurons. And so how would you do that? Well, you would make the brain think about what it used to think about to make the body move. And so in people who have had injury or disease, they would have already lived a life where they have thought about pressing down their foot to press the brake pedal on the car, or kicking a ball, or squeezing their fist. We identify robust, strong motor intention contemplations, which we know are going to activate broad populations of neurons robustly.
</p><p>
<strong>Strickland:</strong> And so that gives them the ability to click, and I think there’s also something else they can do to scroll. Is that right?
</p><p>
<strong>Oxley: </strong>Yeah. So right now, we’re not yet at the point where we’ve got the cursor moving around the screen, but we have a range of— we have multi-select, scroll, click, click and hold, and some other things that are coming down the pipeline, which are pretty cool, but enough for the user to navigate their way around a screen, like an iOS device, and make selections on the screen. And so the way we’re thinking about that is converting that into a clinical metric. David Putrino at Mount Sinai has recently published a paper on what he’s called the digital motor output, or DMO. And so the conversion of the activity of those populations of neurons into these characterized outputs, we’re calling that a DMO. And the way I think about a DMO is that it is your ability to accurately select a desired item on a screen with a reasonable accuracy and latency. And so the way we’re thinking about this is: how well can you make selections in a way that’s clinically meaningful and which serves the completion of those tasks that you couldn’t do before?
</p><p>
<strong>Strickland: </strong>Are you aiming for eventually being able to control a cursor as it goes around the screen? Is that on the roadmap?
</p><p>
<strong>Oxley: </strong>That is on the roadmap. That’s where we are headed. And I mean, I think ultimately, we have to prove that it’s possible from inside a blood vessel. But when we do prove that, I’m excited, because there’s a history in medicine that minimally invasive solutions that don’t require open surgery tend to be the desired choice of patients. And so we’ve started this journey in a big blood vessel with a certain amount of access, and we’ve got a lot of other exciting areas that we’re going to go into that give us more and more access to more of the brain, and we just want to do it in a stepwise and safe fashion. But yeah, we are very excited that that’s the trajectory that we’re on, and we feel that we’ve got a safe, stepwise starting point.
</p><p>
<strong>Strickland: </strong>I think we’re just about out of time, so maybe just one last question. Where are you on the path towards FDA approval? What do you anticipate happening as next steps there?
</p><p>
<strong>Oxley:</strong> So we’ve just finished enrollment of our 10th patient in our feasibility studies: we had four patients in our first Australian study and now six patients in an early feasibility study. That will continue to run formally for another, I believe, six months or so. And we’ll be collecting all that data. And we’re having very healthy conversations with the FDA, with Heather Dean’s group in the FDA. And we’ll be discussing what the FDA needs to see to demonstrate both safety and efficacy towards a marketing approval for what we hope will be the first commercial implantable BCI system. But we’ve still got a way to go. And there’s a very healthy conversation happening right now about how to think about those outcomes that are meaningful for patients. So I would say over the next few years, we’re just moving our way through the stages of clinical studies. And hopefully, we’ll be opening up more and more sites across the country, and maybe globally, to enroll more people and hopefully make a difference in the lives of people with this condition, which really doesn’t have any treatment right now.
</p><p>
<strong>Strickland:</strong> Well, Tom, thank you so much for joining me. I really appreciate your time.
</p><p>
<strong>Oxley: </strong>Thank you so much, Eliza.
</p><p>
<strong>Strickland: </strong>That was Tom Oxley speaking to me about his company, Synchron, and its innovative brain-computer interface. If you want to learn more, we ran <a href="https://spectrum.ieee.org/synchron-bci" target="_self">an article about Synchron</a> in <em>IEEE Spectrum</em>’s January issue, and we’ve linked to it in the show notes. I’m Eliza Strickland, and I hope you’ll join us next time on <em>Fixing the Future</em>.
</p>]]></description><pubDate>Wed, 24 Jan 2024 10:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/brain-implant-close-to-market</guid><category>Bci</category><category>Synchron</category><category>Brain-computer-interface</category><category>Brain-implant</category><category>Fixing-the-future</category><category>Type-podcast</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/51160803/origin.webp"/></item><item><title>Can Electronics Become Compostable?</title><link>https://spectrum.ieee.org/boosting-electronics-sustainability</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=51037295&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/4fea4738" width="100%"></iframe><p>
<strong>Stephen Cass: </strong>Hello and welcome to <em>Fixing the Future,</em> an <em>IEEE Spectrum</em> podcast, where we look at concrete solutions to some big problems. I’m your host, Stephen Cass, a senior editor at <em>IEEE Spectrum</em>. And before we start, I just want to tell you that you can get the latest coverage from some of <em>Spectrum</em>’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to<a href="https://spectrum.ieee.org/newsletters/" target="_self"> spectrum.ieee.org/newsletters</a> to subscribe. Sustainable electronics is becoming an increasingly important topic around the world, and today we’re going to be talking with<a href="https://cris.vtt.fi/en/persons/liisa-hakola" rel="noopener noreferrer" target="_blank"> Liisa Hakola</a>, a senior scientist at<a href="https://www.vttresearch.com/en" rel="noopener noreferrer" target="_blank"> VTT in Finland</a>, about the European Union’s<a href="https://sustronics.eu/" rel="noopener noreferrer" target="_blank"> Sustronics program</a> aimed at this very topic. I’d like to welcome you to the show. Thank you so much, Liisa.
</p><p>
<strong>Liisa Hakola: </strong>Thank you. Nice to be here. Thank you for inviting me.
</p><p>
<strong>Cass:</strong> You’re very welcome. So as I said, sustainable electronics is becoming a bigger and bigger topic, but it seems to be one of those things that people talk about it more than actually doing anything about it. How is the EU Sustronics project going to help with that, and where does VTT fit into that?
</p><p>
<strong>Hakola:</strong> Thank you for the question. Indeed, the Sustronics project is a large initiative with 46 partners from 11 different European countries. And our main topic is finding ways to make electronics more sustainable throughout their life cycle. So not just focusing on one aspect but taking into account different opportunities that might arise from the selection of materials or manufacturing technologies or circular economy strategies that could be used. And VTT’s role is, first of all, to be the technical manager of the project, to ensure that the different partners work together and the different activities interact with each other in order to have a joint effort. But on top of that, VTT also brings some of its technologies, mainly from printed electronics, to the project.
</p><p>
<strong>Cass: </strong>Is it a case that you look for industry partners who then come in and work with you? They look around. They think you’re a good fit within the program. Or are you actively searching out people and going, “Oh, we think we have some technology that might help you out here”?
</p><p>
<strong>Hakola: </strong>Well, basically, I think it’s both ways. Of course, there are 46 partners already in the consortium, and over half of them are from industry, large enterprises and SMEs. So of course, they have specific needs, and we had already agreed during the proposal phase that VTT could offer certain technologies for them to start testing in their products, to see if that could help with decreasing their environmental footprint.
</p><p>
<strong>Cass:</strong> I guess the question is, why would anybody join the program, especially if you’re a manufacturer and so on? I mean, as a citizen of Earth, I think it’s a great idea, but we often hear about bottom-line issues and so on. What’s the incentive, if you are somebody who’s making electronics, to become one of these partners?
</p><p>
<strong>Hakola:</strong> Well, first of all, in the EU, we have this<a href="https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/european-green-deal_en" rel="noopener noreferrer" target="_blank"> Green Deal</a>. So the regulations and the legislation are developing in a direction where all of the companies in the EU have to take into account the sustainability aspects of the products they are developing and selling. So in order to achieve that, to be able to meet the requirements coming from the EU side, the companies need to develop new ways to maintain or improve the sustainability of their products. And this is one opportunity, because by collaborating with research institutes and universities, the companies get access to the kinds of technologies that have been in development there, and then they can try them out in their own products, and in that way get closer to meeting the sustainability requirements.
</p><p>
<strong>Cass:</strong> So we’re based in New York, in the United States, where it’s quite a different regulatory regime. But can you tell me, what is the enforcement mechanism for those sustainability regulations? What happens if you don’t do it? Because I can imagine some people just thinking, oh, it’s just a slap on the wrist, or it’s a fine. It’s just a cost of doing business. How are those rules really enforced?
</p><p>
<strong>Hakola:</strong> Well, of course, the EU is developing the regulations all the time, so there might be new enforcement mechanisms in the future. But the upcoming regulation on ecodesign for sustainable products demands that there be a digital product passport that would give information about the environmental impact of the product. And that kind of information would be available even for consumers. So actually, if the consumers are environmentally aware, they would start selecting the products that are environmentally friendly. So that’s, of course, quite a strong way to make companies work towards making more sustainable products. Because if consumers start selecting the sustainable products, then the non-sustainable ones will lose their market share.
</p><p>
<strong>Cass: </strong>So you talked a little bit earlier about the entire sort of lifecycle and sustainability. Along that life cycle, what are some of the biggest obstacles that currently exist towards making electronics more sustainable?
</p><p>
<strong>Hakola: </strong>Well, there are a couple of things that are quite dominant. So first of all, the raw materials that are used for making electronic products, they are mostly fossil-based, like different metals that are needed for making conductive structures. And also, the substrates where the metals are put, they are usually based on some plastics or plastic composite materials. And then we are actually talking about materials that are critical or rare or quite valuable. So it’s quite a challenge to find materials that could substitute the existing materials because we know that those are well-performing. So can we actually find some sustainable alternatives for them?
</p><p>
	And another thing is, of course, that the processes that are used for making circuit boards, for example, consume quite a lot of energy and raw materials. And that, of course, is not very good for the environment, because it’s not very energy or material efficient to manufacture in a way where a lot of material is wasted and processed several times. And of course, the whole electronics industry is quite complex and fragmented. There are a lot of layers, and it’s really difficult to get them all to work together and sort of transparently transfer data and information between the different players.
</p><p>
<strong>Cass: </strong>So I’d like to go into that—and maybe this is some of VTT’s special expertise—and talk a little about the work that you’ve done in materials specifically then.
</p><p>
<strong>Hakola: </strong>Yes. So VTT has focused quite a lot on replacing the fossil-based substrate materials with materials that are bio-based or renewable materials. And well, in Finland, the forest industry has typically been quite strong. So of course, we have studied how to use the cellulose-based materials like paper as a substrate for electronics. But there are also a lot of these biopolymer-based substrates that are-- basically, they look and feel like plastics, but they are from bio-based resources, so they are kind of renewable. And some of them are really easy to recycle, or some of them can even be compostable.
</p><p>
<strong>Cass: </strong>You said compostable there. I’m a little worried because I have these compostable plastic bags in my kitchen that just don’t last very long. And so when you say that, I’m a little concerned about putting that in my electronics. Or is it for very short-lived sort of disposable electronics, given some of them have very short life cycles?
</p><p>
<strong>Hakola: </strong>Yes. If we are talking about using printing as a manufacturing technology, then of course we are able to manufacture electronics that have a shorter lifetime, and they can even be used just one time. But if you produce a lot of electronics for single-use purposes, then actually you are creating a lot of new electronic waste. So you have to somehow tackle this issue of having single-use electronics but then being able to somehow recycle or dispose of those electronics. And in that case, if there is, for example, some diagnostic device where you measure something, then probably there would be a single-use part of that device that could be compostable. But then there would also be a reusable part. So after doing some diagnostic measurements, you change only one piece of the device, and that changeable part would then be compostable. Or it can also be that the recycling process is established, and it would be easily recyclable. But in those kinds of cases, you might also think about compostable solutions.
</p><p>
<strong>Cass: </strong>So I’d like to talk a little bit more about recycling there. Electronic waste is notoriously difficult to recycle. We have to separate out our electronic waste and we have to put it somewhere else. There are special pickup days, which I do dutifully. But then I sometimes think about when all this stuff is collected, how is anybody going to realistically recycle that 10-year-old broken projector or those collections of printers and so on? How do you make recycling work better?
</p><p>
<strong>Hakola:</strong> Well, yeah, that’s of course a matter of— first of all, you need to establish the recycling process, and there would have to be different collection bins where people could dispose of their electronics. But of course, I come from Finland. Actually, in the apartment building where I live, there are something like seven different recycling bins where I put the different types of waste. So adding an eighth bin there for electronics wouldn’t be that big of an issue. But if you think about recycling from scratch, then the electronic devices actually have to be designed in a way that makes them better for recycling. So we talk about circular design, for example. Already in the design phase of the products, you actually think about the recycling and then design the electronics in a way that it’s, for example, modular, so you can disassemble the different components easily and recover the materials. So actually, everything starts in the design phase.
</p><p>
<strong>Cass:</strong> Does this also help with things like serviceability or repairability? I find that sometimes it’s easier for me to<a href="https://spectrum.ieee.org/upcycle-a-vintage-lcd" target="_self"> repair something that is 40 years old</a>. I’ve brought these products back from the dead. But a product I buy today, it’s a blob. I have to use very specialized tools to get it open, if I can. I often have to send away for a special kit. Is part of this design process also looking at those issues?
</p><p>
<strong>Hakola: </strong>Yes, yes. That’s the same thing: already in the design phase, design the devices in a way that parts can be replaced later on, and people don’t have to buy the new model. I understand that, of course, for the electronics companies, it’s their business to sell new models all the time. But perhaps they can also find a suitable business model in repairing the devices. There could be some business opportunities there also.
</p><p>
<strong>Cass: </strong>So you talked a little bit about manufacturing processes and making those a little bit more sustainable. Can you expand on that?
</p><p>
<strong>Hakola: </strong>So what VTT has been developing for over 20 years is printed electronics. It means that we are using printing as a manufacturing technology for electronics. And compared to the current state-of-the-art electronics manufacturing, printing is an additive method. So we actually add materials only where they are needed, and we don’t strip them away later on and then try to figure out what to do with that kind of material. So that’s an opportunity for electronics manufacturing to decrease its material but also energy consumption. We have carried out some life cycle assessment analyses which have shown that printed electronics consume less energy during manufacturing than traditional manufacturing. So there is actually already an opportunity there. But besides this energy issue, the bio-based and renewable substrate materials are already compatible with printing technology. It’s actually quite challenging to bring those materials, for example paper as a substrate, into traditional electronics manufacturing. But for printing, it’s quite easy, because you know that you can print on paper, so using it to make electronics is an easier task.
</p><p>
<strong>Cass:</strong> So can you talk a little bit about some of the sort of very concrete examples you’ve developed with some of your partners?
</p><p>
<strong>Hakola: </strong>Yes. So if you think about the Sustronics program, there is actually a lot of development of these single-use diagnostic devices. So the goal is to develop the kinds of devices that people can actually use even at home to measure something from their saliva, or they can monitor how a wound is healing by having just a plaster-type wearable device on the skin. And other things we are developing are these other wearable devices that are not for single use but are for the sports and fitness sector, where you can monitor how you are doing when you are exercising and even measure your heart rate, and then the app you would have on your mobile phone would tell you, based on the measurement data, that, okay, you did well today, or something else.
</p><p>
	And one application area where VTT has been developing quite a lot of devices already, in earlier research programs, is solutions for intelligent packaging. In the packaging industry, there are a lot of needs in the logistics of packages to measure, for example, temperature, to make sure that the cold chain has not been broken and your products are not spoiling. So VTT has been developing electronics for that, like sensors attached to packages, electronic sensors that can transmit information to a mobile phone. But if you think about the packaging industry, the packages are recyclable. So if we are adding electronics there, then the sustainability of these kinds of smart tags, as we could call them, would be a really important aspect to consider. And there, these new kinds of materials, like using paper as a substrate for electronics, have a really important role.
</p><p>
<strong>Cass: </strong>And how long do you think it’ll be before we start seeing these in the marketplace as something that consumers can sort of see and feel for themselves?
</p><p>
<strong>Hakola: </strong>Well, actually, some of them are already on the marketplace. Of course, not in really huge volumes. But there are, for example, contract manufacturers for printed electronics that manufacture something that is used as a part of a device sold in the market. But of course, we can’t print a mobile phone with these kinds of technologies, at least not yet. So it depends. Some of them are already there. For some of them, it might take three to five years, and some even longer. But let’s say during the next decade, there will certainly be product announcements.
</p><p>
<strong>Cass:</strong> And so you mentioned manufacturers. Where are these manufacturers located? Are they local manufacturers, or is this something being integrated into the global supply chain, in those great manufacturing centers in China, for example?
</p><p>
<strong>Hakola: </strong>Yeah. Well, of course, the printed-electronics contract manufacturers are not really large companies yet. They are still at an early phase, and they are located all around the world. Probably quite many of them are in Europe, because in Europe we have been investigating printed electronics quite a lot. But there is no reason why they couldn’t be part of the global supply chains. That said, the EU wants to move back to European supply chains, to maintain the local strategic availability of key technologies. So I think in the EU there will probably be quite strong support in the future for bringing more manufacturing back to Europe, or at least for establishing new manufacturing units in Europe.
</p><p>
<strong>Cass: </strong>So if you could wave a magic wand and solve one problem right now that’s on your desk, what would that be?
</p><p>
<strong>Hakola: </strong>Ah. Well, probably I would make the products more repairable or reusable. I’ve personally had some issues with devices recently, and it has been a bit annoying that there is no repair option. So I’ve been forced to buy new devices, although I have not wanted to do so. So probably I would change the business a bit so that repair would always be an option, unless you have something that is, like, 50 years old. Perhaps then it would be an issue. But even for a 5-year-old device, it would be nice to have a repair option. So I guess I would develop designs for electronics such that they really can be repaired or reused.
</p><p>
<strong>Cass: </strong>You said Finland has this history coming out of the cellulose industry. Can you talk a little bit more about that point, about how Finland’s experience with cellulose and paper fed into this program?
</p><p>
<strong>Hakola: </strong>Yeah. The background is that Finland has a long history of paper and forest technologies. And in the first printed-electronics projects that were initiated in Finland more than 20 years ago, the role of the Finnish paper companies was really strong. So at least in Finland, the initiative to investigate printed electronics involved quite a lot of these forest-industry companies. And that’s how we at VTT also got involved with using cellulose-based materials and paper as substrates for electronics. And if you think about sustainable electronics, paper was there first, and only later came the other alternatives, like biopolymers. So I guess in the early stage, the paper industry was actually looking for new business opportunities. And they thought these could be found in printed electronics, because printing on paper is something that is being done all the time. So that’s how I think the thing started, at least in Finland.
</p><p>
<strong>Cass:</strong> So this is a fascinating topic, which we could talk about all day, but I’m afraid we have to leave it there. Today we were talking with Liisa Hakola from VTT about sustainable electronics. It was so lovely to have you on the show.
</p><p>
<strong>Hakola:</strong> Thank you. It was lovely being here.
</p><p>
<strong>Cass: </strong>And for <em>IEEE Spectrum</em>, I’m Stephen Cass, and I hope you join us next time on <em>Fixing the Future</em>.
</p>]]></description><pubDate>Wed, 10 Jan 2024 10:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/boosting-electronics-sustainability</guid><category>Fixing-the-future</category><category>Green-electronics</category><category>Sustainability</category><category>Sustainable-electronics</category><category>Type-podcast</category><dc:creator>Stephen Cass</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/51037295/origin.webp"/></item><item><title>Stop Trusting Your Cloud Provider</title><link>https://spectrum.ieee.org/plan-for-better-data-security</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=50757570&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/a554111b" width="100%"></iframe><p>
<strong>Stephen Cass: </strong>Hello and welcome to <em>Fixing the Future</em>, an <em>IEEE Spectrum</em> podcast where we look at concrete solutions to some tough problems. I’m your host <a href="https://spectrum.ieee.org/u/stephen-cass" target="_self">Stephen Cass</a>, a senior editor at <em>Spectrum</em>. And before we start, I just want to tell you that you can get the latest coverage from some of Spectrum’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to <a href="https://spectrum.ieee.org/newsletters/" target="_self">spectrum.ieee.org/newsletters</a> to subscribe.
</p><p>
	The advent of cloud computing meant a wholesale migration of data and software to remote data centers. This concentration has proven to be a tempting target for corporations and criminals alike, whether it’s for reselling customer intelligence or stealing credit cards. There’s a constant stream now of stories of controversial items creeping into terms of service or data breaches leaving millions of customers exposed. In the December issue of <em>Spectrum</em>, data security experts <a href="https://www.schneier.com/" rel="noopener noreferrer" target="_blank">Bruce Schneier</a> and <a href="https://raghavan.usc.edu/" rel="noopener noreferrer" target="_blank">Barath Raghavan</a> present a bold new plan for <a href="https://spectrum.ieee.org/data-privacy" target="_self">preserving online privacy and security</a>. Here to talk about the plan is Barath Raghavan, a member of the Computer Science Faculty at the <a href="https://www.usc.edu/" rel="noopener noreferrer" target="_blank">University of Southern California</a>. Barath, welcome to the show.
</p><p>
<strong>Barath Raghavan: </strong>Great to be chatting with you.
</p><p>
<strong>Cass: </strong>I alluded to this in the introduction, but in your article, you write that cloud providers should be considered potential threats, whether due to malice, negligence, or greed, which is a bit worrying given they have all our data. And so can you elaborate on that?
</p><p>
<strong>Raghavan:</strong> Yeah. So we’ve been seeing this over the course of the last 15 years, as the cloud became the norm for how we do everything. We communicate, we store our data, and we get things done both in personal contexts and in work contexts. The problem is the cloud is just somebody else’s computer. That’s all the cloud is. And we have to remember that. And as soon as it’s somebody else’s computer, that means all our data depends on whether they’re actually doing their job to keep it secure. It’s no longer on us to keep it secure. We’re delegating that to the cloud and the cloud providers. And there, we’ve seen, over and over again, they either don’t invest in security because they figure, “Well, we can deal with the fallout from a data breach later,” or they see the value in mining and selling the data of their customers, and so they go down that road, or we run into these problems where we are combining so many different cloud providers and cloud services that we just lose track of how all of those things are being integrated and then where our data ends up.
</p><p>
<strong>Cass: </strong>You discussed three types of data: data in motion, data at rest, and data in use. Can you unpack those terms a little?
</p><p>
<strong>Raghavan: </strong>Sure. Yeah. So these are relatively standard terms, but we wanted to sort of look at each of those dimensions because it’s useful, and the way we secure them is a little bit different. So data in motion is the way we communicate over the internet, or specifically with cloud services over the internet. So this call right now over a video conferencing platform, this is an example of data in motion. Our data is in real time being sent from my computer to some cloud server and then over to you and then back and forth. There’s data at rest, which is the data that we’ve stored. Right? It could be corporate documents. It could be our email. It could be our photos and videos. Those are being stored both locally, usually, but also backed up or primarily stored in some cloud server. And then finally, we’ve got data in use. Often, we don’t just want to store something in the cloud, but we want to do data processing on it. This might be big data analytics that a company is doing. It might be some sort of photo sharing and analysis of which friends are present in a photo when you’re sharing it on social media. All of those are examples of processing being done on the cloud and on the cloud provider’s servers. So that’s data in use.
</p><p>
<strong>Cass:</strong> The heart of your proposal is something called data decoupling. So can you say what that is in general, and then maybe we can get into some specific examples?
</p><p>
<strong>Raghavan:</strong> Sure. Yeah. So the basic idea here is that we want to separate the knowledge that a cloud provider has so that they don’t see the entirety of what’s going on. And the reason is because of the malice, negligence, or greed. The risks have become so large with cloud providers that they see everything, they control everything about our data now. And it’s not even in their interests often to be in the hot seat having that responsibility. And so what we want to do is split up that role into multiple different roles. One company does one piece of it, another company does another piece. They have their own sort of security teams. They’ve got their own architecture. And so the idea is by dividing up the work and making it seamless to the end user so that it’s not harder to use, we get some security benefits. So an example of this is when we’re having this call right now, the video conferencing server knows everything about who we are, where we’re calling from, what we’re saying, and it doesn’t need any of that to do its job. And so we can split up those different pieces so that one server can see that I’m making a call to somebody, but it doesn’t know who it’s going to. Another server run by a different provider can see that somebody is making a call, but it doesn’t know who is making that call or where it’s going to. And so by splitting that into two different places, neither piece of information is super sensitive. And that’s an example of where we split the identity from the data. And then there’s lots of different forms of this, whether we’re talking data in motion or one of the others.
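The role split Raghavan describes here can be caricatured in a few lines of Python (a toy sketch with hypothetical names, not any real provider’s design): one server maps identities to opaque tokens, while a second forwards content keyed only by those tokens, so neither alone can link who to what.

```python
import secrets

class IdentityServer:
    """Knows WHO is in a call, but never sees call content."""
    def __init__(self):
        self.tokens = {}  # token -> user; identity lives only here

    def issue_token(self, user: str) -> str:
        token = secrets.token_hex(8)  # opaque handle, reveals nothing about the user
        self.tokens[token] = user
        return token

class RelayServer:
    """Forwards call content keyed by opaque token; never sees an identity."""
    def __init__(self):
        self.log = []  # everything this operator could ever leak

    def forward(self, token: str, payload: bytes) -> bytes:
        self.log.append((token, payload))
        return payload

identity = IdentityServer()
relay = RelayServer()
tok = identity.issue_token("alice")
relay.forward(tok, b"hello")
# identity knows alice<->token but no content; relay knows token<->content but no name
```

The point is simply that neither operator’s logs, taken alone, connect a user to a payload; as Raghavan notes, the split only helps if the two operators don’t collude.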
</p><p>
<strong>Cass:</strong> So that was a great example there. We’re talking about Zoom calls, which again in the article-- or actually, all video conferencing calls. I shouldn’t just single out Zoom there. But it’s like, imagine if you had gone back 15 years ago and said, “For every important meeting your company is going to have, we’re going to have, say, a stenographer from another company sitting in on every single conversation, and you’re maybe not going to know what they’re going to do with those records.” But can you give another example? Say, decoupled web browsing, which was another scenario you talked through in the article?
</p><p>
<strong>Raghavan: </strong>Yeah. So decoupled web browsing is actually becoming more common now with a few different commercial services, but it’s a relatively new thing. Apple released this thing they call iCloud Private Relay, which is an example of that. And the basic idea is-- some people are familiar with things like<a href="https://en.wikipedia.org/wiki/Virtual_private_network" rel="noopener noreferrer" target="_blank"> VPNs</a>. Right? So there are various VPN apps. They sell themselves as providing you privacy. But really what they’re doing is they’re saying, when you’re browsing the web, you send all your traffic to that VPN company, and then that VPN company makes the requests on your behalf to the various websites. But that means that they’re sitting in between, seeing everything you’re doing going to the web and coming back from the web. So they actually know more than some random website. The idea with this sort of decoupled web browsing is that there are two hops that you go through. So you go through a first hop, which just knows who you are. They know that you’re trying to get to the web, but they don’t know what you’re trying to access. And then there’s a second hop, which knows that some user somewhere, but they don’t know who, is trying to get to some website. And so neither party knows the full thing. And the way that you design this is that they’re not colluding with each other. They’re not trying to put that data together, because they’re trying to make the service so that if they get breached, they’re not losing their customers’ data. They’re not revealing private information of their customers. And so the companies are incentivized to keep each other at arm’s length.
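The two-hop idea can be sketched in miniature (illustrative names, and a toy XOR function standing in for real encryption; deployed relays use standardized encrypted-proxy protocols): the client seals the destination so that only the second hop can read it, while the first hop sees only who is connecting.

```python
def xor_seal(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real public-key encryption to the second hop.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

SECOND_HOP_KEY = b"toy-key"   # only the second hop can open sealed requests
seen_by_first_hop, seen_by_second_hop = [], []

def second_hop(sealed_request: bytes) -> str:
    # Sees the destination, but never learns which client asked for it.
    destination = xor_seal(sealed_request, SECOND_HOP_KEY).decode()
    seen_by_second_hop.append(destination)
    return f"response from {destination}"

def first_hop(client_ip: str, sealed_request: bytes) -> str:
    # Sees who is connecting, but the destination stays sealed.
    seen_by_first_hop.append(client_ip)
    return second_hop(sealed_request)

sealed = xor_seal(b"example.com", SECOND_HOP_KEY)
reply = first_hop("203.0.113.7", sealed)
```

After the round trip, the first hop’s log holds only the client address and the second hop’s log only the destination, which is the “neither party knows the full thing” property Raghavan describes.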
</p><p>
<strong>Cass:</strong> So this sounds a little bit like the <a href="https://www.torproject.org/" rel="noopener noreferrer" target="_blank">Tor web browser</a>, which I think some listeners will be familiar with. Is it kind of based on that technology, or are you going beyond that model?
</p><p>
<strong>Raghavan:</strong> Yeah. So data-in-motion security and this kind of decoupling is something that Tor is using. And it really goes back to some seminal ideas from <a href="https://chaum.com/" rel="noopener noreferrer" target="_blank">David Chaum</a>, who’s a cryptographer who developed these ideas back in the 1980s. And so a lot of these ideas come from his research, but they had never become practical until the last few years. And so really, the reason that we started writing about this is because just in the last two or three years, this stuff has become practical, because the network protocols that make this possible, so that it’s fast and convenient, have been developed. On the data-in-use side, there is support in processors now to do this both locally and in the cloud. And there are some new technologies that have been developed, sort of open standards for data at rest, to make this possible as well. So it’s really the confluence of these things and the fact that ransomware attacks have skyrocketed, breaches have skyrocketed, so there’s a need on the other side as well.
</p><p>
<strong>Cass: </strong>So I just want to go through one last example and maybe talk about some of these implications. But credit card use is another one you step through in your article. And that seems to be like, well, how can I possibly-- I’m giving a credit card, and at some point, money is coming from A to B. How am I really kind of wrapping that up in a decoupled way?
</p><p>
<strong>Raghavan: </strong>Yeah. So actually, that was Chaum’s original or one of his original examples back in his research in the ‘80s. He was one of the pioneers of digital currencies, but in the sort of pre-cryptocurrency era. And he was trying to understand how could a bank enable a transaction without the bank basically having to know every single bit. Right? So he was trying to make basically digital cash, something which provides you the privacy that buying something from somebody with cash provides, but doing it with the bank in the middle brokering that transaction. And so there’s a cryptographic protocol he developed called blind signatures that enables that.
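Chaum’s blind-signature trick can be worked through with textbook RSA and deliberately tiny, insecure numbers (purely illustrative): the user multiplies the message by a blinding factor before the bank signs it, then divides the factor back out, leaving a valid signature on a message the bank never saw.

```python
# Textbook-RSA blind signature with toy parameters (insecure; illustration only).
p, q = 61, 53
n = p * q                           # public modulus, 3233
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # bank's private exponent

m = 42    # message the user wants signed (think: a digital coin's serial number)
r = 99    # random blinding factor, coprime to n, known only to the user

blinded = (m * pow(r, e, n)) % n        # user blinds: the bank sees only this
blind_sig = pow(blinded, d, n)          # bank signs without learning m
sig = (blind_sig * pow(r, -1, n)) % n   # user unblinds, recovering m^d mod n

assert pow(sig, e, n) == m              # valid signature on m, never seen by the bank
```

Unblinding works because (m·r^e)^d = m^d·r mod n, so multiplying by r⁻¹ leaves exactly the ordinary RSA signature m^d.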
</p><p>
<strong>Cass: </strong>For some of this data decoupling, you talk about new intermediaries. And so where do they come from, and who pays for them?
</p><p>
<strong>Raghavan: </strong>Yeah. So the new intermediaries are really the same intermediaries we’ve got. It’s just that you now have multiple different companies collaborating to provide the service. And this too is not something that’s totally new. As we mentioned in the article, there are only two tricks in all of computing: abstraction and indirection. So you try to abstract away the details of something so that you don’t see the mess behind the scenes. Right? So cloud services look clean and simple to us, but there’s actually a huge mess of data centers, all these different companies providing that service. And then indirection is basically you put something in between two different things, and it acts as a broker between them. Right? So all the ride-sharing apps are basically a broker between drivers and riders, and they’ve stuck themselves in between. And so we already have that in the cloud. The cloud is abstracting away the details of the actual computers that are out there, and it’s providing layer after layer of indirection to choose between which servers and which services you’re using. So what we’re saying is: architect this decoupling into all the cloud services that we’ve got. An example would be the case of <a href="https://support.apple.com/en-us/102602" rel="noopener noreferrer" target="_blank">Apple’s Private Relay</a>, where they’re going through two hops. They just partner with three existing CDN providers. So <a href="https://www.fastly.com/" rel="noopener noreferrer" target="_blank">Fastly</a>,<a href="https://www.cloudflare.com/" rel="noopener noreferrer" target="_blank"> Cloudflare</a>, and <a href="https://www.akamai.com/" rel="noopener noreferrer" target="_blank">Akamai</a> provide that second-hop service. They already have global content delivery networks that are providing similar types of service.
Now they just add this extra feature, and now they are the second hop for Apple’s users.
</p><p>
<strong>Cass: </strong>You also write that this gives people the ability to control their own data. It’s my data. I can say who has it. But users are notorious for just not caring about anything other than the task at hand, and they just don’t want to get involved in this. How important are user awareness and education to data decoupling, or is it something that can really happen behind the scenes?
</p><p>
<strong>Raghavan: </strong>The aim is that it should happen behind the scenes. And we’ve seen over the years that if security and privacy have to be something that ordinary users need to think about, we’ve already lost. It’s not going to happen. And that’s because it’s not on the ordinary users to make this work. There are relatively complex things that need to happen in the backend that we know how to do. The other thing is that-- one of the things we talked about in the piece is that security and privacy have really collapsed into one thing. In most contexts now, the security of a CEO’s email is provided by the same cloud provider and the same security knobs as an ordinary user’s webmail. It’s the same service. It’s just being sold on one side to businesses and on the other side to consumers. Right? But it’s the same thing underneath, and the same servers are doing the same work. And so really where I think decoupling can start is with corporate customers, where, like you pointed out, if we were told 15 years ago that every important company meeting was going to happen over a third party’s communication infrastructure where they see and hear everything, people might have been a little bit reticent to do that, but now we just think it’s normal. And so that’s where we want to say, “Hey, you should demand that your video conferencing service provides you this sort of decoupled architecture, where even if they’re breached, even if one of their employees goes rogue, they can’t see what you’re saying, and they don’t know who’s talking to whom, because they don’t need to know.”
</p><p>
<strong>Cass: </strong>So I want to just go back a little bit and poke into that question of security and privacy. So sometimes when you hear these words, they’re rolled off and they’re almost synonymous. Security and privacy is one thing. But in the past, there has been a tension between them in that maybe in order for us to secure the system, we have to be able to see what you’re doing, and so you don’t get any privacy. So can you talk a little bit about that historical tension and how data decoupling does help resolve it?
</p><p>
<strong>Raghavan: </strong>Yeah. So the historical tension, there’s sort of two threads of it. I mean, security as a word is very broad. So people can be talking about national security or computer security or whatever it might be. In this context, I’m just going to be talking about computer security. I often like to think of it as the difference between security and privacy is the protagonist of the story. And the protagonist of the story, if it’s an ordinary user who is trying to keep their personal files safe, then we call that privacy. And they’re trying to keep it safe from a company or from a government snooping or whoever it might-- or just other people who they don’t want to have access. In the corporate environment, if the company is the protagonist, then we call it enterprise security. Right? And that’s the way that we phrase it always. But like I mentioned, these two have collapsed because of the cloud, because both ordinary users and companies are using the same cloud companies, same cloud platforms. But like you pointed out, there’s this tension where sometimes you feel like, “Well, we need to know what’s going on to be able to secure things better.” And really what it comes down to is, who needs to know? Right? We’re in this weird place where what we need to do is push that knowledge to the edge. The edge in the sense of some intermediary cloud provider that is providing sort of the bits back and forth between us in this call, they don’t really need to know anything. Who needs to know who’s allowed to be in this call are you and me. And so we need to be given the tools to make those kinds of decisions, and it needs to be happening further to the edge rather than somewhere deep in the cloud, potentially at a provider we don’t even know exists that is doing the work on behalf of the company we really are paying the money to. Because usually, these things are nested in many layers.
</p><p>
<strong>Cass: </strong>So you write that cloud providers are unlikely to adopt data decoupling on their own, and some regulation will likely be needed. How do you think you can convince regulators to get involved?
</p><p>
<strong>Raghavan: </strong>They’re starting to already in certain ways. This aligns with some of the pushes toward open protocols and open standards. Right? So the EU has been a little bit further ahead on this, but there’s movement in the US as well, where there’s a recognition that you don’t want companies to lock their users in. And decoupling actually aligns really well with the anti-lock-in policies. Because if you make sure that users have a choice, now they can send their traffic this way or they can send their traffic the other way. They can store their data in one place or store their data in another place. As soon as people have choices, the system has to have this indirection. It has to have the ability to let somebody choose. And then once you have that, you have a standardized mechanism where you can say, “Well, yeah, maybe I want this photo app to be able to help me do analysis of my vacation photos or my corporate documents,” or whatever it might be. But I want to store the data with this other provider because I don’t want to get locked into this one company. And as soon as you have that, then you can get this data-at-rest security, because then you can selectively and temporarily grant access to the data to an analytics platform. And then you can say, “Well, actually, now I’m done with that. I don’t want to give them any more access.” Right? And so the policies against lock-in will help us move to this decoupled architecture.
</p><p>
<strong>Cass: </strong>So I just want to talk about some of those technical developments that have made this possible. And one of the things you’re talking about is this idea of these sort of trusted computing enclaves. Can you explain a little bit of what these are and how they help us out here?
</p><p>
<strong>Raghavan:</strong> Yeah. So for about the last 10 years, processor manufacturers, so this is <a href="https://www.intel.com/content/www/us/en/homepage.html" rel="noopener noreferrer" target="_blank">Intel</a> and <a href="https://www.arm.com/" rel="noopener noreferrer" target="_blank">ARM</a>, etc., have all added support for what they call secure enclaves or <a href="https://en.wikipedia.org/wiki/Trusted_execution_environment" rel="noopener noreferrer" target="_blank">trusted execution environments</a> inside the CPU. You could think of this as a secure zone that is inside your CPU. And it’s not just personal CPUs, but also all the cloud-server CPUs that are out there now. What this allows you to do is run some piece of code on some data in a way that’s encrypted, so that even the owner of that server doesn’t know what’s going on inside of that secure enclave. And so the idea is that, let’s say you have your corporate data on<a href="https://aws.amazon.com/" rel="noopener noreferrer" target="_blank"> AWS</a>, and you don’t want Amazon to be able to see that data or what processing you’re doing on it. You can run it inside a secure enclave, and then they can’t see it, but you still get your compute done. And so it separates who owns and runs the server from who you’re trusting to make sure that the code is running properly, that it’s the right code that’s running on your data, and that it’s kept safe. You’re trusting the processor vendor. And so as long as the processor vendor and the cloud provider aren’t colluding with each other, you get this security property of decoupled compute. So this is the data-in-use security that we talk about. And so all the big cloud providers now have support for this. Doing this right is tricky. It takes a lot of work. The processor companies have been developing it, getting hacked, fixing it. It’s the usual loop. Right?
There’s always new vulnerabilities that’ll be found, but they’re actually pretty good now.
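The trust split an enclave provides can be mimicked in miniature (toy code with a toy cipher, not a real TEE; real enclaves enforce this in hardware and add remote attestation): the host owns the machine but only ever handles ciphertext, while the "enclave" holds the key and computes on the plaintext.

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for the enclave's hardware-backed encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class Enclave:
    """Holds the key and computes on plaintext; the host sees neither."""
    def __init__(self, key: bytes):
        self._key = key

    def run(self, ciphertext: bytes) -> bytes:
        plaintext = xor_crypt(ciphertext, self._key)
        result = plaintext.upper()           # the confidential computation
        return xor_crypt(result, self._key)  # the result leaves encrypted, too

class CloudHost:
    """Owns and runs the machine, but only ever touches ciphertext."""
    def __init__(self, enclave: Enclave):
        self.enclave = enclave
        self.observed = []

    def submit(self, ciphertext: bytes) -> bytes:
        self.observed.append(ciphertext)     # all the host operator can log
        return self.enclave.run(ciphertext)

key = b"tenant-secret"                       # known to the data owner, not the host
host = CloudHost(Enclave(key))
encrypted_result = host.submit(xor_crypt(b"payroll data", key))
# Only the data owner, holding the key, can decrypt the answer:
# xor_crypt(encrypted_result, key) == b"PAYROLL DATA"
```

This mirrors the separation Raghavan describes: the party that owns the server (the host) is decoupled from the party trusted with the computation (the enclave, i.e., the processor vendor).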
</p><p>
<strong>Cass: </strong>So in the security community, you’ve been circulating these ideas for a while. What has the response been?
</p><p>
<strong>Raghavan: </strong>It’s been a mix of a few things. So generally, this is the direction that we’re seeing movement anyway. So this is aligned with a lot of the efforts that people have been making. Right? People have been doing this in the cloud secure-compute context for the last few years. There have been people in the networking community doing data-in-motion security. What we’re trying to argue is that we need to do it more broadly. We need to build it into more types of services rather than just niche use cases. For web browsing, data decoupling is nice, but it’s not the most pressing use case, because ultimately, people are purchasing things over those connections. Even if you have decoupled communications, that website still knows who you are, because you just bought something. Right? So there are those kinds of things where we need a little bit more of a holistic perspective and build this into everything. So that’s really what we’re arguing for. And the one place, and you raised this earlier, where people ask the question is, who’s going to pay for it? Because you do have to build slightly new systems. You do need to sometimes route traffic in slightly different ways. And there are sometimes minor overheads associated with that. This is partly where we can look at some of the costs that we’re bearing, things like the cost of ransomware, the cost of different types of data breaches, where if the providers just didn’t have the data in the first place, we wouldn’t have had that cost. And so the way that we like to think about it is, by decoupling things properly, it’s not that we are going to prevent a breach from happening, but we’re going to make the breach not as damaging, because the data wasn’t there in the first place.
</p><p>
<strong>Cass:</strong> So finally, is there any question you think I should ask you which I haven’t asked you?
</p><p>
<strong>Raghavan: </strong>Yeah. Nothing specifically comes to mind.
</p><p>
<strong>Cass: </strong>Well, this is a fascinating topic, and we could talk about this, I think, at length, but I’m afraid we have to wrap it up there. So thank you very much for coming on the show. That was really fascinating.
</p><p>
<strong>Raghavan: </strong>Yeah. Thanks a lot for having me.
</p><p><strong>Cass: </strong>So today, we were talking with Barath Raghavan about data decoupling and how it might protect our online privacy and security. I’m Stephen Cass, and I hope you’ll join us next time on <em>Fixing the Future</em>.</p>
<img src="https://spectrum.ieee.org/media-library/image.webp?id=50571332&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/c514a1c1" width="100%"></iframe><p>
<strong>Glenn Zorpette: </strong>Hello and welcome to <em>Fixing the Future</em>, an <em>IEEE Spectrum </em>podcast where we look at concrete solutions to some big problems. I’m your host, <a href="https://spectrum.ieee.org/u/glenn-zorpette" target="_self">Glenn Zorpette</a>, Editorial Director at <em>Spectrum</em>. And before we start, I just want to tell you that you can get the latest coverage from some of <em>Spectrum</em>’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to <a href="https://spectrum.ieee.org/newsletters/" target="_self">spectrum.ieee.org/newsletters</a> to subscribe.
</p><p>
	Electronic quartz watches arrived on the scene around 1970, and even today, they have advantages over their smartwatch brethren with batteries that last years, not days. But the motor that drives the hour and minute hands of the watch hasn’t really changed since then. Now, French company<a href="https://www.silmach.com/en" rel="noopener noreferrer" target="_blank"> SilMach</a> is using a new wristwatch to demonstrate its advanced silicon MEMS technology with a new watch movement that’s so efficient you might only need to change the battery about once a decade. Here to talk about that watch and the technology behind it is SilMach’s co-CEO and Chief Sales Officer, <a href="https://www.linkedin.com/in/pierre-fran%C3%A7ois-louvigne-0014b3108/" rel="noopener noreferrer" target="_blank">Pierre-Francois Louvigne</a>, and also <a href="https://www.linkedin.com/in/jean-baptiste-carnet-cfa-0b488793/" rel="noopener noreferrer" target="_blank">Jean-Baptiste Carnet</a>, the co-CEO and Chief Financial Officer. Pierre Francois and Jean-Baptiste, welcome to the show.
</p><p>
<strong>Pierre-Francois Louvigne: </strong>Welcome.
</p><p>
<strong>Jean-Baptiste Carnet: </strong>Thank you for having us.
</p><p>
<strong>Louvigne: </strong>Thank you.
</p><p>
<strong>Zorpette: </strong>Glad to have you here. So my first question, the question that popped into my mind when I first read about your remarkable new watch motor is, why make a tiny electric motor now, at this time, for an analog watch? Aren’t smartwatches taking over the wristwatch market now, the Apple Watch, and so on? Aren’t those the thing everyone seems to be buying?
</p><p>
<strong>Louvigne: </strong>Yeah. Thank you for inviting us to this talk. Thank you very much. What we can say is that in the quartz watch, there has been this technology, named the <a href="https://en.wikipedia.org/wiki/Lavet-type_stepping_motor" rel="noopener noreferrer" target="_blank">Lavet motor</a>, for more than 50 years. And if you open a classic quartz watch, you will see that there is a motor that is old technology, electromagnetic technology. And at SilMach we invented a new motor based on the most advanced technology, one that is fully compatible with electronics.
</p><p>
<strong>Zorpette:</strong> So you have a particular strategy or a kind of watch in mind that you think will grow in the future?
</p><p>
<strong>Louvigne: </strong>The point is that this technology is obviously dedicated first to the smartwatch market because this market is ready to use these micromotors.
</p><p>
<strong>Zorpette: </strong>So you don’t mean the smartwatch that most people think of, which is the Apple Watch, which has no hands at all. I mean, no physical hands. It’s just a display screen. You seem to be referring to what is sometimes called the hybrid smartwatch or the hybrid-connected wristwatch. Is that correct?
</p><p>
<strong>Louvigne: </strong>Yes, it’s correct. Yes. The objective is to show any watchmaker, including connected watchmakers, that it’s now possible to use a motor to drive hands on a PCB, on the electronic board. And by offering this opportunity, we think that those makers can design new watches.
</p><p>
<strong>Zorpette: </strong>So who are some of the companies that make these hybrid smartwatches?
</p><p>
<strong>Louvigne:</strong> I think the most advanced company is <a href="https://www.withings.com/us/en/" rel="noopener noreferrer" target="_blank">Withings</a>.
</p><p>
<strong>Zorpette:</strong> Withings.
</p><p>
<strong>Louvigne:</strong> Withings. It’s a French brand. And the market that they are targeting with the watch is the health market. And we believe that in this market, people want something like a classic watch; you could also say a vintage look. And if you want to do that, then you need a motor to drive the hands.
</p><p class="shortcode-media shortcode-media-rebelmouse-image">
<img alt="A photo of 3 men sitting around a table. " class="rm-shortcode" data-rm-shortcode-id="58411502a66f0dd3bfac8a345979b5ad" data-rm-shortcode-name="rebelmouse-image" id="3a503" loading="lazy" src="https://spectrum.ieee.org/media-library/a-photo-of-3-men-sitting-around-a-table.jpg?id=50634445&width=980"/>
<small class="image-media media-caption" placeholder="Add Photo Caption...">The co-CEOs of SilMach, Jean-Baptiste Carnet [left] and Pierre-François Louvigné [center], dropped by IEEE Spectrum’s offices to discuss their revolutionary new wristwatch motor with Spectrum.</small><small class="image-media media-photo-credit" placeholder="Add Photo Credit...">Stephen Cass</small></p><p>
<strong>Zorpette: </strong>Physical hands.
</p><p>
<strong>Louvigne:</strong> Physical hands, yes, correct.
</p><p>
<strong>Zorpette:</strong> So Withings is a big name in this category, but there are others that are in the category, correct?
</p><p>
<strong>Louvigne: </strong>Yeah, there is <a href="https://www.garmin.com/en-US/" rel="noopener noreferrer" target="_blank">Garmin</a>. Everybody knows Garmin. <a href="https://www.fossil.com/en-us/" rel="noopener noreferrer" target="_blank">Fossil</a> also, that is a big player. They are all looking for a part of the market. Garmin is more for sport activity. Fossil is more on fashion design. And Withings is for health.
</p><p>
<strong>Zorpette:</strong> So for listeners who might not be familiar with it, this is a wristwatch. And when you look at it, it looks like an old-school, a conventional analog wristwatch. However, they often have small electronic screens that show information. And in fact, the watch can also typically connect to your smartphone, so it can gather data from your smartphone. These watches often have accelerometers or blood pressure monitors and so on in them. So they typically have a lot of electronics, because not only do they have these sensors, but they’ve got to have the motors. And if I understand you correctly, your motor is more compact and efficient, which gives you advantages in this space.
</p><p>
<strong>Louvigne: </strong>Yes, right. One of the big advantages of the motor is that it is very compact. Roughly, you gain 50 percent in volume. Could be footprint, could be height, about 50 percent. So it’s [crosstalk].
</p><p>
<strong>Zorpette: </strong>50 percent more room inside the watch case.
</p><p>
<strong>Louvigne: </strong>Yes, yes, yes. And as you said, effectively in this type of watch, you have a lot of very advanced technology. And now the motor is as advanced as the other function in the watch. That’s the big difference now.
</p><p>
<strong>Zorpette: </strong>So tell us a little bit about your wristwatch motor. I believe you call it the MEMS box. What are the advantages that your watch motor has if you are going to integrate it with other electronics on a tiny circuit board that goes inside a wristwatch?
</p><p>
<strong>Louvigne: </strong>The very big change compared to the current technology, the Lavet one, is that the MEMS box--
</p><p>
<strong>Zorpette:</strong> So the existing motor is called the Lavet motor—
</p><p>
<strong>Louvigne: </strong>Yeah, the Lavet motor.
</p><p>
<strong>Zorpette: </strong>—as you mentioned.
</p><p>
<strong>Louvigne: </strong>From the name of the inventor, Marius Lavet. He was a French guy in the ‘30s, 1936, exactly. And he invented this technology that is in every quartz watch today, more than one billion quartz watches. So this technology is the only one you can use so far, okay? But the motor is not coming from the electronics. So it’s electromagnetic. It’s a bulky micromotor, and you have to screw the motor onto the PCB. Screw it. So you can imagine the cost of screwing a motor onto a PCB. In our case, the MEMS box is designed to be SMT-compatible.
</p><p>
<strong>Zorpette: </strong>So you just use surface mount soldering technology to mount it right to the circuit board?
</p><p>
<strong>Louvigne: </strong>Yes, correct. It’s like any other electronic component. You can handle it and solder it onto the PCB like any other one.
</p><p>
<strong>Zorpette: </strong>So Jean-Baptiste, what are some of the advantages now with this motor? What are some of the things that you now have in your watch that you couldn’t have with the old-style Lavet motor?
</p><p>
<strong>Carnet:</strong> Yes. As Pierre-Francois mentioned, it’s more compact. It’s much thinner. And inside the hybrid smartwatch, people have to imagine that almost half of the space inside the watch is actually dedicated just to the micromotor as it is today. And the little sensors, all the technology, all the know-how of the brands that are developed today, they have to adjust around half of the space already being taken by the micromotor. So it’s by far the biggest part inside of the watch, and making it much more compact, about 50 percent, much thinner, either allows for new designs of watches where you could make much smaller watches. For example, we know that the Garmin watches are pretty bulky. Or you could keep the same design as today, but implement more technology inside of it because you have more space, or make it last longer with a bigger battery, for example, because you free up quite a lot of space. So that’s a big point. The SMT compatibility is also very interesting because as you get rid of the labor-intensive aspect of assembling a watch, you’re now free to assemble it anywhere you want. You don’t necessarily need to go to a country where labor is more affordable. You can imagine assembling it in Europe, assembling it in the US, which is currently not possible. Also, the motor is anti-magnetic, which is interesting because the sensor interactivity can derail the current motor a little bit, or the interaction with magnets inside a woman’s purse, for example. Watchmakers told us, “Oh, that’s a very interesting feature because it’s a problem we have right now.”
</p><p>
<strong>Zorpette: </strong>How about the energy usage and the precision of the tick marks as it goes around the face of the watch?
</p><p>
<strong>Carnet:</strong> Yeah. That’s also one aspect of it. First, the motor consumes less energy than current technology, so you can imagine having a longer battery life as well. That’s why the watch we are launching with the technology can offer more than 10 years of battery life on a regular battery that you can buy anywhere. And also, as you said, the freedom of movement is an important feature because the motor is pretty much electronics. You can program it, pilot it however you want. You can have it go forward, backward, faster, slower. And what watch enthusiasts are interested in is you can either have it tick the seconds, for example, like a traditional quartz watch, or you can have it make a much more fluid movement, which is somewhat of a holy grail for watch enthusiasts. So those are things that become possible with our technology that were not with traditional ones. And that’s what has excited watch designers since we unveiled the technology a couple of weeks ago, and we will keep on doing it at CES.
</p><p>
<strong>Zorpette: </strong>So here’s my own pet peeve about watches. When I was a young man and I did a lot of scuba diving, I actually had a watch that had tiny tritium gas markings, so it glowed all the time. It didn’t need to be charged with a bright light. And I loved this watch because I could read it clearly underwater. Even at night, it was bright enough. And also, at night when I was sleeping, if I woke up in the middle of the night and the watch was on my bedside table, I could see what time it was without having to turn a light on or anything like that. And I know that those watches can be tricky. They go dead after five or six or seven years; because of the tritium half-life, they’re too dim to read. But I’ve always wondered if it would be possible using perhaps some ultra-efficient light-emitting diode or other technology to recreate that somehow. If you had enough space for a battery and other power supplies, if you had enough room in the case, you could create these watches, which were popular at one time. If you go back even to World War II, they made watches using radium and so on. It just seemed such a practical thing when I had it, other than the fact that it went dead. But is that more possible now, technologically, with a very tiny and efficient motor?
</p><p>
<strong>Louvigne:</strong> Yeah, clearly it’s possible. I don’t know if you know some of the watches coming from <a href="https://timex.com/" rel="noopener noreferrer" target="_blank">Timex</a>. They have a specific patent on that. It’s like a luminescent dial that gives you the opportunity to see what time it is, even in the dark, fully dark. So yes, it’s possible to combine the technology with other ones providing such a result. Yeah.
</p><p>
<strong>Zorpette: </strong>So you mentioned Timex. And in fact, that gets me into my next question, which is, have you had interest from any major watchmakers yet in your MEMS box motor?
</p><p>
<strong>Louvigne: </strong>Yes, we know most of them— we know most of them because those companies were following the progress we made on the technology. So we have been in contact with them for more than 15 years. So yes, it’s done. The particular partnership we have with Timex was based on the fact that we are a small company. We are about 30 people in France. We are in the right region for watchmaking because it’s the historic one. In the past, there was huge activity in the watchmaking industry there. So we are in the right region for that. And there is one important partner we have that is a subsidiary of Timex in France. And this company named <a href="https://fralsen.com/en/company" rel="noopener noreferrer" target="_blank">Fralsen</a> is making all the small parts used in quartz movements. So it was obvious that the— not the compatibility, but the synergy between our very new technology and those classic parts was very interesting. And then we went to the Timex Group and we decided to build a joint venture, so a common company that is based in France, and the name is TiMach. Timex, SilMach, TiMach.
</p><p>
<strong>Zorpette:</strong> TiMach. Okay. And so in the future, we may be seeing this motor in watches from Timex or other companies.
</p><p>
<strong>Louvigne: </strong>Yeah. I mean, the objective of the joint venture is to sell the technology all over the world. It’s obviously not specific to Timex. They will use it in their watches. But no, it’s open widely to the market.
</p><p>
<strong>Zorpette: </strong>So I guess another question that might be on the minds of some of our listeners is, why did it take so long for someone to harness the technology of silicon microelectromechanical systems, or silicon MEMS? We mentioned that silicon MEMS technology has been around for decades, 20 or 30 years, as far as I know, but yours is the first to harness it for a wristwatch motor. Were there some challenges that delayed this use?
</p><p>
<strong>Louvigne:</strong> Yes. Clearly, what we are doing is very unique because there is no other company or even university that has made this development, okay? This is very unique because, in fact, we are combining the MEMS technology that is very advanced. You have to go into clean rooms to manufacture the silicon. But as soon as you finish the silicon parts, you have made only part of the development journey, okay? You have to combine those silicon parts with more classic parts coming from the watchmaking industry. And this is what we do. We call this hybridization, meaning that we connect the silicon with classic micromechanics. And we are the only company in the world that does that. So we have invented the motor, but we have also invented the technology for the assembly. This is completely new. And we have patents in both sectors. So yeah, you need a lot of time for developing all those steps of the technique.
</p><p><strong>Zorpette: </strong>Well, thank you both very much. Again, I’ve been talking with Pierre-Francois Louvigne and Jean-Baptiste Carnet. They’re both with SilMach, and they have a remarkable silicon MEMS wristwatch motor called the MEMS box. And we’ve heard a lot about the promise and challenges of this watch. For <em>IEEE Spectrum</em>’s <em>Fixing the Future</em>, I’m Glenn Zorpette and I hope you’ll join us next time.</p>]]></description><pubDate>Wed, 29 Nov 2023 10:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/hybrid-smartwatch</guid><category>Smartwatches</category><category>Type-podcast</category><category>Fixing-the-future</category><category>Mems</category><dc:creator>Glenn Zorpette</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/50571332/origin.webp"/></item><item><title>A New Enterprise Linux Alliance</title><link>https://spectrum.ieee.org/suse-oracle-ciq-linux</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/fixing-the-future-podcast-logo.jpg?id=46469653&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/55ee6462" width="100%"></iframe><p style="">
<strong>Stephen Cass: </strong>Hello and welcome to <em>Fixing the Future</em>, an <em>IEEE Spectrum</em> podcast where we look at concrete solutions to some big problems. I’m your host, Stephen Cass, a senior editor at <em>IEEE Spectrum</em>. And before we start, I just want to tell you that you can get the latest coverage from some of <em>Spectrum</em>’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to <a href="https://spectrum.ieee.org/newsletters/" target="_self">spectrum.ieee.org/newsletters</a> to subscribe.
</p><p style="">
	Today, our guest is <a href="https://www.linkedin.com/in/alanhclark/" rel="noopener noreferrer" target="_blank">Alan Clark</a> from SUSE’s CTO office. <a href="https://www.suse.com/" rel="noopener noreferrer" target="_blank">SUSE</a> is one of the oldest open-source companies in the world. I think I still have some SUSE Linux CD-ROMs from the 1990s lurking in a drawer myself. But it’s now a founding member of one of the newest trade associations, the <a href="https://openela.org/" rel="noopener noreferrer" target="_blank">Open Enterprise Linux Association</a>, or OpenELA, along with <a href="https://www.oracle.com/" rel="noopener noreferrer" target="_blank">Oracle</a> and <a href="https://ciq.com/" rel="noopener noreferrer" target="_blank">CIQ</a>. We’re going to be talking with Alan about the crisis that prompted the creation of the OpenELA and how the new association hopes to address it. Alan, welcome to the show.
</p><p style="">
<strong>Alan Clark: </strong>Thanks, Stephen. It’s great to be here. And by the way, I wish I had kept those floppies and CDs from those old releases, just for the museum piece, right?
</p><p style="">
<strong>Cass: </strong>Yeah, they’re just deep, deep in a drawer. I cannot— can I toss that? No. No, I can’t. But I mentioned a crisis. For people who aren’t familiar with the world of enterprise Linux and the companies involved, can you explain what happened earlier this year that really upset a lot of people?
</p><p style="">
<strong>Clark: </strong>Yeah, so there was an action by <a href="https://www.redhat.com/" rel="noopener noreferrer" target="_blank">Red Hat</a> that upset a lot of people. We can talk about why, but it’s actually been a trend for quite a while. And then they made the announcement that they were going to remove public access to the RHEL source code. And that’s really contrary to open source principles and values, right? And so that created a lot of concerns amongst vendors, developers, and users of the technology, right?
</p><p style="">
<strong>Cass:</strong> So RHEL is Red Hat Enterprise Linux.
</p><p style="">
<strong>Clark:</strong> Yes.
</p><p style="">
<strong>Cass: </strong>And why is it so important that it would cause so many people to go, “Bah”?
</p><p style="">
<strong>Clark: </strong>Well, think about it from an open-source perspective, right? Open source has always had the meaning that I can take that and do things with it, right? I can create innovation and I can use it for the things that fit my need. And then all of a sudden now, they’ve switched the game and people are going, “Wait, will I not be able to use this anymore? Will I not be able to use it how I need it to be used, right? Is this going to kill my innovation?” And so that’s caused great consternation, not just from other vendors that are part of the ecosystem, but from users themselves.
</p><p style="">
<strong>Cass: </strong>And this is because Red Hat was also a very early entrant, it’s been around a long time, and so people have kind of coalesced around it in many ways. And so this was a bit of a shock to them.
</p><p style="">
<strong>Clark:</strong> It is a bit of a shock, and two aspects of that. One is you’re exactly correct, there’s a lot of people that have been using this technology for a long time and based their business on it. And then the second aspect, when you think about it, I’m sure it’s upwards of 90 percent of businesses are using open source today, right? So they’ve caught on to the benefits that open source brings, and then all of a sudden you’re saying, “Well, this isn’t quite so open,” and they’re going, “Wait, my business is built on those concepts of open source, and now you’re ripping that away. What does this mean to me?”
</p><p style="">
<strong>Cass:</strong> So maybe just for readers who might not be familiar, because <a href="https://www.linux.org/" rel="noopener noreferrer" target="_blank">Linux</a> comes in so many different flavors. It’s found everywhere from satellites to mainframes. What is kind of the defining characteristic of enterprise Linux?
</p><p style="">
<strong>Clark:</strong> So enterprise Linux, and you’re correct, it does come in all kinds of flavors from very small to very large, right? The enterprise portion of this is that it’s ready to run your critical business processes, right? That’s what we define as being enterprise ready. So I can use it in a hobby situation, right? And there’s a lot of distros that are attuned to specific hobby needs, right? I know people that run HO scale railroad systems using Linux, for example. Well, if it has a fault and crashes, it’s not a big deal. You put the train back on the track and away it goes. If you’re using Linux for air traffic control, right, that has got to be really hardened and tested and secure. And so that’s what the enterprise portion of this means.
</p><p style="">
<strong>Cass:</strong> So can you talk a little bit about the genesis of OpenELA? So we have this controversy, people are unhappy with what Red Hat has been doing. How is it that Oracle and CIQ and SUSE kind of like pick up the <a href="https://en.wikipedia.org/wiki/Bat_phone" rel="noopener noreferrer" target="_blank">bat phone</a> and call each other and start this ball rolling?
</p><p style="">
<strong>Clark: </strong>Well, so their announcement spurred us to say, “Oh, we should do something and we should react to this.” But on the other hand, part of this has come about just because of the power of collaboration, right? And the simplest aspect of that is we’re reducing cost, right, by sharing that cost. And those are the costs of getting the code and assembling it and putting it in a format where we can consume it. It’s not a market differentiator. And so by sharing that cost amongst us, we’ve reduced it for everybody, and it makes it quicker to market, reduces our costs. The other aspect of it— that I think is key and why we really want others and others want to come join us— is we’re preventing the market from fragmenting, right? Like you said, there’s all kinds of distros out there, but we’re looking to continue on with this enterprise Linux standard that Red Hat has set. And if we all go off and do our own little thing, there’s a chance it’ll fragment. And we know what happens when that occurs, right? You look back at the Unix days and you cause that fragmentation and all of a sudden you can’t get applications and services that work on everybody’s distros, right? By pulling together, unifying together, we’re going to keep that market whole.
</p><p style="">
<strong>Cass: </strong>And what is now OpenELA actually going to do in concrete terms in terms of stopping that fragmentation from happening and maintaining a standard sort of independent of Red Hat’s current offerings?
</p><p style="">
<strong>Clark: </strong>Yeah. So the first thing— one of the big things we’re working on is creating a neutral legal body, right, so that it’s not controlled by any single vendor, right? So we’ve all come together, big, small, whatever, it doesn’t matter. We’re all going to be equal players, right? So that’s key in building good open source practices. So the second thing we’ve done or are working on is building the ability to have the source code that is, we’ll call it pristine. It’s in line and in tune with what Red Hat has been producing, right? And we will keep that compatibility. We want to keep that compatibility. And so we’re setting up the code repository so that we can keep that compatibility. But then we’re also setting them up so that innovation can occur. And so I’ll be able to come in there and say, “I just want to stay in step with the standard that Red Hat is setting. And that’s what I want. I don’t want anything else.” Others will be able to come in and say, “I want to contribute this piece.” And they’ll be able to pick up that as well as the one-to-one compatibility. So those are the big things we’re working on right now.
</p><p style="">
<strong>Cass: </strong>When the announcement was made to launch OpenELA, you did say, yes, it’s going to be under control of a nonprofit board of directors and the bylaws will be published shortly. So how are the formation of the board and the creation of the bylaws going?
</p><p style="">
<strong>Clark:</strong> They’re coming along quite well, actually. I smile because this is one of those things that always takes longer than you want, right? But they are coming along. Legal things are always slow, slower than you want them to be. But they’re moving along quite well. We actually are pushing ahead with a stronger— I wouldn’t say stronger. A very concerted effort to get the technical stuff done, because that’s really the proof of it, right, that we can actually get the code out there and make it available to everybody. So we’ve been putting a really large amount of effort into getting that completed as well.
</p><p style="">
<strong>Cass:</strong> And how is that development? You talked about organizing source code, and also there’s creation of software tooling that has to go along with that. How is that work going? I mean, is it being evenly distributed across sort of the three founders, or is one group taking a lead at this particular moment, or is it all being done in parallel? How is that work being done?
</p><p style="">
<strong>Clark:</strong> It’s working out very well. You recognize that these companies have been doing this for years, right? So we don’t have to reinvent everything, right, or invent everything. It’s already being done. So it’s more a matter of taking the best of everything we’ve got and putting it into a format that we know will be usable by everybody. So we don’t have to start from scratch. We’re able to pick up a lot of the tools and stuff that are already being used and tune them and modify them to fit OpenELA.
</p><p style="">
<strong>Cass: </strong>So OpenELA was founded just a couple of months ago, so I appreciate it’s very early days. But what kind of response have you had from the wider community?
</p><p style="">
<strong>Clark: </strong>It’s been very positive, really positive. We have a lot of people that are anxious to get started. A lot of people have been pinging us going, “Hey, we want to contribute. We want to join. How do I do that?” And we’re going, “Hang on just a little bit longer, just a little bit longer.” We really got to get that legal entity so that it’s a neutral body, right? We don’t want it to be not neutral. So we got to get those rules down on how people can join and so forth. So they’re coming out really soon, so.
</p><p style="">
<strong>Cass: </strong>So looking to the future, we talked about maintaining the sort of enterprise Linux standard, which is closely based on the Red Hat de facto standard. Do you foresee a time in the future where maybe those might diverge? And so you have the OpenELA enterprise Linux standard, and then over here is RHEL’s. And maybe those two aren’t as tightly coupled as before. One is RHEL’s thing, and the other is this open-source community thing.
</p><p style="">
<strong>Clark:</strong> I don’t have a crystal ball, so I don’t know what will happen. Right now, our mission is that we will stay one-to-one compatible with them. If they make some decisions that personally, I believe would actually very much hurt them, themselves, right, self-inflicted wounds kind of thing, it’s possible they could do something. But you also have to remember that everything we’re dealing with here is open source, right? And it’s open source that SUSE has been contributing to, like you said, what, 30-something years? Oracle, the same thing, they contributed for years and years and years in CIQ and all these other community members. So it’s all open source. So unless they do something really dramatic and go proprietary, even more proprietary, right, it all feeds back upstream. So it’s all going to be available. So I’m not overly worried about it, given their current decisions, that we’ll be able to stay one-to-one compatible.
</p><p style="">
<strong>Cass: </strong>So I just want to step back for a moment while I have you and look at some big-picture issues. I talked about Linux in the ‘90s, and the first time I touched a Linux machine was as an undergraduate in the early ‘90s, when it was this very fascinating, if somewhat clunky, thing. And we’ve had this evolution with people like <a href="https://www.linkedin.com/in/linustorvalds/" rel="noopener noreferrer" target="_blank">Linus Torvalds</a>, who has been the guy for 30 years, and so on. And we’re kind of— I know I’m not as young as I used to be, and we’re kind of coming to this generational inflection point with Linux, where a new cadre of people is coming up and using it. What are your thoughts about how open source has evolved in 30 years? Is it recognizable from those early days to what it is now? And where do you think it’s going to go as we start to see people retire in the next 10, 15 years and a new generation take over?
</p><p style="">
<strong>Clark:</strong> Well, the beauty of open source is sometimes people say, “Well, it’s like herding cats,” because you’ve got so many people involved, right, and they’re all there to serve their own needs, right? Some will say that’s bad. I say that’s really good. But what it’s proven out over the years— and yeah, it has changed, it’s grown, right? I’ve seen these projects. Some of these projects that I’m involved with have thousands of engineers, right? And a couple of things that I’ve seen happen over the years is they’ve become very diverse geographically and people-wise; just the different varying talents and skills and backgrounds have really grown over the years. And the big thing is, I’ve seen this talent emerge. And because of the collaborative nature, it’s not that a single person has all the knowledge, right? I’ve worked in proprietary software, and you end up depending on this key guy that knows it all, right? And the company sits and worries about what if the train hits this guy tomorrow and he dies? What’s the company going to do, right? The stock will crash or whatever. I’m not as worried about that with open source, because there’s so much. It’s so open and transparent that people with all these different talents are able to come in and become a real critical piece of this. And so I think that with that talent pool, I’m not worried about the future of open source. It’ll just keep rolling on. We’ve got some real good leaders today. I do not want to see them disappear, right? People like Linus, they are key, they are really key. But open source will continue to grow and move on.
</p><p style="">
<strong>Cass: </strong>So I just want to finish up. Is there any question you think I should have asked you, which I haven’t asked you?
</p><p style="">
<strong>Clark:</strong> That’s always the catch-all question, isn’t it? No, I think we’ve talked about a lot of good things. I’m just very excited about the future of open source and the potential that it brings, right, the innovation. I see all these new concepts. I remember when I first started, I started in engineering and networking, right? And TCP/IP developed and everybody says, “It’s done.” Right? “TCP/IP, it’s done. Let’s all move on to something else.” Right? And then all of a sudden it was like, oh, wait a minute, we didn’t write TCP/IP with enough addresses to cover the world. We never envisioned that everybody would have 10 devices in their house, let alone 100. And all of a sudden, you got to invent again, right? And so I just think there’s so much new technology to be invented that I’m very excited about the future.
</p><p style="">
<strong>Cass:</strong> Wonderful. So today we were talking with Alan Clark of SUSE. Thank you so much for coming on the show.
</p><p style="">
<strong>Clark:</strong> Thank you, Stephen.
</p><p style="">
<strong>Cass:</strong> And Alan was talking about the new Open Enterprise Linux Association. And for more information on that, you can visit their website, which is openela.org, I believe.
</p><p style="">
<strong>Clark: </strong>Correct.
</p><p style="">
<strong>Cass: </strong>And yeah, please come back and check out in two weeks’ time another episode of <em>Fixing the Future</em> here from IEEE Spectrum.
</p>]]></description><pubDate>Wed, 15 Nov 2023 10:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/suse-oracle-ciq-linux</guid><category>Type-podcast</category><category>Software</category><category>Linux</category><category>Red-hat</category><category>Oracle</category><category>Suse</category><category>Open-source</category><category>Fixing-the-future</category><dc:creator>Stephen Cass</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/46469653/origin.jpg"/></item><item><title>Justine Bateman's Fight Against Generative AI In Hollywood</title><link>https://spectrum.ieee.org/justine-bateman-hollywood-generative-ai</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=50335986&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/512fcd36" width="100%"></iframe><p style="">
<strong>Stephen Cass: </strong>Hello and welcome to <em>Fixing the Future</em>, an <em>IEEE Spectrum</em> podcast where we look at concrete solutions to some big problems. I’m your host, Stephen Cass, senior editor at <em>Spectrum</em>. And before we start, I just want to tell you that you can get the latest coverage from some of <em>Spectrum</em>‘s most important beats including AI, climate change, and robotics by signing up for one of our free newsletters. Just go to<a href="https://spectrum.ieee.org/newsletters/" target="_self"> spectrum.ieee.org/newsletters</a> to subscribe.
</p><p>
	The rapid development of generative AI technologies over the last two years, from deepfakes to large language models, has threatened upheavals in many industries. Creative work that was previously believed to be largely immune to automation now faces that very prospect. One of the most high-profile flashpoints in creative workers pushing back against digital displacement has been the months-long dual strikes by Hollywood writers and actors. The writers recently claimed victory and have gone back to work, but as of this recording, actors and their union<a href="https://www.sagaftra.org/" rel="noopener noreferrer" target="_blank"> SAG-AFTRA</a> remain on the picket lines. Today, I’m very pleased to be able to speak with someone with a unique perspective on the issues raised by generative AI,<a href="https://www.justinebateman.com/" rel="noopener noreferrer" target="_blank"> Justine Bateman</a>. Some of you may remember Justine from her starring role as big sister<a href="https://familyties.fandom.com/wiki/Mallory_Keaton" rel="noopener noreferrer" target="_blank"> Mallory</a> in the 1980s hit sitcom<a href="https://www.youtube.com/watch?v=7H3JuQUQTLQ" rel="noopener noreferrer" target="_blank"> <em>Family Ties</em></a>, and she’s had a fascinating career since as a filmmaker and author. Justine has also displayed her tech chops by getting a degree in computer science from UCLA in 2016, and she has testified before Congress about net neutrality. She is currently SAG-AFTRA’s advisor on AI. Justine, welcome to the show.
</p><p style="">
<strong>Justine Bateman: </strong>Thank you.
</p><p style="">
<strong>Cass:</strong> So a lot of industries are being affected by generative AI. How did writers and actors become the focal point in the controversy about the future of work?
</p><p style="">
<strong>Bateman: </strong>Well, it’s curious, isn’t it? I guess it was low-hanging fruit because I mean, I feel like tech should solve problems, not introduce new ones like massive unemployment. And also, we have to remember that so much of this, to me, the root of it all is greed. And the arts can be a lucrative place. And it can also be very lucrative in selling to others the means by which they can feel like, “they are artists too,” in heavy quotes, which is not true. Either you’re born an artist or you’re not. Either you’re gifted at art or you’re not, which is true of everything else, sports, coding. Either you’re gifted as a coder or not. I’ll tell you this even though I have a computer science degree. I know I am gifted as a writer, as a director, and my previous career of being an actor. I am not gifted in coding. I worked my butt off. And once you know what it feels like to be gifted at something, you know what it feels like to not be gifted at something and to have to really, really work hard at it. So yeah, but I did it anyway, but there’s a difference. So yeah, I mean, and in that direction, there’s many people, they’d like to imply that they are gifted at coding by giving the generative AI a solution to that. Yeah.
</p><p style="">
<strong>Cass:</strong> So by here or by they, you really are locating your beef with the companies like<a href="https://openai.com/" rel="noopener noreferrer" target="_blank"> OpenAI</a> and so on, more so than perhaps the studios?
</p><p style="">
<strong>Bateman: </strong>Well, they’re both complicit.<a href="https://twitter.com/sama" rel="noopener noreferrer" target="_blank"> Sam Altman</a> and OpenAI and everyone involved there, those that are doing the same at the Google offshoots, Microsoft, which is essentially OpenAI, I guess. I mean, if most of your money’s from there, I don’t know what else you are. Where else? Where else? I know<a href="https://openai.com/dall-e-2" rel="noopener noreferrer" target="_blank"> DALL-E</a>, I believe, is on top of OpenAI’s neural network. And there’s<a href="https://www.midjourney.com/" rel="noopener noreferrer" target="_blank"> Midjourney</a>. There’s so many other places.<a href="https://about.meta.com/" rel="noopener noreferrer" target="_blank"> Meta</a> has their own generative AI model, I believe. This is individuals making a decision to pull generative AI into our society. And so it’s not only them, but then those that subscribe to it that will financially subscribe to these services like the heads of the studios. They will all go down in the history books as having been the ones that ended the 100-year-old history— well, the 100-year-old entertainment business. They chose to bring it into their studios, into the business, and then everyone else. Everyone else who manages multiple people who is now deciding whether or not to pull in generative AI and fire their workforce, their human labor workforce. All those people are complicit too. Yeah.
</p><p style="">
<strong>Cass:</strong> When I looked up SAG-AFTRA’s proposal on AI, the current official statement reads, “Establish a comprehensive set of provisions to protect human-created work and require informed consent and fair compensation when a digital replica is made of a performer or when their voice, likeness, or performance will be substantially changed using AI.” Can you sketch out what some of those provisions might look like in a little more detail?
</p><p style="">
<strong>Bateman: </strong>Well, I can only say so much because I’m involved with the negotiations, but let’s just play it ourselves. Imagine if the digital replica was made of you, you would want to know what are you going to do with this? What are you going to have the say? What are you going to have this digital replica do? And how much are you going to pay me to essentially not be involved? So it kind of comes down to that. And at the bare minimum, granting your permission to even do that because I’m sure they’d like to not have to ask for permission and not have to pay anybody. But what we’re talking about, I mean, with the writers and the directors, it’s bad enough that you’re going to take all of their past work and use it to train models. It’s already been done. I think that should be absolutely not permitted. And if somebody wants to participate in that, they should give their consent and they should be compensated to be part of a training set. But the default should be no, instead of this ******* fair-use argument on all the copyrighted material.
</p><p style="">
<strong>Cass: </strong>So yeah, I’d love to drill down a little bit more into the copyright issues that you just talked about. So with regard to copyright, if I read a whole bunch of fantasy novels and I make what is clearly a kind of a bad imitation of<a href="https://tolkiengateway.net/wiki/The_Lord_of_the_Rings" rel="noopener noreferrer" target="_blank"> <em>Lord of the Rings</em></a>, it’s like, “Okay, you kind of synthesize something. It’s not a derivative work. You can have your own copyright.” But if I actually go to <em>Lord of the Rings</em> and I just change a few names around the place or maybe rearrange things a little bit, that is considered a derivative work. And so therefore, I’m not entitled to the copyright on it. Now the large language model creators would say, “Well, ours is more like the case of where we’re synthesizing across so many works, we’re creating new works. We’re not being derivative. And so therefore, of course, we reserve the copyrights.” Whereas I think you have a different view on that in terms of these derivative works.
</p><p style="">
<strong>Bateman: </strong>Sure, I do. First of all, your first example is a person with a brain. The other example is code. Code for a for-profit organization, multiple organizations. Totally different. Here’s the biggest difference. If you wanted to write a fantasy book, you would not have to read anything by Tolkien or anybody else, and you could come up with a fantasy book. An LLM? Tell me what it can do without ingesting any data. Anything? No, it’s like an empty blender. That’s the difference. So if this empty blender that is— I think these companies are valued at $1 trillion less, more? I don’t know right now. And yet, it is wholly dependent. And I believe I’m correct, wholly dependent on absorbing all this, yeah, now it’s just going to be called data, okay? But it’s really copyrighted books. And much of what is written— much of what you output is, by default, copyrighted. If you file it with the copyright office, it makes it easier to defend that in a court but scraping everybody else’s work.
</p><p>
	Now if this LLM or a generative AI model was able to spit something out on its own without absorbing anything, or it was only trained on those CEO’s home movies and diaries, then fine. Let’s see what you can do. But no. If they think that they can write a bunch of— quote, “write a bunch of books” because they’ve absorbed all the books that they could get a hold of and then chop it all up and spit out little Frankenstein spoonfuls, no. That is all copyright violation. All of it. You think you’re going to make a film because you have ingested all of the films of the last 100 years? No, that’s a copyright violation. If you can do it on your own, terrific. But if you can’t do it unless you absorb all of our work, then that’s illegal.
</p><p style="">
<strong>Cass:</strong> So with regards to these questions of likenesses and sort of basically turning existing actors into sort of puppets that can say and do anything the studio wants, do you worry that studios will start looking for ways to just bypass human actors entirely and create born digital characters? I’m thinking of the big superhero franchises that already got plenty of these<a href="https://xmen-movies-by-deadpool-tv.fandom.com/wiki/Colossus" rel="noopener noreferrer" target="_blank"> CGI characters</a> that are pretty photorealistic. I mean, completely human ones, maybe still a little uncanny valley, but how hard would it be to make all those human characters CGI, too, and now you’ve got replaceable animators and voice actors and maybe motion capture performers instead of one big tentpole actor who you maybe really do have to negotiate with because they have the star power?
</p><p style="">
<strong>Bateman: </strong>No, that’s exactly what they’ll do. Everything you just said.
</p><p style="">
<strong>Cass:</strong> Is there any way within sort of your sort of SAG-AFTRA’s remit to prevent that from happening? Or are we kind of looking at the last few years before the death of the big movie star? And maybe the idea of the big movie star will become extinct. And while there’ll be human actors, it’ll never be that Chris Pratt sort of J. Law level of performer again.
</p><p style="">
<strong>Bateman: </strong>Well, everything that’s going to happen now with generative AI, we’ve been edging towards for the last 15 years. Generative AI is very good at spitting out some Frankenstein regurgitation of the past, right? That’s what it does. It doesn’t make anything new. It’s the opposite of the future. It’s the opposite of something new. And a lot of filmmaking in the last 15 years<a href="https://en.wikipedia.org/wiki/Crisis_on_Infinite_Earths" rel="noopener noreferrer" target="_blank"> has been that</a>, okay? So the audience is sort of primed for that kind of thing. Another thing you talk about, big movie stars. And I’m going to name some others like Tom Cruise, Meryl Streep, Harrison Ford, Meg Ryan like this. Well, all these people— with the exception of maybe Harrison Ford, but all these people really hit it in their 20s. Now who in their 20s is a big star now, Zendaya? Oh,<a href="https://www.imdb.com/name/nm3154303/" rel="noopener noreferrer" target="_blank"> the actor who’s in <em>Call Me By Your Name</em></a>. The name’s slipping my mind right now. There’s a couple, but where’s the new crop? And it’s not their fault. It’s just they’re not being made. So we’re already edging towards— the biggest movie stars that we have in the business right now, most of them are in their late 40s, early 50s, or older. So we’ve already not been doing that. We’ve already not been cultivating new film stars.
</p><p>
	Yeah. And then you look at the amount of CGI that we just put on regular faces or plastic surgery. So now we’re edging closer and closer to audience accepting a full— or not CGI but a full generative AI person. And frankly, a lot of the demos that I’ve seen, you just can’t tell the difference. So yeah, all of that is going to happen. And then they’ll see there’s— and the other element that’s been going on for the last 10, 15 years is this obsession with content. And that’s thanks to the streamers. Come in, just churn it out as much as possible and as sort of in a most— the note that I’ve heard like Netflix gives— people that I know who are running TV shows, the note they get is make it more second screen. Meaning, the viewer’s phone or laptop is their first screen. And then what’s up on their television through internet connection, on Netflix or Amazon, whatever, is secondary. So you don’t have something on the screen that distracts them from their primary screen because then they might get up and shut it off. Somebody coined the term visual Muzak once. So that they don’t want you to get up. They don’t want you to pay attention. They don’t want you to see what’s going on.
</p><p>
	And also, if you do happen to look up, they want to make sure that if you haven’t been looking up for the last 20 minutes, you’re not lost at all. So that kind of thing, generative AI can churn out 24/7, and also customize it to your particular viewing habits. And then people go, “Oh, no, it’s going to be okay because anything that’s fully generative AI can’t be copyrighted.” And my answer to that is, “Who’s going to be trying to copyright all these one-off films that they just churn out?” They’re going to be like Kleenex. Who cares? They make something specifically for you because they see that you like nature documentaries and then dramas that take place in outer space? So then they’ll just do films that combine— all generative AI films will combine all these things. And for an upcharge, you can go get scanned and put yourself in it and stuff. Where else are they going to show that? And if you screen record it and then post it somewhere, what do they care? It was a nominal cost compared with making a regular film with a lot of people. And so what do they care? They just make another one for you and another one for you and another one for you. And they’re going to have generative AI models just spitting stuff out round the clock.
</p><p style="">
<strong>Cass: </strong>So the economics of mass entertainment, as opposed to live theater and so on, has always been that the distribution model allowed for a low marginal cost per copy, whether that’s VHS cassettes or reels that are shown in the cinema and so on. And this is just an economic extension of that all the way back to production, essentially.
</p><p>
<strong>Bateman:</strong> I think so. But yes, and if we’re just looking at dollars, it is the natural progression of that. But it completely divorces itself— or any company engaging in this completely divorces themselves from actually being in the film business because that is not filmmaking. That is not series making. That doesn’t have anything to do with the actual art of filmmaking. So it’s a choice that’s being made by the studios, potentially, if they’re going to man the streamers and if they’re going to make all AI films. Or they’re right now trying to negotiate different ways that they are going to replace human actors. That’s a choice that’s being made, essentially, to not be in the film business.
</p><p style="">
<strong>Cass:</strong> So I’m not terribly familiar with acting as a professional discipline. And so can you tell a little bit for people with a tech background what actors really bring to the table in terms of guiding characters, molding characters, moving it from beyond just the script on the page how ever that’s produced? What’s the extra creative contribution that actors really put in beyond just, “Oh, they’re able to do a convincing sad face or happy face”?
</p><p style="">
<strong>Bateman:</strong> Sure. That’s a great question. And not all people working as actors do what I’m about to say, okay? Every project should have a thesis statement that the kind of— or an intention. I mean, in coding, it’s like what’s the spec? I mean, what is it you want this code to do? And that’s for script, what’s the intention? What do you want audiences to come away with? Fine. And the writer writes in that direction. Regardless of what the story is, there’s some sort of thesis statement, like I said. Director, same thing. Everybody’s got to be on board with that. And what the director’s pulling in, what the writers’ pulling everything, it’s like a mood and circumstances that deliver that to the audience. Now you’re delivering it ideally emotionally to them, right? So it really gets under their skin. And there’s a lot of films that any of your listeners have watched where it’s some film that made a big impact on them. This is when it’s a great actor, you really get pulled in, right? And when, say, somebody’s just standing in front of the camera saying lines, you’re not as emotionally engaged, right? So it’s an interesting thing to notice next time you see a film, whether or not you were emotionally engaged or not. And other things can contribute to that like the editing or the story or the cinematography and various things. But yeah, bottom line, the actor is a tour guide. Your emotional tour guide through this story. And they should also support whatever that thesis statement is.
</p><p style="">
<strong>Cass: </strong>So in your thesis for your computer science degree, you were really bemoaning, I think, Hollywood’s conservatism when it comes to exploring these technologies for new possibilities in storytelling. And so do you have any ideas of how some of these maybe technologies could actually work with actors and writers to explore new fun storytelling possibilities?
</p><p style="">
<strong>Bateman:</strong> Absolutely. You get the prize, Stephen. I don’t think anybody— yeah, I know I have that posted still. It’s from 2016. So this is a while ago. And yeah, it is posted on my LinkedIn. But good for you. I hope you didn’t read the entire thing. It’s a long one. So of course, I mean, there’s a reason I got a computer science degree. And I love tech. I think there are incredible ways that it can change the structure of a script. And one of the things I probably expressed in there, they’re what I call layered projects instead of having a story that’s written out in a line because that’s the way you’re delivering it in a theater or you’re watching the beginning and then the middle and then the end. Delivering a story that’s more so shaped like a tree and not choose your own adventure, but rather the story is that big.
</p><p>
	And yeah, anyway, I could talk for a while about sort of the pseudocode of the designs of the layered projects that I’ve got, but that is a case. All those projects that I’ve designed that are these layered projects where we’re using either touchscreen technology or augmented reality, they service my thesis statement of my project. They service the story. They service the way the audience is perhaps going to watch the story. That is where I see technology servicing the artists such that they can expand what they’re wanting to do. I don’t see generative AI like that at all. I see generative AI as a regurgitation of our past work for those who, frankly, aren’t artists. And because it’s a replacement, it’s not people— I know there’s people, especially the<a href="https://www.statista.com/chart/28633/verified-users-on-twitter/" rel="noopener noreferrer" target="_blank"> blue-check people</a> like to say that this is a tool. And I think, “Well, I forgive you because you’re not an artist and you don’t know the business and you don’t know filmmaking. You don’t understand how this stuff’s put together at all.” Fine. But blue-check guy, if you think this is just a tool, then I’d like to introduce you to any generative AI software that does code in place of coders. I’m sure there are a lot of software engineers that just are like, “What the hell?”
</p><p style="">
<strong>Cass: </strong>So just to wrap up that then, is there any question you think I should have asked you, which I haven’t asked you?
</p><p style="">
<strong>Bateman:</strong> What’s going to happen after the inferno?
</p><p style="">
<strong>Cass:</strong> Oh, what’s the inferno? What’s going to happen after the inferno? Now I’m worried.
</p><p style="">
<strong>Bateman: </strong>This is going to get very bad in every sector. This is no joke. And I’m not even talking about-- I know there are a lot of people talking about like, “Oh, it’s going to get into our defense system, and it’s going to set off nuclear bombs and stuff.” That may be true. But I’m talking about everything that’s going to happen before that. Everything that’s starting to happen right now. And that’s the devaluing of humans that’s making people feel like they’re just cogs in some machine and they have no agency and they don’t really matter. And tech is just at the forefront of everything. And we just have to go along with whatever it’s coming up with. I don’t think tech’s in the forefront of **** right now, honestly. And like I said, I’m a soft— I have a CS degree. I love tech. I mean, I wouldn’t have spent four years doing all of that if I didn’t. But for Christ’s sake, it needs to sit down for a minute. Just ******* sit down. Unless you see some problems that can actually be solved with tech, it’s going to destroy it with all the things I just said about how it’s going to make people feel. It’s going to be taking their jobs. It’s going to infiltrate education system. Everybody’s going to be learning the same thing because everybody is going to be as if everybody’s at the same university. They’re all going to be tapped into the same generative AI programs. It’s starting to happen now. You take one program. Instead of learning something from one teacher and a bunch of students are learning from that one teacher, that one school, they’re tapping into certain programs that multiple schools are using. And they’re all learning to write in the same way.
</p><p>
	Anyway, all that is going to— it’ll crush the structure of the entertainment business because the structure of the entertainment business is a pipeline of duties and tasks by various people from conception to release of that project. And you start pulling out chunks of that pipeline, and the whole structure collapses. But I think on the other side of this inferno— but I think on the other side of it, there is going to be something really raw and really real and really human that will be brand new in the way jazz was new or rock and roll was new or as different as the 1960s were from the 1950s. Because when you think about it, when you look at the 20th century, all of these decades, something specific happened in them, multiple things happened in them that were specific that really showcased or instigated by the arts, politics. Everything changed. Every era has its own kind of flavor. And that stopped in about 2000. When I ask you about the aughts or you had to go to a party that was dressed in the aughts, what would you put on? I don’t know. What are these decades at all? There’s a lot of great things about the internet and some good things about social media, but basically it flattened everything. And so I feel that after this burns everything down, we’re going to actually have something new. A new genre in the arts. A new kind of day. A new decade like we haven’t had since the ‘90s, really. And that’s what I’m looking forward to. That’s what I’m built for, I mean, as far as being a filmmaker and a writer. So I’m looking forward to that.
</p><p>
<strong>Cass:</strong> Wow. Well, those are some very prophetic words. Maybe we’ll see, hopefully, whether or not there is an inferno or what’s on the other side of the inferno. But yeah, thank you so much for coming on and chatting with us today. It was really super talking with you today.
</p><p style="">
<strong>Bateman: </strong>My pleasure.
</p><p><strong>Cass: </strong>Today, we were speaking with Justine Bateman who is the AI advisor of the SAG-AFTRA Actors Union. I’m Stephen Cass for <em>IEEE Spectrum</em>‘s <em>Fixing the Future</em>, and I hope you’ll join us next time.</p>]]></description><pubDate>Wed, 01 Nov 2023 09:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/justine-bateman-hollywood-generative-ai</guid><category>Hollywood</category><category>Generative-ai</category><category>Type-podcast</category><category>Fixing-the-future</category><dc:creator>Stephen Cass</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/50335986/origin.webp"/></item><item><title>Your Life As A Digital Ghost</title><link>https://spectrum.ieee.org/datafication-and-digital-reanimation</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=49468405&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/abe2d740" width="100%"></iframe><p style=""><strong>Eliza Strickland:</strong> Hi, I’m Eliza Strickland for <em>IEEE Spectrum</em>‘s <em>Fixing the Future</em> podcast. Before we start, I want to tell you that you can get the latest coverage from some of <em>Spectrum</em>’s most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe.</p><p style="">Imagine getting a birthday email from your grandmother who died several years ago, or chatting with her avatar as she tells you stories of her youth from beyond the grave. These types of post-mortem interactions aren’t just feasible with today’s technology, they’re already here.</p><p><a href="https://www.wendyhwong.com/" target="_blank">Wendy H. Wong</a> describes the new digital afterlife industry in <a href="https://spectrum.ieee.org/digital-afterlife" target="_self">a chapter</a> of her new book from MIT Press, <a href="https://www.penguinrandomhouse.com/books/730905/we-the-data-by-wendy-h-wong/" rel="noopener noreferrer" target="_blank"><em>We the Data: Human Rights in the Digital Age</em></a>. Wendy is a Professor of Political Science and Principal’s Research Chair at the University of British Columbia. Wendy, thanks so much for joining me on <em>Fixing the Future</em>.</p><p style=""><strong>Wendy H. Wong:</strong> Thanks for having me.</p><p style=""><strong>Strickland:</strong> So we’re going to dive into the digital afterlife industry in just a moment. But first I want to give listeners a little bit of context. So your book takes on a much broader topic, the <a href="https://en.wikipedia.org/wiki/Datafication" rel="noopener noreferrer" target="_blank">datafication</a> of our daily lives and the human rights implications of that phenomenon. 
So can you start by just explaining the term datafication?</p><p style=""><strong>Wong:</strong> Sure. So datafication is really, I think, quite straightforward in the sense that it’s just kind of trying to capture the idea that all of our daily behaviors and thoughts are being captured and stored as data in a computer or in computers and servers all over the world. And so the idea of datafication is simply to say that our lives are not just lived in the analog or physical world, but that actually they’re becoming digital.</p><p style=""><strong>Strickland:</strong> And, yeah, you mentioned a few aspects of how that data is represented that makes it harder for it to be controlled, really. You say that it’s sticky and distributed and co-created. Can you talk a little bit about some of those terms?</p><p style=""><strong>Wong:</strong> So in the book, what I talk about is the fact that data are sticky, and they’re sticky in four ways. They’re sticky because they’re about mundane things. So as I was saying, about everyday behaviors that you really can’t avoid. So we’re starting to get to the point where devices are tracking our movements. We’re all familiar with typing things in the space bar. There’re trackers when we visit websites to see how long it takes us to read a page or if we click on certain things. So these are behaviors that are mundane. They’re every day. Some might say they’re boring. But the fact is they’re things we don’t and can’t really avoid through living our daily lives. So the first thing about data that makes it sticky is that they’re mundane.</p><p>The second thing is, of course, that data are linked. So data in one data set doesn’t just stay there. Data are bought and sold and repackaged all the time. The third thing that makes data sticky are that they are fundamentally forever. 
And I think this is what we’ll talk about a little bit in today’s conversation in the sense that there’s no real way to know where data go once they’re created about you. So effectively they are immortal. Now whether they are actually immortal, again, that’s something that no one really knows the answer to. And the last thing that makes data sticky, the fourth criteria I guess is that they are co-created. So this is a big thing I spend a lot of time talking about in the rest of the book because I think it’s important to remember that although we are the subjects of the data and the datafication, we are actually only half of the process of making data. So someone else—I call them the data collectors in the book—typically they’re corporations, but data collectors have to decide what kinds of characteristics, what kinds of behaviors, what kinds of things they want to collect data on about what human beings are doing.</p><p style=""><strong>Strickland:</strong> So how did your research on datafication and human rights lead you to write this chapter about the digital afterlife industry?</p><p style=""><strong>Wong:</strong> That’s a really good question. I was really fascinated when I ran across the digital afterlife industry because I have been studying human rights for a couple of decades now. And when I started this project, I really wanted to think about how data and datafication affect the human life. And I started realizing actually that they affect how we die, at least in the social way. They don’t affect our physical death, unfortunately, for those of us who want to live forever, but they do affect how we go on after we’re physically gone. And I found this really fascinating because that’s a gap in the way we think about human rights. Human rights are about living life to a minimal standard, to our fullest potential. But death is not really part of that framework. 
And so I wanted to think that through because if a datafied afterlife is now possible, can we use some of the concepts that are very important to human rights, things like dignity, autonomy, equality, and the idea of human community? Can we use those values to evaluate this digital afterlife that we all may have?</p><p style=""><strong>Strickland:</strong> So how do you define the <a href="https://www.thedigitalbeyond.com/online-services-list/" rel="noopener noreferrer" target="_blank">digital afterlife industry</a>? What kind of services are on offer these days?</p><p style=""><strong>Wong:</strong> So I mean, this is, again, a growing but actually quite populated industry. So it’s really interesting. So there are ways you can include services like what to do with data when people are deceased, right? So that’s part of the digital afterlife industry. A lot of companies that keep data, big tech, like a lot of the companies we know and are familiar with, like <a href="https://spectrum.ieee.org/tag/google" target="_self">Google</a> and <a href="https://spectrum.ieee.org/tag/meta" target="_self">Meta</a>, they’re going to have to decide what to do with all these data about people once they physically die. But there are also companies that try to either create persons out of data, so to speak, or there are companies that replicate a person who has died. I mean, it’s possible to replicate that person when they’re living too, in a digital way. And there are some companies that have advertised posting information as though you’re living, whether you’re sleeping or dead. So there are lots of different ways to think about this industry and what to do with data after we die.</p><p style=""><strong>Strickland: </strong>Yeah, it’s fascinating to see what’s on offer. 
Companies that say they’ll <a href="https://www.mywishes.co.uk/digital-legacy" rel="noopener noreferrer" target="_blank">send out emails</a> on specific dates after your death so you can still communicate with loved ones, although I don’t know how it would feel to be on the receiving end of such a message, honestly. But the part that feels creepiest to me is the idea of a datafied version of me that sort of lives on after I’m gone. Can you talk a little bit about different ideas people have had about how they can recreate someone after their death? And oh, there was a <a href="https://spectrum.ieee.org/tag/microsoft" target="_self">Microsoft</a> patent that you mentioned in the chapter that was interesting in this way.</p><p style=""><strong>Wong: </strong>Yeah, I mean, I’m really curious about your discomfort with that, but let’s sort of table that. Maybe you can talk a little about that too, because I mean, for me, what really hits home with these sort of digital avatars that act on their own, I guess, in your stead, is that it pushes back on this question of how autonomous we are in the world. And because these bots or these algorithms are designed to interact with the rest of the world, it is a little bit weird, and it speaks to also what we think the edges of human community are.</p><p> So most of the time when we think about death, there’s a way to commemorate a dead person in a community, and sort of there’s a moving on to the rest of the living, while also remembering the person who’s died. But there are ways that human communities have developed to deal with the fact that we’re not all here forever. I think it’s a really interesting anthropological and sociological question when it is possible that people can still participate, at least in digital fora, even though they’re dead. So I think that’s a real question for human community.</p><p>I think that there are questions of dignity. How do we treat these digitized entities? Are they people? 
Are they the person who has died? Are they a different type of entity? Do they need a different classification for legal, political, and social purposes?</p><p>And finally, the other human rights value that I really think this chapter actually pushes on is that question of equality. Not everyone gets to have a digital self because these are actually quite expensive. And also, even if they become more accessible in price, perhaps there are other barriers that prevent certain types of people from wanting to engage in this. So then you have a human community that is populated only by certain types of digital afterlife selves. So there are all these different human rights values questions. And in the process of researching the book, yes, I did come across this Microsoft <a href="https://patents.google.com/patent/US10853717B2/en" rel="noopener noreferrer" target="_blank">patent</a>. They have <a href="https://www.cnn.com/2021/01/27/tech/microsoft-chat-bot-patent/index.html" rel="noopener noreferrer" target="_blank">put things on hold</a> as far as I can tell. There was a bit of publicity around it, several media reports around this patent that had been secured by Microsoft, essentially to create a version of a person living or dead, real or not, based on social data. And they define social data very broadly. It’s really anything you think about when you interact with digital devices these days.</p><p>And I just thought there’s so many concerns with that. One, I mean, who authorizes the use of that kind of data, but then also, how does the machine actually recognize the type of data and what’s appropriate to say and what’s not? And I think that’s the other thing that is not a human rights concern, but it’s a human concern, which is that we all have discretion when we’re living. 
And it’s not clear to me that that’s true if we’re gone and we’ve just left data about what we’ve done.</p><p style=""><strong>Strickland: </strong>Right, and so the Microsoft patent, as far as we know, they’re not acting on it, it’s not going forward, but some versions of this phenomenon have already happened. Can you tell me the story of Roman Mazurenko and what happened to him?</p><p style=""><strong>Wong: </strong>Yeah, so Roman’s story, it’s very tragic and also very compelling. Casey Newton, a reporter, actually wrote a <a href="https://www.theverge.com/a/luka-artificial-intelligence-memorial-roman-mazurenko-bot" rel="noopener noreferrer" target="_blank">really nice profile piece</a>. That’s how I initially got familiar with this case. And I just thought it illustrated so many things. So Roman Mazurenko was a Russian tech entrepreneur who unfortunately died in an accident at a very young age. And he was very much embedded in a very lively community. And so when he died, it left a really big hole, especially for his friend, <a href="https://www.linkedin.com/in/eugenia-kuyda-638a8a1b/" rel="noopener noreferrer" target="_blank">Eugenia Kuyda</a>, and I hope I’m saying her name right, but she was a fellow tech entrepreneur. And because Roman was young, he hadn’t really left a plan, right? And he didn’t actually have a whole lot of ways for his friends to grieve the loss of his life. So she got the idea of setting up a chatbot based on texts that she and Roman had exchanged while he was living. And she got a handful of other family and friends to contribute texts. And she managed to create, by all accounts, a very Roman-like chatbot, which raised a lot of questions. For me, I think in some ways it really helped his friends cope with the loss of him, but also what happens when data are co-created? In this case, it’s very clear. When you send a text message, both sides, or however many people are on the text chain, get a copy of the words. But whose words are they? 
And how do you decide who gets to use them for what purpose?</p><p style=""><strong>Strickland:</strong> Yeah, that is such a compelling case. Yeah, and you asked before why I find the idea creepy of being resurrected in such a digital form. Yeah, for me, it’s kind of like a flattening of a person into something that sort of resembles an AI chatbot. It just feels like losing, I guess, the humanity there. But that may just be my current limited thinking. And maybe when I-- maybe in some decades, I’ll feel much more inclined to continue on if that possibility exists. We’ll see, I guess.</p><p style=""><strong>Wong:</strong> In terms of thinking about your discomfort, I don’t know if there’s a right answer because I think this is such a new thing we’re encountering. And the level of datafication has become so mundane, so granular that on the one hand, I think you’re right, and I agree with you. I think there’s more to human life than just what we do that can be recorded and digitized. On the other hand, it is starting to be one of those things where philosophers and other folks are really thinking about the boundaries: What does it mean to be human? Is it the sum total of our activities and thoughts? Or is there something else, right? Whether you believe in a soul, or what you believe consciousness is, these are all things that are coming into question.</p><p style=""><strong>Strickland:</strong> So trying to think about some of the things that could go wrong with trying to replicate somebody from their data, you mentioned the question of discretion and curating. I think that’s a really important one. If everything I’ve ever said in an email to my partner was then said to my mom, would that be a problem, that kind of thing. But what else could go wrong? 
What are the other kind of technical problems or glitches that you could imagine in that kind of scenario?</p><p style=""><strong>Wong:</strong> I mean, first of all, I think that’s one of the worries I would have, because we don’t tag our data as secret or for family only, right? And so these are things that could come up very readily. But I think there are other just very common concerns like software glitches. Like what happens if there’s a bug in the code and someone, or rather the digital representation of someone, says something totally weird or totally offensive or totally inappropriate? How do we then update our thinking about that person from when they were alive? And is that digital version the same thing as that living person or that deceased person? I think that’s a real judgment call. I think that some other things that might come up are simply that data could get lost, right? Data could get corrupted. And then what? What happens to that digital person? What are the guarantees we might have if someone really wanted to make a digital version of themselves and have that version persist even after they’re physically dead? What would they say if some data got lost? Would that be okay? I mean, I think these are sort of questions that are exactly what we’ve been talking about. What does it mean to be a person? And is it okay if data from a five-year period of your life is lost? Would you still be a complete human representation in digital form?</p><p style=""><strong>Strickland:</strong> Yeah, these are such interesting questions. And you also mentioned in the book the question of whether a digital afterlife person would be sort of frozen in time when they died, or would they be continuing to update with the latest news?</p><p style=""><strong>Wong:</strong> And is that okay? Again, you don’t want to make someone a caricature of themselves if they can’t speak to current events. 
Because sometimes, we think we have these thought experiments, like what would some famous historical figures say about racism or sexism today, for example? Well, if they can’t update with the news, then it’s not really useful. But if they update with the news, that’s also very weird because we’ve never experienced that before in human history, where people who are dead can actually very accurately speak to current events. So it does raise some issues that I think, again, make us uncomfortable because they really push the boundaries of what it means to be human.</p><p style=""><strong>Strickland:</strong> Yeah. And in the chapter, you raised the question of whether a digitally reconstructed person should perhaps have human rights, which is so interesting to think about. I guess I sort of thought of data more as like property or assets. But yeah, how do you think about it?</p><p style=""><strong>Wong: </strong>So I don’t have an answer to that. One of the things I do try to do in the book is to encourage people not to think about data as property or assets in the transactional market sense. Because I think that the data are getting so mundane, so granular, that they really are saying something about personhood. I think it’s really important to think about the fact that these are-- data are not byproducts of us. They are revealing who we are. And so it’s important to recognize the humanity in the data that we are now creating on a second-by-second basis. 
In terms of thinking about the rights of digital persons if they are created, I think that’s a really hard question to answer because anyone who has a very straightforward answer to this is probably not thinking about it in human rights terms.</p><p>And I think that what I’m trying to emphasize in the book is that we have come up with a lot of rights in the global framework that try to preserve a sense of a human life and what it means to live to your fullest potential as an individual. And we try to protect those rights that would enable a person to live to their potential. And the reason they’re rights is because they’re entitlements: they’re obligations that someone has to you. And in our conception now, it’s usually states that have obligations to individuals or groups. So then if you try to move that to thinking about a data person or a digital person, what kind of potential do they live to? Would it be the same as that physical person? Would it be different because they’re data? I mean, I don’t know. And I think this is a question that needs exploration as more of these technologies come to bear. They come to market. People use them. But we’re not thinking about how we treat the data person. How do we interact with a datafied version of a person who existed, or even just a synthesized computer person, a digital version of some being that’s generated, let’s say by a company, based on no real living person? How do we interact with that digital entity? What rights do they have? I don’t know. I don’t know if they have the same kinds of rights that human beings do. So that’s a long way to answer your question, but in a way, that’s exactly what I’m trying to think through in this chapter.</p><p style=""><strong>Strickland:</strong> Yeah. So what would you imagine as sort of next steps for human rights lawyers, regulators, people who work in that space? 
How can they even begin to grapple with these questions?</p><p style=""><strong>Wong:</strong> Okay, so this chapter is one of several explorations of how human rights are affected by datafication and vice versa. So I talk about data rights. I talk about facial recognition technology. And I talk about the role of big tech as well in enforcing human rights. And so I end with a chapter that argues that we need a right, we need a human right to <a href="https://thedataliteracyproject.org/" rel="noopener noreferrer" target="_blank">data literacy</a>, which is tied to our right to education that already exists. And I say this because I think what we all need to do, not just lawmakers and lawyers and such, but what we all need to do is really become conversant in data. Not just digital data. I don’t mean everyone should be a data scientist. That’s not what I mean. I mean we need to understand the importance of data in our society, how digital data, but also just general data, really runs how we think about the world. We’ve become a very analytical and numbers-focused world. And that is something that we need to think about not just from a technical perspective, but from a sociological perspective, and also from a political perspective.</p><p>So who is making decisions about the types of data that are being created? How are we using those? Who are those uses benefiting? And who are they hurting? And really think about the process of data. So, again, back to this co-creation idea that there is a data collector and there’re data subjects. And those are different populations often. But we need to think about the power dynamic and the differences between those, between collectors and subjects. And this is something I talk a lot about in the book. 
But also, I think we need to think about the process of data making and how it is that collectors make different priority choices over selecting some types of characteristics to record and not others.</p><p>And so once we kind of understand that, once we have sort of this more data-literate society, I think it’ll make it easier, perhaps, to answer some of these really big questions in this chapter about death. What do we do? I mean if everyone was more data literate, maybe we could enable people to make choices about what happens to their data when they die. Maybe they want to have these digital entities floating around. And so then we would need to decide how to treat those entities, how to include those entities or exclude them. But right now, I do think people are making choices, or would be making choices, based on a lack of support. When we die, there are not a lot of options right now. Or they think it’s interesting, or they want to be around for their grandkids. But at what cost? I think that’s really important, and it hasn’t been addressed in the way we think about this stuff.</p><p style=""><strong>Strickland:</strong> Maybe to end with a practical question: Would you recommend that people make something like a <a href="https://www.everplans.com/articles/digital-cheat-sheet-how-to-create-a-digital-estate-plan" rel="noopener noreferrer" target="_blank">digital estate plan</a> to sort of set forth their wishes for how their data is used or repurposed or deleted after their demise?</p><p style=""><strong>Wong: </strong>I think people should think very hard about the types of digital data they’re leaving behind. I mean let’s take it out of the realm of the morbid. I think it’s really about what we do now in life, right? What kind of digital footprint are you creating on a daily basis? And is that acceptable to you? 
And I think in terms of what happens after you’re gone, I mean, we do have to make decisions about who gets your passwords, right? Who has the decision-making power to delete your profiles or not? And I think that’s a good thing. I think people should probably talk about this with their families. But at the same time, there’s so much that we can’t control. Even with a digital estate plan, I mean, think about the number of photos you appear in in other people’s accounts. And there are often, you know, multiple people in those pictures. If you didn’t take the picture, whose is it, right? So there’re all these questions again about co-creation that really come up. So, yes, you should be more deliberate about it. Yes, you should try to think about and maybe plan for the things you can control. But also know that because data are effectively forever, even the best-laid digital estate plan right now is not going to quite capture all the ways in which you exist as data.</p><p style=""><strong>Strickland: </strong>Excellent. Well, Wendy, thank you so much for talking me through all this. I think it’s absolutely fascinating stuff, really appreciate your time.</p><p style=""><strong>Wong: </strong>It was a great conversation.</p><strong>Strickland:</strong> That was <a href="https://www.wendyhwong.com/" rel="noopener noreferrer" target="_blank">Wendy H. Wong</a> speaking to me about the digital afterlife industry, a topic she covers in her book, <a href="https://mitpress.mit.edu/9780262048576/we-the-data/" rel="noopener noreferrer" target="_blank"><em>We the Data: Human Rights in a Digital Age</em></a>, just out from MIT Press. If you want to learn more, we ran <a href="https://spectrum.ieee.org/digital-afterlife" target="_self">a book excerpt</a> in <em>IEEE Spectrum</em>’s November issue, and we’ve linked to it in the show notes. 
I’m Eliza Strickland, and I hope you’ll join us next time on <em>Fixing the Future</em>.]]></description><pubDate>Wed, 18 Oct 2023 09:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/datafication-and-digital-reanimation</guid><category>Social-media</category><category>Type-podcast</category><category>Fixing-the-future</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/49468405/origin.webp"/></item><item><title>The Future of Moore's Law Is Inside This Willy Wonka Machine</title><link>https://spectrum.ieee.org/the-moores-law-machine</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=48173225&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/1cdb9171" width="100%"></iframe><h3 style="">Transcript</h3><p style=""><strong>Stephen Cass:</strong> Hello and welcome to <em>Fixing the Future</em>, an <em>IEEE Spectrum</em> podcast, where we look at concrete solutions to some big problems. I’m your host, <a href="https://spectrum.ieee.org/u/stephen-cass" target="_self">Stephen Cass</a>, a senior editor at <em>IEEE Spectrum</em>. And before we start, I just want to tell you that you can get the latest coverage from some of <em>Spectrum</em>’s most important beats, including AI, climate change, and robotics by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe. Today, we’re going to be talking about making tiny things even tinier so that we can cram ever more computing power onto silicon chips. And to do that, I’m talking with another Spectrumite, senior editor, <a href="https://spectrum.ieee.org/u/samuel-k-moore" target="_self">Sam Moore</a>, who covers the semiconductor beat for us like a <a href="https://en.wikipedia.org/wiki/Field-effect_transistor" target="_blank">field effect transistor</a> covering a depletion layer. Sam, welcome to the show.</p><p style=""><strong>Samuel K. Moore:</strong> Thank you, Stephen. Great to be here.</p><p style=""><strong>Cass:</strong> So we often talk about <a href="https://spectrum.ieee.org/special-reports/50-years-of-moores-law/" target="_self">Moore’s law</a>, no relation, on this show, and the current state of it, and how we always seem to be talking about the end of Moore’s law. And yet, it keeps going. So can you talk a little about what the current state is and these new ideas for pushing that boat further down the stream?</p><p style=""><strong>Moore:</strong> Sure thing. Yes, Moore’s law is slowing down. 
That’s a sort of definitive fact. It is getting harder and more expensive to make more transistors on a given area of silicon. But it is continuing, and there’s a lot of effort to make that happen. Right now, we’re sort of around the 200 million transistors per square millimeter range, and they’re going to continue to keep trying to make that smaller. You hear a lot about sort of five-nanometer node, three-nanometer node, and stuff like that. You want to keep in mind that these names actually have nothing to do with the size of anything on the chip. The five-nanometer-node chips generally have their smallest distance between wires at around 20 to 25 nanometers. So it’s all just a name. But they are going to continue to name new things and make new processes and make things even smaller.</p><p style=""><strong>Cass:</strong> So what do these names relate to at all then?</p><p style=""><strong>Moore:</strong> It’s historical. There was a time when they actually meant something. That measurement, it’s called the metal pitch. Basically, the distance between two wires used to actually be what they named things after. But that kind of broke down in the late ’90s or so. And so ever since then, it’s just been sort of a name.</p><p style=""><strong>Cass:</strong> And what is it based on?</p><p style=""><strong>Moore: </strong>Sorry. So what is it based on now?</p><p style=""><strong>Cass: </strong>Yeah.</p><p style=""><strong>Moore:</strong> Oh, well, they kept cutting that distance in half and in half again. And then they just kind of continued using that sort of division for the name, even though it wasn’t actually related to the size of things on the transistor.</p><p style=""><strong>Cass: </strong>It’s just like a general, “We are getting smaller kind of—”</p><p style=""><strong>Moore:</strong> Yeah, it’s just kind of marketing. 
And there are so few companies who can actually do these really, really cutting-edge chips, so they could call them anything. And now they are. I mean, <a href="https://www.tsmc.com/english" target="_blank">TSMC</a> now calls its process N5. <a href="https://www.intel.com/content/www/us/en/homepage.html" rel="noopener noreferrer" target="_blank">Intel</a> is going to be— their next generation or the generation after that will be like 20A. I think the A’s for Angstrom.</p><p style=""><strong>Cass:</strong> So for you, the real measure, though, is that one you said earlier, which is how many million transistors you’re going to get per square millimeter.</p><p style=""><strong>Moore: </strong>Yeah. That’s really what matters. It’s just how many you can pack in.</p><p style=""><strong>Cass: </strong>So in the most recent issue of <em>Spectrum</em>, we had a fantastic feature that you edited <a href="https://spectrum.ieee.org/high-na-euv" target="_self">called the Moore’s Law Machine</a> about some of these efforts to sort of keep Moore’s law going with this fantastically elaborate device. And this was written by <a href="https://www.linkedin.com/in/jan-van-schoot/" rel="noopener noreferrer" target="_blank">Jan van Schoot</a>. So perhaps you could take me through this technology, which is called extreme ultraviolet, which sounds like a soda I might buy, but yeah.</p><p style=""><strong>Moore: </strong>Sorry. Well, they could have called it soft X-rays. But extreme ultraviolet is definitely cooler. So extreme ultraviolet lithography is how you make the— it’s the main machine involved in the latest two generations of cutting-edge semiconductor chips. So without it, you wouldn’t have your iPhone 12 through 15, I believe, the <a href="https://www.nvidia.com/en-us/data-center/h100/" rel="noopener noreferrer" target="_blank">NVIDIA H100</a>, that GPU that everybody’s trying to get their hands on to do their AI. 
You wouldn’t have at least one of the top 10 supercomputers and probably none of the next generation of them. This is the critical machine, and it is made by <a href="https://www.asml.com/en" rel="noopener noreferrer" target="_blank">one company in the Netherlands</a>. And it is fantastically complicated. Let me sort of tell you first what it does, and then I’m just going to give you some weird superlatives about it. What lithography is, is sort of, basically, you’ve got a pattern that you want to project onto the chip that will eventually make all the circuits and transistors and things like that. And with extreme ultraviolet lithography, you are using a wavelength of light that’s only 13 and a half nanometers long. This is a huge jump from what was used in the previous generation, which was 193 nanometers, which was called deep ultraviolet. It’s an enormous jump. It took more than 20 years of R&D to actually get to a machine that works and that is vaguely affordable. And when I say vaguely affordable, I really mean vaguely affordable. Each machine is more than $100 million. It’s got 100,000 or more components in it. It consumes a megawatt of electricity just so it can deliver a couple of hundred watts of this extreme ultraviolet light onto the silicon wafer.</p><p>The thing weighs like 180 tons. I mean, it’s massive. The current generation is like trailer size. I saw one of them being put together at a fab in upstate New York about five years ago. And it’s so big everybody looks like an <a href="https://www.youtube.com/watch?v=QkC8wPSmcPg" rel="noopener noreferrer" target="_blank">Oompa Loompa</a>. I mean, it’s as if the Oompa Loompas were the best in the world at what they did and the chocolate factory cost a billion dollars. It’s just an amazing machine. And the next version of this machine, which is sort of what we’re going to talk about today, is more than a third larger than today’s. 
So it’s just massive, complicated, super expensive, hard to get your hands on. And I want to tell you how they actually made it better.</p><p style=""><strong>Cass:</strong> So inside this giant trailer machine, there are some really crazy components, including how they actually make these soft X-ray/extreme ultraviolet light beams. And it involves molten metal and a carbon dioxide laser.</p><p style=""><strong>Moore:</strong> Yeah, it is the most bananas process you can kind of think of. So you’ve got a vacuum chamber and a little— I don’t know what to call it, but it’s spitting tiny molten drops of tin. And they shoot across the vacuum chamber, and then you hit that tin with your 40 kilowatt laser. You blast it into a plasma. And then this plasma glows in all kinds of fantastic colors. But the optics collect the 13 and a half nanometers that you actually want to use and project it into the guts of the machine itself.</p><p style=""><strong>Cass:</strong> This really does sound very Wonka-ish.</p><p style=""><strong>Moore:</strong> Yeah, it feels like there should be an easier way to do this, but apparently, there isn’t.</p><p style=""><strong>Cass:</strong> So these things have to be vacuum-sealed because this light gets absorbed by the air. And what are some of the other— why is this machine so big? Because it seems like you’ve got like a little tiny pattern. You’ve got little tiny chips, okay? These little tiny droplets. Why is it so big?</p><p style=""><strong>Moore:</strong> Well, a lot of it is actually the chamber containing the optics, which are just insanely precise. The mirrors are fantastically expensive. These aren’t just ordinary mirrors. They’re multiple layers of alternating exotic stuff in order to get this kind of light to reflect in the right direction with any efficiency because UV is absorbed by tons and tons of stuff, including air. 
And so a lot of it is just getting the light where it needs to go efficiently without disturbing any of the patterns that you actually want to project. And then there’s a lot of it that’s also just handling the wafers and handling the masks, which when they’re actually sort of in position, they got to be handled at nanometer precision. So these are incredibly fine moving machines.</p><p style=""><strong>Cass:</strong> So one of the big challenges I think that Schoot talked about in the article was, “Yes, you can set this up, and you can get the lasers going, the machine’s going, but your throughput is going to be very, very, very low, uneconomical for the size of the machine unless you try a couple of other tricks on top of the, again, molten droplets being blasted by a very powerful carbon dioxide laser,” which I know I’m hung up on, but tell me a little bit more about these other tricks.</p><p style=""><strong>Moore: </strong>Sure. Okay. So as you are a little hung up on— one of the biggest problems they had to solve just to get to the first generation was to make the tin explode in more brilliance so that you could get just that couple hundred watts of light because, the dimmer it is, the longer you have to expose the wafer. And so it’s all about throughput. That problem is basically solved. But in order to continue Moore’s law, you don’t just want this. You’ve got this light, but you want to actually keep making smaller and smaller features with this light. To do that, there’s three knobs that you can turn. One was the big knob of changing the wavelength, which makes sense. Smaller light, better resolution, totally straightforward. Two other knobs. One is kind of difficult to explain. It’s a bunch of optical tricks that you can do, which might include as much as projecting two patterns serially to get one pattern at the end or just making things look weird so that they look less weird when they get to the silicon wafer. 
And I can talk more about that later.</p><p>But the knob that they are turning with this newest machine that is currently being built in Belgium right now is to increase the numerical aperture. That’s sort of the angle of the light that you can operate within the optical system. Historically, they’ve turned all three knobs. Numerical aperture is one that actually gives you a really good return, historically. And so they really wanted to do this. They’re right around 13.5 nanometer resolution now, but if they want to dip down below, they’re going to have to do high-numerical-aperture extreme ultraviolet lithography. This causes a cascade of problems when you’re designing the system. This machine is already fantastically complicated. But as with any optics, you tweak something here, it’s going to have some other effect later on. So let me go through the cascade of the problems that they had to solve in order to make high NA extreme ultraviolet lithography.</p><p>Okay. So first, you want the numerical aperture increase at the wafer itself. That’s where you’ll get the resolution. But that means you also have to increase it at the mask. Now, the mask is where you store the pattern. So you bounce the light off the mask, and it goes through the optics, and then it lands on the silicon wafer, and that’s your pattern. Here’s the thing. So you got to bounce the light onto the wafer— sorry, onto the mask and then off of the mask. And here’s where you have sort of a <em>Ghostbusters</em> moment. Those two streams can’t cross. You think bad things will happen. <a href="https://www.youtube.com/watch?v=jyaLZHiJJnE" rel="noopener noreferrer" target="_blank">It would be bad</a>, I think, is the line, right? So you can’t cross the streams. So that means you have to angle them away from each other, okay? But you can’t angle them away from each other too much because these really specialized mirrors only work up to about 11 degrees. 
And in order to just— if you just wanted to do this without any adjustments, you’d need 18 degrees. So then they’re like, “Okay, well, now we’ve got to solve. We’re going to need this angle. There’s no getting around it.” So the way they solved it was by increasing the demagnification, which I know sounds kind of confusing, but basically, it’s shrinking stuff down a lot. They increased it by like eightfold or something like that.</p><p>So like, “Okay, hey, problem solved.” But not really because now your pattern on the wafer is really small. It’s like a postage stamp instead of— not your pattern, sorry. The amount of wafer that you can project onto all at once, super small. It’s like a postage stamp, and that means that you have to do more postage stamps, which means that you’re—</p><p style=""><strong>Cass:</strong> Because these wafers are large.</p><p style=""><strong>Moore:</strong> Yeah, they are 300 millimeters across and so dinner plate size or so. And so if you can only do a little bit at a time, it’s going to take you longer to do a wafer. And then it basically becomes so expensive, it’s not even worth it. So they had to solve that problem by doing something kind of weird. It was sort of like kind of funhouse mirror effects. Basically, they increased the demagnification in only one direction. So they came up with these specialized mirrors that kind of would stretch things out [laughter] and shrink them. And it had weird effects. I mean, you actually have to make the mask stretched out. So the pattern that’s on the mask is kind of this funhouse mirror version of what you want on the wafer. But amazingly, that actually does it. You still wind up with a little bit smaller than you’d like of a projection onto the wafer, but it’s acceptable as long as you increase the acceleration of how fast things are moving through the machine. So what? Five problems to solve?</p><p style=""><strong>Cass: </strong>Yeah, I think so, yeah. Yeah. 
But this effect, it sounds a little bit like in the old days before we all had widescreen, flat-screen TVs, sometimes when you were trying to show a cinema movie on a TV and suddenly, the aspect ratio would get really weird and distorted because they were having to squeeze in on one axis to make it all fit, especially when the credits would roll and they would get all distorted in one direction. And that kind of reminds me of that. But these are big machines. And you’ve done some other reporting, though, on some of the side effects of handling these big machines, which is how to operate them in a more sort of environmentally friendly way. And that was the work of this company called, I think, <a href="https://www.edwardsvacuum.com/en-us" rel="noopener noreferrer" target="_blank">Edwards</a> in England. So can you tell me a little bit about that?</p><p style=""><strong>Moore: </strong>Yeah. So you remember how I said everything has to happen in a vacuum on the inside of this? Kind of sort of not. There is a very small sort of flow of— I mean, sorry. It seems like a small flow of hydrogen, but it’s a really big machine, so it’s actually 600 liters per minute. [laughter] But this hydrogen is there for a couple of reasons. Everything in there is super delicate. You don’t want anything to get on the mirrors or on the mask or anything like that. But you are blasting molten tin in a chamber, and you have other chemicals that are involved in chip making and stuff like that, and you need to kind of sweep them away. And so that’s what this hydrogen is for. And you think, “Oh, hey, hydrogen. That’s green.” Not yet, actually. Most hydrogen is actually not made in any green process. It’s actually made by a chemical reaction between water and methane, so not great. And 600 standard liters per minute is actually kind of a lot. 
What they’re doing with it currently is they just burn it because you just get water, and all of the nasties that it’s picked up just kind of fall out. And that’s—</p><p style=""><strong>Cass:</strong> Kind of a smoke stack at the side with a flame on top, [laughter] and it’s even more Willy Wonka. But yeah, okay. So they’re just burning off the hydrogen.</p><p style=""><strong>Moore:</strong> Right. But that’s super wasteful. So what Edwards worked on was a system that can recycle the hydrogen. It’s actually pretty cool. It’s like a reverse fuel cell kind of. The used hydrogen and the icky components that it’s picked up along the way basically go through one side, get ionized. Then an electric field sort of forces those protons through a proton exchange membrane. They come out of a membrane, excuse me. They come out the other side, recombine with electrons. You get pure hydrogen to go back into your process. And all the awful stuff stays on the other side. Yeah. So it works pretty good. They set one up at a nanoscience research organization called <a href="https://www.imec-int.com/en" rel="noopener noreferrer" target="_blank">IMEC</a>, which is kind of a key European research house. And it managed to recycle 70 to 80 percent of the hydrogen in their EUV machine. So now they just have to convince the big chip makers to adopt it as well.</p><p style=""><strong>Cass: </strong>So with all of these technologies— and I want to turn to sort of a competing technology in a moment, but for these technologies, how long do you think it’ll be before we see chips in our smartphones and our computers made with this new technology?</p><p style=""><strong>Moore:</strong> Right. With high NA EUV, things that are made in 2025 will probably start to— at least the chips themselves will be made in 2025. It takes months after that for them to be in systems, but that’s probably in time for yet another NVIDIA GPU. So AI is driving a lot of the demand for particularly this most cutting edge. 
And so I’d expect to see it in sort of the generation of AI chips that are sort of made in 2025, ‘26. Also, Apple is always at the cutting edge. They always want the newest chip manufacturing techniques. So whatever iPhone [laughter] comes out in the latter half of the year will almost certainly involve this.</p><p style=""><strong>Cass:</strong> And that’s actually a perfect segue because you mentioned NVIDIA there. And NVIDIA is looking at enabling another approach to squeezing things down and keeping Moore’s law moving along, which is inverse lithography. So can you tell me a little bit about that? And why the fact that it’s an AI company works out well for them? Because they’re a chip maker who happens to make like the AI chips.</p><p style=""><strong>Moore:</strong> Right. So let me sort of give us a little more context since you started. So NVIDIA, actually, they design the most in-demand AI chip in the world. Everybody wants their hands on an H100, which is just the current generation. NVIDIA designs that chip, though. It’s manufactured by TSMC, which, frankly, kind of dominates the world of the most cutting-edge chips right now. So they work closely together now because NVIDIA is probably one of their biggest customers. So if you can kind of go back for a second, remember I told you about those three knobs you can turn to make lithography better, to make your precision and your resolution better. One of those knobs was this weird one called K1. It was sort of the process stuff that you can do. So what NVIDIA has done is it’s made one of those process tricks much easier to compute. It’s a technique called inverse lithography. And here’s the thing. You might think that if you wanted to project, say, like a plus sign, something that was shaped like a plus sign onto a silicon wafer, on your mask, you’d put a plus, and then you’d get a smaller plus when it got to the silicon. And that’s not the case. 
There’s enough distortions and other just stuff that you have to worry about optically when you’re dealing with this kind of operation below the wavelength of light that you’re using, that you have to do things like add little sort of dog ears at the end of the plus sign to make it look like a plus when it gets there. Those things have had to be progressively more complicated as we’ve kind of driven Moore’s law to its limits.</p><p>So now that plus sign would actually sort of look like— if you put it in a kaleidoscope and kind of turned it, it’s just this massive weird stuff that you have to put on the mask in order to get your simple plus sign at the wafer. Now, those tricks are actually kind of really hard to do computationally. So it’s the idea that like, “Okay, if I want this plus on the wafer, what do I have to have on the mask?” And it’s so computationally difficult that we’re talking like weeks of just— we’ve got a massive computer, and it’s going to just sit there for a couple of weeks and try to figure out what that shape should be. What NVIDIA has done is it’s come up with a system that turned that two weeks into an overnight job. And the thing that was that it used to be— it used to be a job for CPUs. My guess is it was instructionally complicated enough that it was not sort of inherently of the parallel nature that GPUs were on. So NVIDIA did a lot of work and came up with algorithms that are just perfectly fitted to a GPU. And so, basically, they did in the work— sorry, what would have taken 40,000 CPUs they did with 500 GPUs and two weeks versus overnight, which is actually— that’s huge. That eliminates a big bottleneck in getting your chip to market, for one thing. 
It allows you to use this really computationally expensive technique in more places rather than reserving it for the spots of the chip that were just really difficult.</p><p>And from the perspective of an environmental benefit, it’s 5 megawatts of power in the computing system versus 35 megawatts, which is not insubstantial. So yeah, this is a thing that they— this computational lithography system, it’s called cuLitho. They introduced it, I think, in the early summer or late spring, and they’ve got <a href="https://www.synopsys.com/" rel="noopener noreferrer" target="_blank">Synopsys</a>, one of the electronic design automation companies, bought in. TSMC has been working on it with them. And of course, ASML, which makes the EUV machine in question, and all the other lithography equipment makers have signed on as well. So it should really be making a difference both environmentally and in terms of getting chips done faster.</p><p style=""><strong>Cass:</strong> So just to wrap up, we’ve been talking about a lot of technologies that are actually very close to being deployed. Is there anything you’re seeing in the lab that’s further out that might help us like in the 2030s, basically?</p><p style=""><strong>Moore:</strong> There’s no clear answer to whether there’ll be sort of another wavelength of light that we use, and it seems kind of unlikely, actually. Even 13.5 nanometers is not that many atoms of material when you get down to it. So our ability to sort of shrink things down in two dimensions, it really is getting towards the end. And so transistor architecture is starting to go 3D. Or rather, in the lab, it’s starting to go 3D. But this seems like the path that everyone has chosen. So now there’s a new kind of transistor. I believe Samsung started using it in production last year, maybe TSMC this year. I might have those wrong, but they’re both well into this new structure. 
<a href="https://spectrum.ieee.org/the-nanosheet-transistor-is-the-next-and-maybe-last-step-in-moores-law" target="_blank">It’s called a nanosheet</a>. And Intel is moving to it at the end of 2024. And the thing about the nanosheet is that it’s sort of conducive to making a second transistor right on top of it. So instead of trying to squeeze things together in two dimensions, we’re going to start adding layers. In addition to just at the transistor level making it 3D, we’ve already got quite a lot of work going on right now and quite a lot of production chips that involve 3D packaging, which is just taking one chip and stacking it on top of another in order to kind of make a superchip. And that’s happening now in production chips. So yeah, the future is three-dimensional.</p><p style=""><strong>Cass: </strong>Well, that was fantastic, Sam. Thank you so much for talking with us today.</p><p style=""><strong>Moore: </strong>It was a pleasure, Stephen, as always.</p><p style=""><strong>Cass:</strong> So today we were talking with Sam Moore, senior editor at <em>IEEE Spectrum</em>, about extreme ultraviolet and other technologies to keep transistors getting ever smaller on computer chips. For <em>IEEE Spectrum</em>, I’m Stephen Cass, and I hope you’ll join us next time on <em>Fixing the Future</em>.</p>]]></description><pubDate>Wed, 04 Oct 2023 09:00:05 +0000</pubDate><guid>https://spectrum.ieee.org/the-moores-law-machine</guid><category>Type-podcast</category><category>Extreme-ultraviolet</category><category>Nvidia</category><category>Semiconductors</category><category>Moore-s-law</category><category>Chip-manufacturing</category><category>Asml</category><category>Fixing-the-future</category><dc:creator>Stephen Cass</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/48173225/origin.webp"/></item><item><title>Drones That Can Fly Better Than You Can</title><link>https://spectrum.ieee.org/autonomous-drones</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/chatbot-podcast-logo-showing-two-robot-heads-facing-each-other.jpg?id=36416885&width=980"/><br/><br/><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="fdf710186153980340a30cc6cf858cfe" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/Go0QMXnlIxs?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
<small class="image-media media-caption" placeholder="Add Photo Caption...">Episode 3: Drones That Can Fly Better Than You Can</small>
<small class="image-media media-photo-credit" placeholder="Add Photo Credit..."><a href="https://youtu.be/Go0QMXnlIxs" target="_blank"><br/>
</a></small>
</p><p>
<strong>Evan Ackerman: </strong>I’m Evan Ackerman, and welcome to <em>Chatbot</em>, a new podcast from <em>IEEE Spectrum</em> where robotics experts interview each other about things that they find fascinating. On this episode of <em>Chatbot</em>, we’ll be talking with <a href="https://rpg.ifi.uzh.ch/people_scaramuzza.html" rel="noopener noreferrer" target="_blank">Davide Scaramuzza</a> and <a href="https://www.skydio.com/company" rel="noopener noreferrer" target="_blank">Adam Bry</a> about agile autonomous drones. Adam Bry is the CEO of <a href="https://www.skydio.com/" rel="noopener noreferrer" target="_blank">Skydio</a>, a company that makes consumer camera drones with <a href="https://spectrum.ieee.org/skydio-2-review-this-is-the-drone-you-want-to-fly" target="_self">an astonishing amount of skill at autonomous tracking and obstacle avoidance</a>. The foundation for Skydio’s drones can be traced back to <a href="https://spectrum.ieee.org/skydio-camera-drone-autonomous-flying" target="_self">Adam’s work on autonomous agile drones at MIT</a>, and after spending a few years at Google working on <a href="https://wing.com/" rel="noopener noreferrer" target="_blank">Project Wing’s delivery drones</a>, Adam cofounded Skydio in 2014. Skydio is currently on their third generation of consumer drones, and earlier this year, the company brought on three PhD students from Davide’s lab to expand their autonomy team. Davide Scaramuzza directs the <a href="https://rpg.ifi.uzh.ch/" rel="noopener noreferrer" target="_blank">Robotics and Perception group at the University of Zürich</a>. His lab is best known for developing extremely agile drones that can autonomously navigate through complex environments at very high speeds. Faster, it turns out, than even the best human drone racing champions. 
Davide’s drones rely primarily on computer vision, and <a href="https://spectrum.ieee.org/event-camera-helps-drone-dodge-thrown-objects" target="_self">he’s also been exploring potential drone applications for a special kind of camera called an event camera</a>, which is ideal for fast motion under challenging lighting conditions. So Davide, you’ve been doing drone research for a long time now, like a decade, at least, if not more.
</p><p>
<strong>Davide Scaramuzza:</strong> Since 2009. 15 years.
</p><p>
<strong>Ackerman:</strong> So what still fascinates you about drones after so long?
</p><p>
<strong>Scaramuzza: </strong>So what fascinates me about drones is their freedom. So that was the reason why I decided, back then in 2009, to actually move from ground robots—I was working at the time on self-driving cars—to drones. And actually, the trigger was when <a href="https://robotsguide.com/robots/googlecar" rel="noopener noreferrer" target="_blank">Google announced the self-driving car project</a>, and then for me and many researchers, it was clear that actually many things were now transitioning from academia to industry, and so we had to come up with new ideas and things. And then with my PhD adviser at that time [inaudible] we realized, actually, that drones, especially quadcopters, were just coming out, but they were all remote controlled or they were actually using GPS. And so then we said, “What about flying drones autonomously, but with the onboard cameras?” And this had never been done until then. But what fascinates me about drones is the fact that, actually, they can overcome obstacles on the ground very quickly, and especially, this can be very useful for many applications that matter to us all today, like, first of all, search and rescue, but also other things like inspection of difficult infrastructures like bridges, power [inaudible] oil platforms, and so on.
</p><p>
<strong>Ackerman: </strong>And Adam, your drones are doing some of these things, many of these things. And of course, I am fascinated by drones and by what your drone is able to do, but I’m curious. When you introduce it to people who have maybe never seen it, how do you describe, I guess, almost the magic of what it can do?
</p><p>
<strong>Adam Bry:</strong> So the way that we think about it is pretty simple. Our basic goal is to build in the skills of an expert pilot into the drone itself, which involves a little bit of hardware. It means we need sensors that see everything in every direction and we need a powerful computer on board, but is mostly a software problem. And it becomes quite application-specific. So for consumers, for example, our drones can follow and film moving subjects and avoid obstacles and create this incredibly compelling dynamic footage. And the goal there is really what would happen if you had the world’s best drone pilot flying that thing, trying to film something in an interesting, compelling way. We want to make that available to anybody using one of our products, even if they’re not an expert pilot, and even if they’re not at the controls when it’s flying itself. <a href="https://spectrum.ieee.org/skydio-2-review-this-is-the-drone-you-want-to-fly" target="_self">So you can just put it in your hand, tell it to take off, it’ll turn around and start tracking you, and then you can do whatever else you want to do, and the drone takes care of the rest</a>. In the industrial world, it’s entirely different. So <a href="https://www.skydio.com/distribution-network-inspection" rel="noopener noreferrer" target="_blank">for inspection applications</a>, say, for a bridge, you just tell the drone, “Here’s the structure or scene that I care about,” and then we have a product called 3D Scan that will automatically explore it, build a real-time 3D map, and then use that map to take high-resolution photos of the entire structure.
</p><p>
	And to follow on a bit to what Davide was saying, I mean, I think if you sort of abstract away a bit and think about what capability do drones offer, thinking about camera drones, it’s basically you can put an image sensor or, really, any kind of sensor anywhere you want, any time you want, and then the extra thing that we’re bringing in is without needing to have a person there to control it. And I think the combination of all those things together is transformative, and we’re seeing the impact of that in a lot of these applications today, but I think that that really— realizing the full potential is a 10-, 20-year kind of project.
</p><p>
<strong>Ackerman: </strong>It’s interesting when you talk about the way that we can think about the Skydio drone is like having an expert drone pilot to fly this thing, because there’s so much skill involved. And Davide, I know that you’ve been working on very high-performance drones that can maybe challenge even some of these expert pilots in performance. And I’m curious, when expert drone pilots come in and see what your drones can do autonomously for the first time, is it scary for them? Are they just excited? How do they react?
</p><p>
<strong>Scaramuzza:</strong> First of all, actually, they say, “Wow.” So they can not believe what they see. But then they get super excited, but at the same time, nervous. So we started working on autonomous drone racing five years ago, but in the first three years, we have been flying very slowly, like three meters per second. So they were really snails. But then in the last two years is when actually we started really pushing the limits, both in control and planning and perception. So these are our most recent drone, by the way. And now we can really fly at the same level of agility as humans. Not yet at the level to beat human, but we are very, very close. So we started the collaboration with <a href="https://marv-fpv.com/" rel="noopener noreferrer" target="_blank">Marvin, who is the Swiss champion</a>, and he’s only— now he’s 16 years old. So last year he was 15 years old. So he’s a boy. And he actually was very mad at the drone. So he was super, super nervous when he saw this. So he didn’t even smile the first time. He was always saying, “I can do better. I can do better.” So actually, his reaction was quite scared. He was scared, actually, by what the drone was capable of doing, but he knew that, basically, we were using the motion capture. Now [inaudible] try to play in a fair comparison with a fair setting where both the autonomous drone and the human-piloted drone are using both onboard perceptions or egocentric vision, then things might end up differently.
</p><p>
	Because in fact, actually, our vision-based drone, so flying with onboard vision, was quite slow. But actually now, after one year of pushing, we are at a level, actually, that we can fly a vision-based drone at the level of Marvin, and we are even a bit better than Marvin at the current moment, using only onboard vision. So we can fly— in this arena, the space allows us to go up to 72 kilometers per hour. We reached the 72 kilometers per hour, and we even beat Marvin in three consecutive laps so far. So that’s [inaudible]. But we want to now also compete against other pilots, other world champions, and see what’s going to happen.
</p><p>
<strong>Ackerman:</strong> Okay. That’s super impressive.
</p><p>
<strong>Bry:</strong> Can I jump in and ask a question?
</p><p>
<strong>Ackerman: </strong>Yeah, yeah, yeah.
</p><p>
<strong>Bry: </strong>I’m interested if you— I mean, since you’ve spent a lot of time with the expert pilots, if you learn things from the way that they think and fly, or if you just view them as a benchmark to try to beat, and the algorithms are not so much inspired by what they do.
</p><p>
<strong>Scaramuzza:</strong> So we did all these things. So we did it also in a scientific manner. So first, of course, we interviewed them. We asked any sort of question, what type of features are you actually focusing your attention, and so on, how much is the people around you, the supporters actually influencing you, and the hearing the other opponents actually screaming while they control [inaudible] influencing you. So there is all these psychological effects that, of course, influencing pilots during a competition. But then what we tried to do scientifically is to really understand, first of all, what is the latency of a human pilot. So there have been many studies that have been done for car racing, Formula One, back in the 80s and 90s. So basically, they put eye trackers and tried to understand— they tried to understand, basically, what is the latency between what you see until basically you act on your steering wheel. And so we tried to do the same for human pilots. So we basically installed an eye tracking device on our subjects. So we called 20 subjects from all across Switzerland, some people also from outside Switzerland, with different levels of expertise.
</p><p>
	But they were quite good. Okay? We are not talking about median experts, but actually already very good experts. And then we would let them rehearse on the track, and then basically, we were capturing their eye gazes, and then we basically measured the time latency between changes in eye gaze and changes in throttle commands on the joystick. And we measured, and this latency was 220 milliseconds.
</p><p>
<strong>Ackerman: </strong>Wow. That’s high.
</p><p>
<strong>Scaramuzza:</strong> That includes the brain latency and the behavioral latency. So that time to send the control commands, once you process the information, the visual information to the fingers. So—
</p><p>
<strong>Bry:</strong> I think [crosstalk] it might just be worth, for the audience, anchoring that: what’s the typical control latency for a digital control loop? It’s— I mean, I think it’s [crosstalk].
</p><p>
<strong>Scaramuzza:</strong> It’s typically in the— it’s typically in the order of— well, from images to control commands, usually 20 milliseconds, although we can also fly with much higher latencies. It really depends on the speed you want to achieve. But typically, 20 milliseconds. So if you compare 20 milliseconds versus the 220 milliseconds of the human, you can already see that, eventually, the machine should beat the human. Then the other thing that you asked me was, what did we learn from human pilots? So what we learned was— interestingly, we learned that basically they were always pushing the throttle of the joystick at the maximum thrust, but actually, this is—
</p><p>
<strong>Bry:</strong> Because that’s very consistent with optimal control theory.
</p><p>
<strong>Scaramuzza:</strong> Exactly. But what we then realized, and they told us, was that it was interesting for them to observe that actually, for the AI, was better to brake earlier rather than later as the human was actually doing. And we published these results in <a href="https://www.youtube.com/watch?v=ZPI8U1uSJUs" rel="noopener noreferrer" target="_blank">Science Robotics</a> last summer. And we did this actually using an algorithm that computes the time optimal trajectory from the start to the finish through all the gates, and by exploiting the full quadrotor dynamical model. So it’s really using not approximation, not point-mass model, not polynomial trajectories. The full quadrotor model, it takes a lot to compute, let me tell you. It takes like one hour or more, depending on the length of the trajectory, but it does a very good job, to a point that Gabriel Kocher, who works for the Drone Racing League, told us, “Ah, this is very interesting. So I didn’t know, actually, I can push even faster if I start braking before this gate.”
</p><p>
<strong>Bry:</strong> Yeah, it seems like it went the other way around. The optimal control strategy taught the human something.
</p><p>
<strong>Ackerman:</strong> Davide, do you have some questions for Adam?
</p><p>
<strong>Scaramuzza: </strong>Yes. So since you mentioned that basically, one of the scenarios or one of the applications that you are targeting, it is basically cinematography, where basically, you want to take amazing shots at the level of Hollywood, maybe producers, using your autonomous drones. And this is actually very interesting. So what I want to ask you is, in general, so going beyond cinematography, if you look at the performance of autonomous drones in general, it still looks to me that, for generic applications, they are still behind human pilot performance. I’m thinking of beyond cinematography and beyond the racing. I’m thinking of search and rescue operations and many things. So my question to Adam is, do you think that providing a higher level of agility to your platform could potentially unlock new use cases or even extend existing use cases of the Skydio drones?
</p><p>
<strong>Bry:</strong> You’re asking specifically about agility, flight agility, like responsiveness and maneuverability?
</p><p>
<strong>Scaramuzza:</strong> Yes. Yes. Exactly.
</p><p>
<strong>Bry: </strong>I think that it is— I mean, in general, I think that most things with drones have this kind of product property where the more you get better at something, the better it’s going to be for most users, and the more applications will be unlocked. And this is true for a lot of things. It’s true for some things that we even wish it wasn’t true for, like flight time. Like the longer the flight time, the more interesting and cool things people are going to be able to do with it, and there’s kind of no upper limit there. Different use cases, it might taper off, but you’re going to unlock more and more use cases the longer you can fly. I think that agility is one of these parameters where the more, the better, although I will say it’s not the thing that I feel like we’re hitting a ceiling on now in terms of being able to provide value to our users. There are cases within different applications. So for example, search and rescue, being able to fly through a really tight gap or something, where it would be useful. And for capturing cinematic videos, similar story, like being able to fly at high speed through some really challenging course, where I think it would make a difference. So I think that there are areas out there in user groups that we’re currently serving where it would matter, but I don’t think it’s like the— it’s not the thing that I feel like we’re hitting right now in terms of sort of the lowest-hanging fruit to unlock more value for users. Yeah.
</p><p>
<strong>Scaramuzza: </strong>So you believe, though, that in the long term, actually achieving human-level agility would be added value for your drones?
</p><p>
<strong>Bry: </strong>Definitely. Yeah. I mean, one sort of mental model that I think about for the long-term direction of the products is looking at what birds can do. And the agility that birds have and the kinds of maneuvers that that makes them capable of, and being able to land in tricky places, or being able to slip through small gaps, or being able to change direction quickly, that affords them capability that I think is definitely useful to have in drones and would unlock some value. But I think the other really interesting thing is that the autonomy problem spans multiple sort of ranges of hierarchy, and when you get towards the top, there’s human judgment that I think is very— I mean, it’s crucial to a lot of things that people want to do with drones, and it’s very difficult to automate, and I think it’s actually relatively low value to automate. So for example, in a search and rescue mission, a person might have— a search and rescue worker might have very particular context on where somebody is likely to be stuck or maybe be hiding or something that would be very difficult to encode into a drone. They might have some context from a clue that came up earlier in the case or something about the environment or something about the weather.
</p><p>
	And so one of the things that we think a lot about in how we build our products—we’re a company. We’re trying to make useful stuff for people, so we have a pretty pragmatic approach on these fronts— is basically— we’re not religiously committed to automating everything. We’re basically trying to automate the things where we can give the best tool to somebody to then apply the judgment that they have as a person and an operator to get done what they want to get done.
</p><p>
<strong>Scaramuzza: </strong>And actually, yeah, now that you mentioned this, I have another question. So I’ve watched many of your previous tech talks and also interacted with you guys at conferences. So what I learned—and correct me if I’m wrong—is that you’re using a lot of deep learning on the perception side, for 3D reconstruction and semantic understanding. But it seems to me that on the control and planning side, you’re still relying basically on optimal control. And I wanted to ask you, if this is the case, are you happy there with optimal control? We also know that Boston Dynamics is using only optimal control. Actually, they even claim they are not using any deep learning in control and planning. So is this also what you experience? And if this is the case, do you believe that in the future you will be using deep learning also in planning and control, and where exactly do you see the benefits of deep learning there?
</p><p>
<strong>Bry:</strong> Yeah, that’s a super interesting question. So what you described at a high level is essentially right. So our perception stack— and we do a lot of different things in perception, but we’re pretty heavily using deep learning throughout, for semantic understanding, for spatial understanding, and then our planning and control stack is based on more conventional kind of optimal control optimization and full-state feedback control techniques, and it generally works pretty well. Having said that, we did— we put out a blog post on this. We did a research project where we basically did end-to-end— pretty close to an end-to-end learning system where we replaced a good chunk of the planning stack with something that was based on machine learning, and we got it to the point where it was good enough for flight demonstrations. And for the amount of work that we put into it, relative to the capability that we got, I think the results were really compelling. And my general outlook on this stuff— I think that the planning and controls is an area where the models, I think, provide a lot of value. Having a structured model based on physics and first principles does provide a lot of value, and it’s admissible to that kind of modeling. You can write down the mass and the inertia and the rotor parameters, and the physics of quadcopters are such that those things tend to be pretty accurate and tend to work pretty well, and by starting with that structure, you can come up with quite a capable system.
</p><p>
	Having said that, I think that the— to me, the trajectory of machine learning and deep learning is such that eventually I think it will dominate almost everything, because being able to learn based on data and having these representations that are incredibly flexible and can encode sort of subtle relationships that might exist but wouldn’t fall out of a more conventional physics model, I think is really powerful, and then I also think being able to do more end-to-end stuff where subtle sort of second- or third-order perception impact— or second- or third-order perception or real world, physical world things can then trickle through into planning and control actions, I think is also quite powerful. So generally, that’s the direction I see us going, and we’ve done some research on this. And I think the way you’ll see it going is we’ll use sort of the same optimal control structure we’re using now, but we’ll inject more learning into it, and then eventually, the thing might evolve to the point where it looks more like a deep network in end-to-end.
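The structured-model approach Bry describes can be made concrete with a toy sketch: write down the physics (a known mass, gravity) and close the loop with full-state feedback on position and velocity. The function name, gains, and the 1-D altitude scenario are illustrative assumptions, not Skydio's actual controller.

```python
def full_state_feedback_thrust(mass, pos, vel, pos_ref, kp=4.0, kd=2.8, g=9.81):
    """Toy altitude controller in the spirit of a model-based stack:
    a first-principles model (known mass, gravity) plus full-state
    feedback on position and velocity. Gains are illustrative."""
    accel_des = kp * (pos_ref - pos) - kd * vel   # PD law on the full state
    return mass * (accel_des + g)                 # thrust from F = m * (a + g)

# Simulate a 1 kg quadcopter climbing to a 1 m reference altitude.
mass, pos, vel, dt = 1.0, 0.0, 0.0, 0.01
for _ in range(1000):
    thrust = full_state_feedback_thrust(mass, pos, vel, pos_ref=1.0)
    accel = thrust / mass - 9.81       # plant: the same physics the model assumes
    vel += accel * dt
    pos += vel * dt
print(round(pos, 2))  # 1.0 (settled at the reference)
```

Because the plant matches the model exactly here, the loop settles cleanly; the learning-based additions Bry mentions matter precisely when the real vehicle deviates from such a clean model.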
</p><p>
<strong>Scaramuzza: </strong>Now, earlier you mentioned that you foresee that in the future, drones will be flying more agilely, similar to human pilots, and even in tight spaces. You mentioned passing through a narrow gap or even in a small corridor. So when you navigate in tight spaces, of course, ground effect is very strong. So do you guys then model these aerodynamic effects, ground effect— not just ground effect. Do you try to model all possible aerodynamic effects, especially when you fly close to structures?
</p><p>
<strong>Bry: </strong>It’s an interesting question. So today we don’t model— we estimate the wind. We estimate the local wind velocity—and we’ve actually found that we can do that pretty accurately—around the drone, and then the local wind that we’re estimating gets fed back into the control system to compensate. And so that’s kind of like a catch-all bucket for— you could think about ground effect as like a variation— this is not exactly how it works, obviously, but you could think about it as like a variation in the local wind, and our response times on those, like the ability to estimate wind and then feed it back into control, is pretty quick, although it’s not instantaneous. So if we had like a feed forward model where we knew as we got close to structures, “This is how the wind is likely to vary,” we could probably do slightly better. And I think you’re— what you’re pointing at here, I basically agree with. I think the more that you kind of try to squeeze every drop of performance out of these things you’re flying with maximum agility in very dense environments, the more these things start to matter, and I could see us wanting to do something like that in the future, and that stuff’s fun. I think it’s fun when you sort of hit the limit and then you have to invent better new algorithms and bring more information to bear to get the performance that you want.
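The wind-estimation feedback Bry describes can be caricatured in one dimension: attribute the residual between commanded and measured acceleration to wind, low-pass it, and subtract it from the next command. The class name, filter constant, and 1-D setup are assumptions for illustration, not Skydio's implementation.

```python
class WindCompensator:
    """Toy 1-D disturbance observer: measured minus modeled acceleration
    is attributed to wind, low-pass filtered, and subtracted from the next
    command. Feedback, not feed-forward, so compensation lags slightly,
    which is the 'not instantaneous' response Bry mentions."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha          # filter constant: higher = faster but noisier
        self.wind_estimate = 0.0    # estimated wind-induced acceleration (m/s^2)

    def update(self, commanded_accel, measured_accel):
        residual = measured_accel - commanded_accel
        self.wind_estimate += self.alpha * (residual - self.wind_estimate)
        return self.wind_estimate

    def compensated_command(self, desired_accel):
        return desired_accel - self.wind_estimate

comp = WindCompensator()
# A steady 1.0 m/s^2 gust: the estimate converges toward 1.0 over updates.
for _ in range(20):
    est = comp.update(commanded_accel=0.0, measured_accel=1.0)
print(round(est, 2), round(comp.compensated_command(0.0), 2))  # 1.0 -1.0
```

A feed-forward ground-effect model, as discussed above, would instead predict the disturbance from position near a structure before it shows up in the residual.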
</p><p>
	On this— perhaps related. You can tell me. So you guys have done a lot of work with event cameras, and I think that you were— this might not be right, but from what I’ve seen, I think you were one of the first, if not the first, to put event cameras on quadcopters. I’d be very interested in— and you’ve probably told these stories a lot, but I still think it’d be interesting to hear. What steered you towards event cameras? How did you find out about them, and what made you decide to invest in research in them?
</p><p>
<strong>Scaramuzza: </strong>[crosstalk] first of all, let me explain <a href="https://spectrum.ieee.org/drone-with-event-camera-takes-first-autonomous-flight" target="_self">what an event camera is</a>. An event camera is a camera that also has pixels, but unlike a standard camera, an event camera only sends information when there is motion. So if there is no motion, the camera doesn’t stream any information. The camera does this through smart pixels. In a standard camera, every pixel triggers information at the same time, at equidistant time intervals. In an event camera, the pixels are smart and only trigger information whenever they detect motion, usually recorded as a change of intensity. And the stream of events happens asynchronously, so the byproduct is that you don’t get frames; you get a stream of information continuous in time, with microsecond temporal resolution. So one of the key advantages of event cameras is that you can record phenomena that would otherwise take expensive high-speed cameras to perceive. But the key difference from a standard camera is that an event camera works in differential mode. And because it works in differential mode, by capturing per-pixel intensity differences, it consumes very little power, and it also has no motion blur, because it doesn’t accumulate photons over time.
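The per-pixel triggering Scaramuzza describes can be sketched as a frame-difference idealization (a real sensor compares each pixel against its intensity at its own last event, asynchronously, rather than against a previous frame); the function name and threshold value are illustrative.

```python
import numpy as np

def generate_events(log_prev, log_curr, contrast_threshold=0.2):
    """Idealized event-camera model: a pixel fires an event when its
    log-intensity changes by more than the contrast threshold.
    Returns (row, col, polarity) tuples; polarity is +1 for a brightness
    increase, -1 for a decrease. Static pixels emit nothing."""
    diff = log_curr - log_prev
    events = []
    rows, cols = np.where(np.abs(diff) > contrast_threshold)
    for r, c in zip(rows, cols):
        events.append((int(r), int(c), 1 if diff[r, c] > 0 else -1))
    return events

# A static scene produces no events; a moving edge produces a sparse stream.
static = np.zeros((4, 4))
moved = static.copy()
moved[1, 2] = 0.5   # one pixel brightened past the threshold
print(generate_events(static, static))  # []
print(generate_events(static, moved))   # [(1, 2, 1)]
```

The sparsity is the point: power and bandwidth scale with scene dynamics, which is why a mostly static scene costs almost nothing.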
</p><p>
	So I would say that for robotics— because you asked me how I found out. What I really saw as very useful for robotics about event cameras were two particular things. First of all, the very high temporal resolution, because this can be very useful for safety-critical systems. And I’m thinking about drones, but also about avoiding collisions in the automotive setting, because now we are working in automotive settings as well. And also when you have to navigate in low-light environments, where using a standard camera with high exposure times, you would be coping with a lot of motion blur that causes feature loss and other artifacts, like the impossibility of detecting objects and so on. So event cameras excel at this: no motion blur and very low latency. Another thing that could be very interesting, especially for lightweight robotics—and I’m thinking of micro drones—is the fact that they also consume very little power. Just to be on, an event camera consumes one milliwatt, on average, because the power consumption depends on the dynamics of the scene. If nothing moves, the power consumption is negligible. If something moves, it is between one milliwatt and at most 10 milliwatts.
</p><p>
	Now, the interesting thing is that if you then couple event cameras with spiking neuromorphic chips that also consume less than one milliwatt, you can mount them on micro drones, and you can do amazing things, and we started working on it. The problem is how you train spiking networks, but that’s another story. Other interesting things where I see potential applications of event cameras are also, for example— now, think about the keyframe feature of the Skydio drones. What you are doing there is flying the drone around, and then you send the 3D positions and orientations that you would like then [inaudible] to fly faster through. But the images have been captured while the drone is still. So basically, you move the drone to a certain position, you orient it in the direction where later you want it to fly, and then you record the position and orientation, and later, the drone will fly agilely through it. But that means the drone should be able to relocalize fast with respect to this keyframe. Well, at some point, there are failure modes. We already know them: when the illumination goes down and there is motion blur. And this is where I see the event camera could be beneficial. And then other things, of course [crosstalk]—
</p><p>
<strong>Ackerman: </strong>Do you agree with that, Adam?
</p><p>
<strong>Bry: </strong>Say again?
</p><p>
<strong>Ackerman: </strong>Do you agree, Adam?
</p><p>
<strong>Bry: </strong>I guess I’m— and this is why kind of I’m asking the question. I’m very curious about event cameras. When I have kind of the pragmatic hat on of trying to build these systems and make them as useful as possible, I see event cameras as quite complementary to traditional cameras. So it’s hard for me to see a future where, for example, on our products, we would be only using event cameras. But I can certainly imagine a future where, if they were compelling from a size, weight, cost standpoint, we would have them as an additional sensing mode to get a lot of the benefits that Davide is talking about. And I don’t know if that’s a research direction that you guys are thinking about. And in a research context, I think it’s very cool and interesting to see what you can do with just an event camera. I think that the most likely scenario to me is that they would become like a complementary sensor, and there’s probably a lot of interesting things to be done using standard cameras and event cameras side by side and getting the benefits of both, because I think that the context that you get from a conventional camera that’s just giving you full static images of the scene, combined with an event camera, could be quite interesting. You can imagine using the event camera to sharpen and get better fidelity out of the conventional camera, and you could use the event camera for faster response times, but it gives you less of a global picture than the conventional camera. So Davide’s smiling. Maybe I’m— I’m sure he’s thought about all these ideas as well.
</p><p>
<strong>Scaramuzza:</strong> Yeah. We have been working on that exact thing, combining event cameras with standard cameras, for the past three years now. So initially, when we started almost 10 years ago, of course, we only focused on event cameras alone, because it was intellectually very challenging. But the reality is that an event camera—let’s not forget—is a differential sensor. So it’s only complementary with a standard camera. You will never get the full absolute intensity out of an event camera. We showed that you can actually reproduce the grayscale intensity, up to an unknown absolute offset, with very high fidelity, by the way, but it’s only complementary to a standard camera, as you correctly said. So actually, you already mentioned everything we are working on and have already published. For example, you mentioned unblurring blurry frames. This has already been done, not by my group, but by the group of Richard Hartley at the Australian National University in Canberra. And what we also showed in my group last year is that you can generate super slow motion video by combining an event camera with a standard camera, by using the events in the blind time between two frames to interpolate and generate frames at any arbitrary time. And so we showed that we could upsample a low-frame-rate video by a factor of 50, while consuming only one-fortieth of the memory footprint. And this is interesting, because—
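The interpolation idea Scaramuzza describes rests on brightness integration: starting from the last real frame, accumulate the signed contrast of every event up to the desired timestamp. The actual published systems use learned models; this sketch shows only the underlying idea, and the function name, event format, and global contrast value are assumptions.

```python
import numpy as np

def interpolate_frame(log_frame_t0, events, t, contrast=0.2):
    """Sketch of event-based frame interpolation: from the log intensity
    of the last real frame, add contrast * polarity for every event with
    timestamp <= t. Each event is (row, col, polarity, timestamp)."""
    frame = log_frame_t0.copy()
    for r, c, pol, ts in events:
        if ts <= t:
            frame[r, c] += contrast * pol
    return frame

f0 = np.zeros((2, 2))
evts = [(0, 0, +1, 0.001), (0, 0, +1, 0.004), (1, 1, -1, 0.009)]
mid = interpolate_frame(f0, evts, t=0.005)   # only the first two events applied
print(mid[0, 0], mid[1, 1])  # 0.4 0.0
```

Because events carry microsecond timestamps, `t` can be any instant between two frames, which is what makes arbitrary-factor upsampling possible.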
</p><p>
<strong>Bry: </strong>Do you think from— this is a curiosity question. From a hardware standpoint, I’m wondering if it’ll go the next— go even a bit further, like if we’ll just start to see image sensors that do both together. I mean, you could certainly imagine just putting the two pieces of silicon right next to each other, or— I don’t know enough about image sensor design, but even at the pixel level, you could have pixel— like just superimposed on the same piece of silicon. You could have event pixels next to standard accumulation pixels and get both sets of data out of one sensor.
</p><p>
<strong>Scaramuzza: </strong>Exactly. So both things have been done. So—
</p><p>
<strong>Bry:</strong> [crosstalk].
</p><p>
<strong>Scaramuzza:</strong> —the latest one I described, we actually installed an event camera side by side with a very high-resolution standard camera. But there is already an event camera called DAVIS that outputs both frames and events between the frames. This has been available since 2016, but at very low resolution, and only last year did it reach VGA resolution. That’s why we are combining—
</p><p>
<strong>Bry: </strong>That’s like [crosstalk].
</p><p>
<strong>Scaramuzza: </strong>—an event camera with a high-resolution standard camera, because we want to see what we could possibly do one day when these event cameras are also available at [inaudible] resolution together with a standard camera overlaid on the same pixel array. But there is good news, because you also asked me another question about the cost of these cameras. The price, as you know very well, drops as soon as there is a mass product for it. The good news is that Samsung now has a product called <a href="https://www.samsung.com/se/smartthings/camera/smart-things-vision-gp-u999gteeaea/" rel="noopener noreferrer" target="_blank">SmartThings Vision Sensor</a> that is conceived for indoor home monitoring, to detect people falling at home, and this device automatically triggers an emergency call. So this device is using an event camera, and it costs €180, which is much less than the cost of an event camera when you buy it from these companies, which is around €3,000. So that’s very good news. Now, if there are other, bigger applications, we can expect that the price will go down a lot, below even $5. That’s what these companies are openly saying. What I expect, honestly, is that it will follow what we experienced with time-of-flight cameras. The first time-of-flight cameras cost around $15,000, and then 15 years later, they were below $150. I’m thinking of the first Kinect that was time-of-flight and so on. And now we have them in all sorts of smartphones. So it all depends on the market.
</p><p>
<strong>Ackerman:</strong> Maybe one more question from each of you guys, if you’ve got one you’ve been saving for the end.
</p><p>
<strong>Scaramuzza: </strong>Okay. The very last question [inaudible]. Okay. I’ll ask, Adam, and then you tell me if you want to answer or rather not. It’s, of course, about defense. So the question I prepared, I told Evan. I read in the news that <a href="https://www.skydio.com/blog/skydio-raises-230-million-series-e-funding-round" rel="noopener noreferrer" target="_blank">Skydio donated $300,000 worth of drones to Ukraine</a>. So my question is, what are your views on military or dual use of quadcopters, and what is the philosophy of Skydio regarding defense applications of drones? I don’t know if you want to answer.
</p><p>
<strong>Bry:</strong> Yeah, that’s a great question. I’m happy to answer that. So our mission, which we’ve talked about quite publicly, is to make the world more productive, creative, and safe with autonomous flight. And the position that we’ve taken, and which I feel very strongly about, is that working with the militaries of free democracies is very much in alignment and in support of that mission. So going back three or four years, we’ve been working with the US Army. We won the Army’s <a href="https://www.skydio.com/blog/skydio-selected-sole-platform-for-us-army-srr" rel="noopener noreferrer" target="_blank">short-range reconnaissance program</a>, which was essentially a competition to select the official kind of soldier-carried quadcopter for the US Army. And the broader trend there, which I think is really interesting and in line with what we’ve seen in other technology categories, is basically the consumer and civilian technology just raced ahead of the traditional defense systems. The military has been using drones for decades, but their soldier-carried systems were these multi-hundred-thousand-dollar things that are quite clunky, quite difficult to use, not super capable. And our products and other products in the consumer world basically got to the point where they had comparable and, in many cases, superior capability at a fraction of the cost.
</p><p>
	And I think— to the credit of the US military and other departments of defense and ministries of defense around the world, I think people realized that and decided that they were better off going with these kinds of dual-use systems that were predominantly designed and scaled in civilian markets, but also had defense applicability. And that’s what we’ve done as a company. So it’s essentially our consumer civilian product that’s extended and tweaked in a couple of ways, like the radios and some of the security protocols, to serve defense customers. And I’m super proud of the work that we’re doing in Ukraine. So we’ve donated $300,000 worth of systems. At this point, we’ve sold way, way more than that, and we have hundreds of systems in Ukraine that are being used by Ukrainian defense forces, and I think that’s good, important work. The final piece of this that I’ll say is we’ve also decided that we aren’t putting, and won’t put, weapons on our drones. So we’re not going to build actual munition systems. I don’t think there’s anything ethically wrong with that; ultimately, militaries need weapons systems, and those have an important role to play. But it’s just not something that we want to do as a company, and it’s kind of out of step with the dual-use philosophy, which is really how we approach these things.
</p><p>
	I have a question that I’m— it’s aligned with some of what we’ve talked about, but I’m very interested in how you think about and focus the research in your lab, now that this stuff is becoming more and more commercialized. There’s companies like us and others that are building real products based on a lot of the algorithms that have come out of academia. And in general, I think it’s an incredibly exciting time where the pace of progress is accelerating, there’s more and more interesting algorithms out there, and it seems like there’s benefits flowing both ways between research labs and between these companies, but I’m very interested in how you’re thinking about that these days.
</p><p>
<strong>Scaramuzza: </strong>Yes. It’s a very interesting question. So first of all, I think of you also as a robotics company. And so what you are demonstrating is what [inaudible] of robotics in navigation and perception can do, and the fact that you can do it on a drone means you can also do it on other robots. And that is actually a call for us researchers, because it pushes us to think of new avenues where we can contribute. Otherwise, it looks like everything has been done. So what we have been working on in my lab, towards the goal of achieving human-level performance, is asking: how do humans navigate? They don’t do optimal control and geometric 3D reconstruction. We have a brain that does everything end to end, or at least with the [inaudible] subnetworks. So one thing we have been playing with is deep learning, for, yeah, six years now. But in the last two years, we realized that you can do a lot with deep networks, and also, they have some advantages compared to the traditional autonomy architecture of autonomous robots. So what is the standard way to control robots, be it flying or ground? You have [inaudible] estimation. You have perception: so basically, spatial AI, semantic understanding. Then you have localization, path planning, and control.
</p><p>
	Now, all these modules are basically communicating with one another. Of course, you want them to communicate in a smart way, because you also want to plan trajectories that facilitate perception, so you have no motion blur while you navigate, and so on. But somehow, they are always conceived by humans. And so what we are trying to understand is whether you can replace some of these blocks, or even all the blocks up to a certain point, with deep networks. Which begs the question: can you even train a policy end to end that takes as input some sort of sensory input, like either images or even sensory abstractions, and outputs control commands at some level of output abstraction, like [inaudible] or waypoints? And what we found out is that, yes, this can be done. Of course, the problem is that for training these policies, you need a lot of data. And how do you generate this data? You cannot do it by flying drones in the real world. So we started working more and more in simulation. So now we are actually training all these things in simulation, even for forests. And thanks to video game engines like Unity, you can now download a lot of these 3D environments and then deploy your algorithms there, to train and teach a drone to fly in just a bunch of hours, rather than flying and crashing drones in the real world, which is very costly as well. But the problem is that we need better simulators.
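The end-to-end idea (a sensory abstraction in, a control abstraction out) can be sketched as a tiny untrained network; the class name, layer shapes, and waypoint output are illustrative assumptions, not the lab's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyPolicy:
    """Illustrative end-to-end policy head: maps a sensory abstraction
    (here a flat feature vector, e.g. features from a depth image)
    directly to a control abstraction (a 3-D waypoint), replacing the
    hand-designed localization/planning blocks in between."""
    def __init__(self, n_features=16, n_hidden=32, n_out=3):
        # Untrained random weights; in practice these come from
        # large-scale training in simulated environments.
        self.w1 = rng.standard_normal((n_features, n_hidden)) * 0.1
        self.w2 = rng.standard_normal((n_hidden, n_out)) * 0.1

    def act(self, features):
        h = np.tanh(features @ self.w1)   # nonlinearity between layers
        return h @ self.w2                # predicted waypoint (x, y, z)

policy = TinyPolicy()
obs = rng.standard_normal(16)            # stand-in for simulated sensor features
waypoint = policy.act(obs)
print(waypoint.shape)  # (3,)
```

The data-hunger problem Scaramuzza raises is visible even here: the weights are meaningless until trained, and simulation is where the millions of labeled flights come from.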
</p><p>
	We need better simulators, and I’m not just thinking of realism; I think that one is actually somewhat solved. I think we need better physics, like aerodynamic effects and other non-idealities. These are difficult to model, so we are also working on these kinds of things. And then, of course, another big thing would be a navigation policy that is able to abstract and generalize to different types of tasks, and possibly, at some point, you could even give your drone or robot a high-level description of the task, and the drone or the robot would accomplish it. That would be the dream. I think that we, as a robotics community, are moving towards that.
</p><p>
<strong>Bry:</strong> Yeah. I agree. I agree, and I’m excited about it.
</p><p>Ackerman: We’ve been talking with Adam Bry from Skydio and Davide Scaramuzza from the University of Zürich about agile autonomous drones, and thanks again to our guests for joining us. For <em>Chatbot</em> and <em>IEEE Spectrum</em>, I’m Evan Ackerman.</p>]]></description><pubDate>Tue, 03 Oct 2023 10:00:01 +0000</pubDate><guid>https://spectrum.ieee.org/autonomous-drones</guid><category>Drones</category><category>University-of-zurich</category><category>Event-cameras</category><category>Robots</category><category>Skydio</category><category>Simulations</category><category>Chatbot-podcast</category><category>Type-podcast</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/36416885/origin.jpg"/></item><item><title>Creating Domestic Robots That Really Help</title><link>https://spectrum.ieee.org/domestic-robots</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/chatbot-podcast-logo-showing-two-robot-heads-facing-each-other.jpg?id=36416762&width=980"/><br/><br/><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="90b50eec7875be630b223030476bf436" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/QMjEtL9i1zI?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
<small class="image-media media-caption" placeholder="Add Photo Caption...">Episode 2: How Labrador and iRobot Create Domestic Robots That Really Help</small>
<small class="image-media media-photo-credit" placeholder="Add Photo Credit..."><a href="https://youtu.be/QMjEtL9i1zI" target="_blank"><br/>
</a></small>
</p><p>
<strong>Evan Ackerman: </strong>I’m Evan Ackerman, and welcome to ChatBot, a new podcast from<em> IEEE Spectrum</em> where robotics experts interview each other about things that they find fascinating. On this episode of ChatBot, we’ll be talking with Mike Dooley and Chris Jones about useful robots in the home. <a href="https://labradorsystems.com/about/" target="_blank">Mike Dooley is the CEO and co-founder of Labrador Systems</a>, the startup that’s developing an assistive robot in the form of a sort of semi-autonomous mobile table that can help people move things around their homes. Before founding Labrador, Mike led the development of <a href="https://spectrum.ieee.org/review-evolution-robotics-mint-sweeper" target="_self">Evolution Robotics’ innovative floor-cleaning robots</a>. And when <a href="https://spectrum.ieee.org/irobot-sweeps-up-evolution-robotics-for-74-million" target="_self">Evolution was acquired by iRobot in 2012</a>, Mike became iRobot’s VP of product and business development. Labrador Systems is getting ready to launch its first robot, the <a href="https://spectrum.ieee.org/labrador-systems-robot" target="_self">Labrador Retriever</a>, in 2023. <a href="https://www.linkedin.com/in/cjonesrobotics" target="_blank">Chris Jones is the chief technology officer at iRobot</a>, which is arguably one of the most successful commercial robotics companies of all time. Chris has been at iRobot since 2005, and he spent several years as a senior investigator at iRobot research working on some of <a href="https://spectrum.ieee.org/video-tour-all-of-irobots-coolest-stuff" target="_self">iRobot’s more unusual and experimental projects</a>. iRobot Ventures is one of the investors in Labrador Systems. Chris, you were doing some interesting stuff at iRobot back in the day too, that I think a lot of people may not know how diverse iRobot’s robotics projects were.<br/>
</p><p>
<strong>Chris Jones:</strong> I think iRobot as a company, of course, being around since 1990, has done all sorts of things. Toys, commercial robots, consumer, military, industrial, all sorts of different things. But yeah, myself in particular, I spent the first seven, eight years of my time at iRobot doing a lot of super fun, kind of far-out-there research types of projects, a lot of them funded by places like DARPA, and working with some great academic collaborators and, of course, a whole crew of colleagues at iRobot. But yeah, <a href="https://spectrum.ieee.org/irobot-developing-inflatable-robot-arms-inflatable-robots" target="_self">some of those</a> ranged from completely squishy robots to robot arms to robots that could climb mountainsides to robots under the water: all sorts of fun, useful, and really challenging (which is what makes it fun) robot concepts.
</p><p>
<strong>Ackerman:</strong> And those are all getting incorporated to the next generation Roomba, right?
</p><p>
<strong>Jones:</strong> I don’t know that I can comment on—
</p><p>
<strong>Ackerman: </strong>That’s not a no. Yeah. Okay. So Mike, I want to make sure that people who aren’t familiar with Labrador get a good understanding of what you’re working on. So can you describe kind of Labrador’s robot, what it does and why it’s important?
</p><p>
<strong>Mike Dooley:</strong> Yeah. So Labrador, we’re developing <a href="https://spectrum.ieee.org/labrador-systems-robot" target="_self">a robot called the Retriever</a>, and it’s really designed as an extra pair of hands for individuals who have some issue, either with pain, a health issue, or an injury, that impacts their daily activities, particularly in the home. And so this is a robot designed to help people live more independently, to augment their abilities, and to give them back some degree of the autonomy that the issue they’re facing has taken away. And the robot, after previewing at CES, has been called a self-driving shelf. It’s designed to be a mobile platform that’s about the size of a side table but has the ability to carry things as large as a laundry basket, or you can set dinner plates on it. It automatically navigates from place to place. It raises up to countertop height when you’re by the kitchen sink and lowers down when you’re by your armchair. And it has the ability to retrieve, too. So it’s a cross between the robots that are used in warehousing and furniture, mixed together to make something that’s comfortable and safe for the home, but really meant to help folks who have some difficulty moving themselves. It’s meant to give them back some degree of that independence, as well as extend the impact for caregivers.
</p><p>
<strong>Ackerman: </strong>Yeah, I thought that was a fantastic idea when I first saw it at <a href="https://www.ces.tech/" target="_blank">CES</a>, and I’m so glad that you’ve been able to continue working on it. And especially with some support from folks like iRobot, right? Chris, iRobot is an investor in Labrador?
</p><p>
<strong>Jones:</strong> Correct. Through iRobot Ventures, we’re an early investor in Labrador, and we continue to be super excited about what they’re doing. For us, anyone who has great ideas for how robots can help people, in particular assist people in their homes with independent living, is pursuing something we strongly believe is going to be a great application for robots. And when you’re making investments at that earliest stage, a lot of it is about the team, right? Mike and the rest of his team are super compelling. That, paired with a vision for something we believe is a great application for robots, makes it an easy decision to say this is someone we’d like to support. So we love seeing their progress.
</p><p>
<strong>Ackerman:</strong> Yeah, me too.
</p><p>
<strong>Dooley: </strong>And we appreciate your support very much. So yeah.
</p><p>
<strong>Ackerman: </strong>All right, so what do you guys want to talk about? Mike, you want to kick things off?
</p><p>
<strong>Dooley: </strong>I can lead off. Yeah, so in full disclosure, at some point in my life, I was-- Chris, what’s the official name for an iRobot employee? I forgot what they came up with. It’s not iRoboteer, is it?
</p><p>
<strong>Jones:</strong> iRoboteer. Yeah.
</p><p>
<strong>Dooley: </strong>Okay, okay. All right, so I was an iRoboteer in my past life and crossed over with Chris for a number of years. And I know they’ve renovated the building a couple of times now, but a lot of the robots you mentioned at the beginning are on display <a href="https://experience.irobot.com/irobot-education-virtual-museum-tours-2023" target="_blank">in a museum</a>. So my first question to Chris is: can you think of one of those, either one you worked on or not, where you go, “Man, this should have taken off,” or you wish it would have? Because there are a lot in there, so.
</p><p>
<strong>Jones:</strong> Yes, there are a lot. You’re right. We have a museum, and it has been renovated in the last couple of years, Mike, so you should come back and visit and check out the updated museum. How would I answer that? There are so many things in there. I would say one that I have some sentimentality toward, and that I think holds some really compelling promise even though, at least to date, it hasn’t gone anywhere outside of the museum, is related to the squishy robots I was talking about. In my mind, one of the key challenges in unlocking future value in robots, and in particular in autonomous robots in the home, is manipulation: physical manipulation of the environment in the home. Mike and Labrador are doing a little bit of this by being able to maneuver and pick up, carry, and drop off things around the home. But the idea of a robot that’s able to physically grasp objects, pick them up off the floor or off a counter, open and close doors, all of those things, is kind of the Holy Grail, if you can do it cost-effectively and robustly. In the home, there are all sorts of great applications for that. <a href="https://spectrum.ieee.org/universal-jamming-gripper" target="_self">And one of those research projects that’s in the museum was actually something called the Jamming Gripper</a>. Mike, I don’t know if you remember seeing that at all, but this takes me back. And Evan, I’m sure there are some IEEE stories from back in the day on this. This was a very compliant, soft manipulator. It’s not a hand. It’s actually very close to a very soft membrane that’s filled with coffee grounds. So imagine a bag of coffee, right? Very soft and compliant.
</p><p>
	But with vacuum-packed coffee, you pull a vacuum on that bag and it turns rigid in the shape it was in. It’s like a brick, which is a great concept for thinking about robot manipulation. That’s one idea. We spent some research time with some folks in academia, built a huge number of prototypes, and I still feel like there’s something there: a really interesting concept that could help with that more general-purpose manipulation of objects in the home. So Mike, if you want to talk to us about licensing, maybe we can do that for Labrador with all your applications.
</p><p>
<strong>Dooley: </strong>Yeah. Actually, that’s what you should add. It would probably increase your budget dramatically, but you should add live demonstrations to the museum. See if you can have projects to bring some of those back. Because I’m sure I saw it, but I never knew it did that.
</p><p>
<strong>Jones:</strong> Maybe there’s a bit of a thread to continue that question into the first one that came to my mind, Mike, when I was thinking about what to ask. It’s something I have a lot of admiration and respect for in how you do your job, which is that you’re super good at engaging with and listening to users in their context to understand what their problems are, such that you can best articulate, define, or ideate things that could help them address problems they encounter in their everyday life. That then allows you, as a leader, to use that to motivate quick prototype development to get the next level of testing or validation of “what if this?” And those prototypes may or may not involve duct tape, right? Very crude things that are trying to elicit that response or feedback from a user: is this something that would be valuable to you in overcoming some challenges that I’ve observed you having, let’s say, in your home environment? So I’m curious, Mike, how do you think about that process, and how does it translate into shaping a product design or the identification of an opportunity? What have you learned through Labrador? I know you spent a lot of time in people’s homes to do exactly that. So I’m curious, how do you conduct that work? What are you looking for? How does that guide your development process?
</p><p>
<strong>Dooley: </strong>The word for what you’re talking about is customer empathy: are you feeling their pain? Are you understanding their need, and how are you connecting with it? My undergrad’s in psychology, so I was always interested in what makes people think the way they do. I remember an iRobot study going into a home. We were on the last day of testing with somebody, a busy mom, and we were testing <a href="https://www.irobot.com/en_US/braava.html" rel="noopener noreferrer" target="_blank">Braava Jet</a>. It’s a little robot that iRobot sells that’s really good for places with tight spaces, for spraying and scrubbing floors, like kitchens and bathrooms. And the mom said, almost with exhaustion— I asked, “What is it?” She said, “Does this do as good of a job as you could do?” And I think most people from iRobot would admit, “No. Can it match the elbow grease, all the effort and everything I can put into this? No.” But she said, “At least I can set this up, hit a button, and go to sleep. At least it’s getting the job done. It’s doing something, and it gives me my time back.” When you hear that, you realize: people go, “Well, Roomba is just something that cleans for people.” No. Roomba gives people their time back. And once you’re on that channel, then you start thinking about, “Okay, what more can we do with the product that hits that core thing?” So yeah, I think it takes the humbleness to not build the product you want but to build to the need, and also the humbleness about where you can meet that need and where you can’t. Because robotics is hard, and we can’t make Rosey yet, and things like that.
</p><p>
<strong>Ackerman:</strong> Mike, I’m curious, did you have to make compromises like that? Is there an example you could give with Labrador?
</p><p>
<strong>Dooley: </strong>Oh, jeez, all the— yeah. No, Labrador is perfect. No, I mean, we go through that all the time. On Labrador, no, we can’t do everything people want. There are different vocabularies for it: minimum viable product, good enough. There was somebody at Amazon who used a term— I’m going to blank on it. It was like wonderful enough or something, or they have a nicer—
</p><p>
<strong>Jones:</strong> Lovable?
</p><p>
<strong>Dooley: </strong>Lovable. Yeah, lovable enough or something. And I think that’s what you have to remember: on one hand, you have to have this open heart, wanting to help people. On the other, you have to have a really tight wallet, because you just can’t spend enough to meet everything that people want. A classic example: Labrador goes up and down a certain amount of height. Someone in a wheelchair would love it if we would go up to the upper cabinets above the kitchen sink or other locations. When you look at that, mechanically we can, but that creates product realities about stability and tilt testing, and we have to fit those. Chris knows that well with <a href="https://robotsguide.com/robots/ava" rel="noopener noreferrer" target="_blank">Ava</a>, for instance: how heavy the base has to be for every inch you raise the mass above a certain amount. So we have to set a limit. You have to say, “Hey, here’s the envelope. We’re going to go from this height to this height, or we’re going to carry this much, because that’s as much as we can deliver with this sort of function.” And then, is that lovable enough? Is that rewarding enough to people? I think that’s the hard [inaudible], that you have to make these deliveries within constraints. Sometimes when I’m talking to folks who are either outside robotics or very much on the engineering side and not thinking about the product, they tend to think you have to do everything. That’s not how product development works. You have to do just the critical first step, because that makes this a category, and then you can do the next one and the next one. It brings to mind how Roomba has gone through an incredible evolution in its functions, how it worked, and its performance, from the very first version to what Chris and team offer now.
	But if they had tried to do today’s version back then, they wouldn’t have been able to achieve it. And others fail because they probably came at it from the wrong angle. And yeah.
</p><p>
<strong>Jones: </strong>Evan, I think you asked if there was anything operating under constraints. Product development in general, I presume, but certainly robotics, is all about constraints. How do you operate within them? How do you understand where those boundaries are, and how do you make the call on how to constrain your solution so that it’s something feasible for you to do? It has to meet a compelling need, be feasible for you to do, and be something you can robustly deliver. Getting that entire equation to work means you have to reckon with those constraints across the board to find the right solve. Mike, I’m curious. You do your user research, you have that customer empathy, you’ve worked through some of the surprising challenges I’m sure you’ve encountered along the way with Labrador. You ultimately get to a point where you’re able to do pilots in homes. Maybe the duct tape is gone, or at least hidden, and it’s something that looks and feels more like a product, and you’re getting into some type of extended pilot in users’ homes. What are you looking to accomplish with those pilots? What have you learned in going from “I’ve been watching this user in their home with those challenges” to “now I’m leaving something in their home without me being there and expecting them to be able to use it”? What are the benefits or the learnings you encounter in conducting that type of work?
</p><p>
<strong>Dooley: </strong>Yeah, it’s a weird type of experiment, and there are different schools of thought on how you do it. Some people want to go in and research everything to death and be a fly on the wall. And we went through this— I won’t say the source of it, but it was a program we had to go through because of some of the funding we’re getting from another project. In the beginning, they put up a slide with a quote I think is from Steve Jobs, and I’m sure I’m going to butcher it: people don’t know what they want until you show them, or something like that. And they were saying, “Yeah, that’s true for Steve Jobs, but for you, you can really talk to the customer and they’re going to tell you what they need.” I don’t believe that.
</p><p>
<strong>Jones:</strong> They need a faster horse, right? They don’t need a car.
</p><p>
<strong>Dooley: </strong>Yeah, exactly.
</p><p>
<strong>Jones:</strong> They’re going to tell you they need a faster horse.
</p><p>
<strong>Dooley:</strong> Yeah, so I’m in the Steve Jobs camp on that. And it’s not because people aren’t intelligent. It’s just that they’re not in that world of knowing what possibilities you’re talking about. So there’s a soft skill in between: okay, listen to their pain point. What is the difficulty of it? You’ve got a hypothesis: “Okay, out of everything you said, I think there’s an overlap here, and now I want to find out—” and we did that in the beginning. We tried different ways of explaining the concept. The first level was just explaining it over the phone and seeing what people thought of it, testing it almost neutrally: “Hey, here’s an idea.” And then, “Oh, here’s an idea like Roomba, and here’s an idea like Alexa. What do you like or dislike?” Then we built a prototype that was remote-controlled and brought it into their homes, and finally we did the leave-behind. And the whole thing is, how to say it, you’re sort of releasing it to the world and getting out of the way. It’s like letting a kid go play soccer on their own: you’re not yelling or anything; you don’t even watch. You just let it happen. What you’re trying to do is organically look at how people interact with this new reality you’ve created. The robots won’t do this in the future, but right now they talk on Slack, so when a user sends one to the kitchen, I can look up and see, “Hey, user one just sent it to the kitchen, and now they’re sending it to their armchair; they’re probably having an afternoon snack. Oh, they sent it to the laundry room, now over to the closet; they’re doing the laundry.” The thing for us was watching how fast people adopted certain things, and then what they were using it for. And the striking thing that was—
</p><p>
<strong>Jones: </strong>That’s interesting.
</p><p>
<strong>Dooley: </strong>Yeah, go ahead.
</p><p>
<strong>Jones:</strong> I was just going to say, I mean, that’s interesting because I think I’m sure it’s very natural to put the product in someone’s home and kind of have a rigid expectation of, “No, no, this is how you use it. No, no, you’re doing it wrong. Let me show you how you use this.” But what you’re saying is it’s almost, yeah, you’re trying your best to solve their need here, but at some point you kind of leave it there, and now you’re also back into that empathy mode. It’s like, “Now with this tool, how do you use it?” and see kind of what happens.
</p><p>
<strong>Dooley:</strong> I think you said it in a really good way: you’ve changed this variable in the experiment. You’ve introduced this, and now you go back to just observing, just watching what they’re doing with it, being as unintrusive as possible, which is, we’re not there anymore. Yeah, the robot’s logging it and we can see it, but it’s just on them. We’re trying to stay out of the process and see how they engage with it. And the thing, we’ve shared it before, is that we were seeing people use it 90 to 100 times a month, especially after the first month. We were looking at the steady state: would this become a habit or routine, and what were they using it for?
</p><p>
<strong>Jones:</strong> So when you see that, you have a data point of one, or a small number, but you have such a tangible understanding of the impact this seems to be having. As an entrepreneur, that gives you a lot of confidence that may not be visible to people outside the walls just trying to look at what you’re doing in the business. They see one data point, which is harder to grapple with. But you, being that close and understanding that connection between what the product is doing and the need, get a substantial confidence boost: “This is working. We need to scale it. We have to show that this ports to other people in their homes,” but it gives you that confidence.
</p><p>
<strong>Dooley: </strong>Yeah, and then when we take the robots away, because we only have so many and we rotate them, we get the guilt-trip emojis two months later: “I miss my robot. When are you going to build a new one?” and all that. So—
</p><p>
<strong>Jones:</strong> Do people name the robots?
</p><p>
<strong>Dooley:</strong> Yeah. They immediately do that and come up with creative names. One was called Rosey, naturally, but another— I’m forgetting the name she called it. It was inspired by a science fiction story about an AI companion. And there were quite a few different angles on it, because she saw this as her assistant. But yeah, the classic thing at CES is to make a robot with a face and arms that doesn’t really do anything with them but pretends to be humanoid or human-like. We went the entire other way with this. And the fact that people still relate to it that way means— we’re not trying to be cold or dispassionate. We’re just really interested in whether they get that value. Are they reacting to what the robot is doing, not to the halo you’ve dressed it up with?
</p><p>
<strong>Jones: </strong>Yeah, I mean, as you know, it’s the same thing with Roomba or Braava and things like that. People anthropomorphize them or project a personality onto them, but that’s not really there, right, in a strong way. So yeah.
</p><p>
<strong>Dooley:</strong> Yeah, and it’s weird. It’s something people do with robots that they don’t do elsewhere: people don’t usually name their dishwasher or something. But no, I would have-
</p><p>
<strong>Jones: </strong>You don’t?
</p><p>
<strong>Dooley:</strong> Yeah, [inaudible]. I did for a while. The stove got jealous, and then we had this whole thing when the refrigerator got into it.
</p><p>
<strong>Ackerman:</strong> I’ve heard anecdotally that this was true with PackBots; I don’t know if it’s true with Roombas. People want their own robot back. They don’t want you to replace their old robot with a new robot. They want you to fix the old robot and have that same physical robot. It’s that personal connection.
</p><p>
<strong>Jones:</strong> Yeah, certainly with PackBot, on the military robot side for bomb disposal and things like that, you would get technicians with a damaged robot who didn’t want a new robot. They wanted this one fixed, right? Because, again, they anthropomorphize it, or there is some type of a bond there. And I think that’s been true with all of the robots. There’s something about the mobility that embodies them with— people project a personality onto them. They don’t have to be fancy and have arms and faces for people to project that onto them. It seems to be a common trait for any autonomously mobile platform.
</p><p>
<strong>Ackerman:</strong> Yeah. Mike, it was interesting to hear you say that. You’re being very thoughtful about it, so I’m wondering if you, Chris, can address that a little bit too. I don’t know if they do this anymore, but for a while, the robots would speak to you, and I think it was a female voice, if they had an issue or needed to be cleaned. I always found that an interesting choice, because the company is giving the robot a human characteristic that’s very explicit. I’m wondering how much thought went into that, and whether how much you’re willing to encourage people to anthropomorphize has changed over the years.
</p><p>
<strong>Jones: </strong>It’s a good question. That’s evolved over the years, from not so much to more of a vocalization coming from the robot in certain scenarios. It is an important part; for some users, that is a primary way of interacting. But I would say more of that type of feedback these days comes through the mobile experience, through the app, which gives feedback, additional information, and actionable next steps. If you need to empty the dustbin or whatever it is, the app is just a richer place to put that, and a more accepted or common way for it to happen. That’s the direction things have trended, but not because we don’t want to humanize the robot itself. It’s just the practical place where people these days expect it. It’s almost like Mike was saying about the dishwasher and the stove: if everything is trying to talk to you like that, or project its own embodiment into your space, it could be overwhelming. So I think it’s easier to connect people with the right information at the right place and the right time if it’s through the mobile experience.
</p><p>
	But that human-robot interaction, that experience design, is a nuanced and tricky thing. I’m certainly not an expert there myself, but it’s hard to find the right balance, the right mix of what you ask or expect of the user versus what you assume or don’t give them an option on. You don’t want to overload them with too much information, too many options, or too many questions as they try to operate the product. So sometimes you do have to make assumptions and set defaults, which maybe can be changed if there’s really a need, though that might require more digging. And Mike, that was a question I had for you: you have a physically, meaningfully sized product operating autonomously in someone’s home, right?
</p><p>
<strong>Dooley:</strong> Yes.
</p><p>
<strong>Jones:</strong> Roomba can drive around and navigate, and it’s a little more expected that it might bump into some things as it’s trying to clean up against walls or furniture, and it’s small enough that that isn’t an issue. How do you design for a product of the size you’re working on? What went into the human-robot interaction side of it, so that the people who need to use this in their homes, who are not technologists, can take advantage of the great value you’re trying to deliver? It’s got to be super simple. How did you think about that HRI design?
</p><p>
<strong>Dooley:</strong> There’s a lot wrapped into that. I think the bus stop is the first part of it: what’s the simplest metaphor they can use to command it? Everybody can relate to “armchair” or “front door,” that sort of thing. The idea that the robot just goes to these destinations is super simplifying. People get that, and that metaphor, almost in a nanosecond. So that was one part. Then you explain the rules of the road for how the robot goes from place to place: it’s got these bus routes, but they’re elastic, and it can go around you if needed. But there are all these types of interactions to figure out. What happens when you’re coming down the hall and the robot’s coming the other way, and you just walk toward each other? I know in hospitals, robots are programmed to go to the side of the corridor. There’s no side in a home. Those are things we still have to iron out, but there are timeouts and things like that. Eventually, and we’re not doing it yet, it would be great to recognize that’s a person, not a closed door, and respond to it. Right now, we have to tell the users, “Okay, it’ll wait a bit to make sure you’re there, but then it’ll give up. If you really wanted to, you could send it back from your app, you could get out of the way, or you could stop it by doing this.”
</p><p>
	And that will get refined as we get to market, but those interactions, yeah, you’re right, you have this big robot coming down the hall. One of the surprising things was that it’s not just people. One of the women in the pilot had a Border Collie, and Border Collies are, by instinct, bred to herd sheep. The robot’s very quiet, but when she commanded it, the dog would hear it coming down the hall and put its paw out to stop it, and that became its game. It started herding the robot. So it’s really this weird thing, this metaphor you’re getting at.
</p><p>
<strong>Jones:</strong> Robots are pretty stubborn. The robot probably just sat there for like five minutes, like, “Come on. Who’s going to blink?”
</p><p>
<strong>Dooley: </strong>Yeah. Yeah. And the AI we’d love to add— we have to catch up with where you guys are at, or license some of your vision recognition algorithms, because first we’re trying to navigate and avoid obstacles. That’s where all the tech is going, in terms of the design and the tiers of safety we’re doing. But what the user wanted in that case was: if it’s the dog, can you play my voice saying “Get out” or “Move” or “Go away”? Because she sent me a video of this. It was happening to her too: she would send the robot out, the dogs would get all excited, and she’s behind it in her wheelchair. Now the dogs are waiting for her on the other side of the robot, the robot’s wondering what to do, and they’re all in the hall. So there’s this complication of having multiple agents in there.
</p><p>
<strong>Ackerman:</strong> Maybe one more question from each of you guys. Mike, you want to go first?
</p><p>
<strong>Dooley: </strong>I’m trying to think. I have one more. When you have new engineers start— let’s say they haven’t worked on robots before. They might be experienced, coming out of school or from other industries. What is some key thing they learn, or what transformation goes on in their minds, when they finally get in the zone of what it means to develop robots? It’s a really broad question, but there’s sort of a rookie thing.
</p><p>
<strong>Jones: </strong>Yeah. What’s an aha moment that’s common for people new to robotics? I think this is woven throughout this entire conversation: at a macro level, robots are actually hard. It’s difficult to put the entire electromechanical software system together. It’s hard to perceive the world. If a robot’s driving around the home on its own, it needs a pretty good understanding of what’s around it. Is something there, is something not there? The richer that understanding, the more adaptable or personalized the robot can be. But generating that understanding is also hard. Robots have to be built to deal with all of the unanticipated scenarios they’re going to encounter when they’re let out into the wild. I think it’s surprising to a lot of people how long that tail of corner cases ends up being. If you ignore one of them, it can end the product, right? Any one of them, if it rears its head often enough, will make users stop using the product: “Well, this thing doesn’t work, and this has happened twice to me now in the year I’ve had it. I’m kind of done with it.”
</p><p>
	So you really have to grapple with the very long tail of corner cases when the technology hits the real world. I think that’s super surprising for people who are new to robotics. It’s more than a hardware consumer product company, a consumer electronics company. You need to deal with the challenges of perception and mobility in the chaos of the home environment, not the more structured environments on the industrial side. And I think everyone has to go through that learning curve of understanding the impact that can have.
</p><p>
<strong>Dooley: </strong>Yeah. Of the dogs and cats.
</p><p>
<strong>Jones: </strong>Yeah, I mean, who would have thought cats were going to jump on the thing, or Border Collies were going to try to herd it, right? You don’t learn those things until you get products out there. And that’s what I was asking you about, Mike, with the pilots and what you hope to learn from them. You have to take that step if you’re going to start figuring out what those elements are going to look like. It’s very hard to do just intellectually, or on paper, or in the lab. You have to let them out there. So that’s a learning lesson. Mike, maybe a similar question for you, but--
</p><p>
<strong>Ackerman: </strong>This is the last one, so make it a good one.
</p><p>
<strong>Jones: </strong>Yep. The last one, it better be a good one, huh? It’s a similar question for you, but aimed more at an entrepreneur in the robotics space. I’m curious: for a robot company to succeed, there are a lot of, I’ll call them, ecosystem partners that have to be there. Manufacturing, channel or go-to-market partners, funding to support a capital-intensive development process, and many more. What have you learned, and what do people miss, going into robotics development or looking to be a robotics entrepreneur? Which partners are the most important? And I’m not asking you to speak nicely about the financial investor side because iRobot’s an investor; that’s not what I’m after. But what have you learned about which set of partners you’d better not ignore, because if one of them falls through or is ineffective, it’s going to be hard for all the other pieces to come together?
</p><p>
<strong>Dooley: </strong>Yeah, it’s complex. I think just like you said, robots are hard. I think when we got acquired by iRobot and we were having some of the first meetings over— it’s Mike from software. Halloran.
</p><p>
<strong>Ackerman: </strong>This was Evolution Robotics?
</p><p>
<strong>Dooley: </strong>Evolution. Yeah. Mike Halloran from iRobot came to Evolution’s office, and he just said, “Robots are hard. They’re really hard.” And it’s like, that’s the point we knew there was harmony. We were sort of under this thing. And so, with everything Chris is saying, all of that is high stakes. And so you sort of have to be-- you have to be good enough on all those fronts with all those partners. And so some of it is critical path technology. Depth cameras, that function is really critical to us, and it’s critical that it work well, and then cost and scale. And so just being flexible about how we can deal with that and looking at that sort of chain and how do we sort of start at one level and scale it through? So you look at sort of, okay, what are these key enabling technologies that have to work? And that’s one bucket that’s there. Then the partnerships on the business side, we’re in a complex ecosystem. I think the other rude awakening when people look at this is like, “Well, yeah, why doesn’t-- as people get older, they have disabilities. That’s what you have-- that’s your insurance funds.” It’s like, “No, it doesn’t.” It doesn’t for a lot of-- unless you have specific types of insurance. We’re partnering with Nationwide. They have long-term care insurance - and that’s why they’re working with us - that pays for these sorts of issues and things. Or Medicaid will get into these issues depending on somebody’s need.
</p><p>
	And so I think what we’re trying to understand is—this goes back to that original question about customer empathy—is that how do we adjust what we’re doing? That we have this vision. I want to help people like my mom where she is now and where she was 10 years ago when she was experiencing difficulties with mobility initially. And we have to stage that. We have to get through that progression. And so who are the people that we work with now that solves a pain point that can be something that they have control over that is economically viable to them? And sometimes that means adjusting a bit of what we’re doing, because it’s just this step onto the long path as we do it.
</p><p>
<strong>Ackerman:</strong> Awesome. Well, thank you both again. This was a great conversation.
</p><p>
<strong>Jones: </strong>Yeah, thanks for having us and for hosting, Evan and Mike. Great to talk to you.
</p><p>
<strong>Dooley: </strong>Nice seeing you again, Chris and Evan. Same. Really enjoyed it.
</p><p><strong>Ackerman: </strong>We’ve been talking with Chris Jones from iRobot and Mike Dooley from Labrador Systems about developing robots for the home. And thanks again to our guests for joining us, for ChatBot and <em>IEEE Spectrum</em>. I’m Evan Ackerman.</p>]]></description><pubDate>Mon, 02 Oct 2023 10:00:03 +0000</pubDate><guid>https://spectrum.ieee.org/domestic-robots</guid><category>Domestic-robots</category><category>Robots</category><category>Robotics</category><category>Irobot</category><category>Type-podcast</category><category>Chatbot-podcast</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/36416762/origin.jpg"/></item><item><title>Making Boston Dynamics’ Robots Dance</title><link>https://spectrum.ieee.org/boston-dynamics-dancing-robots</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/chatbot-podcast-logo-showing-two-robot-heads-facing-each-other.jpg?id=36416749&width=980"/><br/><br/><p class="shortcode-media shortcode-media-youtube">
<span class="rm-shortcode" data-rm-shortcode-id="95fca008cbdb9e0a04a9eaa894ccb8eb" style="display:block;position:relative;padding-top:56.25%;"><iframe frameborder="0" height="auto" lazy-loadable="true" scrolling="no" src="https://www.youtube.com/embed/EpShHKQiKmg?rel=0" style="position:absolute;top:0;left:0;width:100%;height:100%;" width="100%"></iframe></span>
<small class="image-media media-caption" placeholder="Add Photo Caption...">Chatbot Episode 1: Making Boston Dynamics’ Robots Dance</small>
<small class="image-media media-photo-credit" placeholder="Add Photo Credit..."><a href="https://youtu.be/EpShHKQiKmg" target="_blank"><br/>
</a></small>
</p><p style="">
<strong>Evan Ackerman:</strong> I’m Evan Ackerman, and welcome to ChatBot, a robotics podcast from <em>IEEE Spectrum</em>. On this episode of ChatBot, we’ll be talking with Monica Thomas and Amy LaViers about robots and dance. <a href="https://www.madkingthomas.com/" target="_blank">Monica Thomas is a dancer and choreographer</a>. Monica has worked with <a href="https://bostondynamics.com/" target="_blank">Boston Dynamics</a> to choreograph some of their robot videos in which <a href="https://robotsguide.com/robots/atlas2016" target="_blank">Atlas</a>, <a href="https://robotsguide.com/robots/spot" target="_blank">Spot</a>, and even <a href="https://robotsguide.com/robots/handle" target="_blank">Handle</a> dance to songs like Do You Love Me? The <a href="https://www.youtube.com/watch?v=fn3KWM1kuAw" target="_blank">“Do You Love Me?” video has been viewed 37 million times</a>. And if you haven’t seen it yet, it’s pretty amazing to see how these robots can move. <a href="https://theradlab.xyz/" target="_blank">Amy LaViers is the director of the Robotics, Automation, and Dance Lab</a>, or RAD Lab, which she founded in 2013 at the University of Virginia. The RAD Lab is a collective for art making, commercialization, education, outreach, and research at the intersection of dance and robotics, and is now an independent nonprofit in Philadelphia. Amy’s work explores the creative relationships between machines and humans, as expressed through movement. So Monica, can you just tell me-- I think people in the robotics field may not know who you are or why you’re on the podcast at this point, so can you just describe how you initially got involved in Boston Dynamics?</p><p>
<strong>Monica Thomas:</strong> Yeah. So I got involved really casually. I know people who work at Boston Dynamics and <a href="https://spectrum.ieee.org/tag/marc-raibert" target="_self">Marc Raibert</a>, their founder and head. They’d been working on Spot, and they added the arm to Spot. And Marc was kind of like, “I kind of think this could dance.” And they were like, “Do you think this could dance?” And I was like, “It could definitely dance. That definitely could do a lot of dancing.” And so we just started trying to figure out, can it move in a way that feels like dance to people watching it? And the first thing we made was <a href="https://www.youtube.com/watch?v=kHBcVlqpvZ8" target="_blank">Uptown Spot</a>. And it was really just figuring out moves that the robot does kind of already naturally. And that’s when they started developing, I think, <a href="https://dev.bostondynamics.com/docs/concepts/choreography/readme" rel="noopener noreferrer" target="_blank">Choreographer</a>, their tool. But in terms of my thinking, it was just I was watching what the robot did as its normal patterns, like going up, going down, walking this place, different steps, different gaits, what is interesting to me, what looks beautiful to me, what looks funny to me, and then imagining what else we could be doing, considering the angles of the joints. And then it just grew from there. And so once that one was out, Marc was like, “What about the rest of the robots? Could they dance? Maybe we could do a dance with all of the robots.” And I was like, “We could definitely do a dance with all of the robots. Any shape can dance.” So that’s when we started working on what turned into Do You Love Me? I didn’t really realize what a big deal it was until it came out and it went viral. And I was like, “Oh—” are we allowed to swear, or—?
</p><p>
<strong>Ackerman:</strong> Oh, yeah. Yeah.
</p><p>
<strong>Thomas: </strong>Yeah. So I was like, “[bleep bleep, bleeeep] is this?” I didn’t know how to deal with it. I didn’t know how to think about it. As a performer, the largest audience I performed for in a day was like 700 people, which is a big audience as a live performer. So when you’re hitting millions, it’s just like it doesn’t even make sense anymore, and yeah. So that was pretty mind-boggling. And then also because of kind of how it was introduced and because there is a whole world of choreo-robotics, which I was not really aware of because I was just doing my thing. Then I realized there’s all of this work that’s been happening that I couldn’t reference, didn’t know about, and conversations that were really important in the field that I also was unaware of and then suddenly was a part of. So I think doing work that has more viewership is really—it was a trip and a half—is a trip and a half. I’m still learning about it. Does that answer your question?
</p><p>
<strong>Ackerman: </strong>Yeah. Definitely.
</p><p>
<strong>Thomas:</strong> It’s a long-winded answer, but.
</p><p>
<strong>Ackerman:</strong> And Amy, so you have been working in these two disciplines for a long time, in the disciplines of robotics and in dance. So what made you decide to combine these two things, and why is that important?
</p><p>
<strong>Amy LaViers: </strong>Yeah. Well, both things, I guess in some way, have always been present in my life. I’ve danced since I was three, probably, and my dad and all of his brothers and my grandfathers were engineers. So in some sense, they were always there. And it was really-- I could tell you the date. I sometimes forget what it was, but it was a Thursday, and I was taking classes in dance and in control of mechanical systems, and I was realizing this overlap. I mean, I don’t think I’m combining them. I feel like they already kind of have this intersection that just exists. And I realized-- or I stumbled into that intersection myself, and I found lots of people working in it. And I was-- oh, my interests in both these fields kind of reinforce one another in a way that’s really exciting and interesting. I also happened to be an almost graduating-- I was in the last class of my junior year of college, so I was thinking, “What am I going to do with myself?” Right? So it was very happenstance in that way. And again, I mean, I just felt like— it was like I walked into a room where all of a sudden, a lot of things made sense to me, and a lot of interests of mine were both present.
</p><p>
<strong>Ackerman:</strong> And can you summarize, I guess, the importance here? Because I feel like— I’m sure this is something you’ve run into, is that it’s easy for engineers or roboticists just to be— I mean, honestly, a little bit dismissive of this idea that it’s important for robots to have this expressivity. So why is it important?
</p><p>
<strong>LaViers:</strong> That is a great question that if I could summarize what my life is like, it’s me on a computer going like this, trying to figure out the words to answer that succinctly. But one way I might ask it, earlier when we were talking, you mentioned this idea of functional behavior versus expressive behavior, which comes up a lot when we start thinking in this space. And I think one thing that happens-- and my training and background in <a href="https://www.backstage.com/magazine/article/laban-movement-analysis-guide-50428/" rel="noopener noreferrer" target="_blank">Laban Movement Analysis </a>really emphasizes this duality between function and expression as opposed to the either/or. It’s kind of like the mind-body split, the idea that these things are one integrated unit. Function and expression are an integrated unit. And something that is functional is really expressive. Something that is expressive is really functional.
</p><p>
<strong>Ackerman:</strong> It definitely answers the question. And it looks like Monica is resonating with you a little bit, so I’m just going to get out of the way here. Amy, do you want to just start this conversation with Monica?
</p><p>
<strong>LaViers: </strong>Sure. Sure. Monica has already answered, literally, my first question, so I’m already having to shuffle a little bit. But I’m going to rephrase. My first question was, can robots dance? And I love how emphatically and beautifully you answered that with, “Any shape can dance.” I think that’s so beautiful. That was a great answer, and I think it brings up— you can debate, is this dance, or is this not? But there’s also a way to look at any movement through the lens of dance, and that includes factory robots that nobody ever sees.
</p><p>
<strong>Thomas:</strong> It’s exciting. I mean, it’s a really nice way to walk through the world, so I actually recommend it for everyone, just like taking a time and seeing the movement around you as dance. I don’t know if it’s allowing it to be intentional or just to be special, meaningful, something.
</p><p>
<strong>LaViers:</strong> That’s a really big challenge, particularly for an autonomous system. And for any moving system, I think that’s hard, artificial or not. I mean, it’s hard for me. My family’s coming into town this weekend. I’m like, “How do I act so that they know I love them?” Right? That’s a dramatized version of real life, right: how do I be welcoming to my guests? And that’ll be, how do I move?
</p><p>
<strong>Thomas:</strong> What you’re saying is a reminder that one of the things I really enjoy about watching robots move is that I’m allowed to project as much as I want to on them without taking away something from them. When you project too much on people, you lose the person, and that’s not really fair. But when you’re projecting on objects, things that are objects but that we personify— or not even personify, that we anthropomorphize or whatever, it is just a projection of us. But it’s acceptable. So it’s nice for it to be acceptable, a place where you get to do that.
</p><p>
<strong>LaViers: </strong>Well, okay. Then can I ask my fourth question even though it’s not my turn? Because that’s just too perfect to what it is, which is just, what did you learn about yourself working with these robots?
</p><p>
<strong>Thomas:</strong> Well, I learned how much I love visually watching movement. I’ve always watched, but I don’t think it was as clear to me how much I like movement. The work that I made was really about context. It was about what’s happening in society, what’s happening in me as a person. But I never got into that school of dance that really spends time just really paying attention to movement or letting movement develop or explore, exploring movement. That wasn’t what I was doing. And with robots, I was like, “Oh, but yeah, I get it better now. I see it more now.” So much in life right now, for me, is not contained, and it doesn’t have answers. And translating movement across species from my body to a robot, that does have answers. It has multiple answers. It’s not like there’s a yes and a no, but you can answer a question. And it’s so nice to answer questions sometimes. I sat with this thing, and here’s something I feel like is an acceptable solution. Wow. That’s a rarity in life. So I love that about working with robots. I mean, also, they’re cool, I think. And it is also— they’re just cool. I mean, that’s true too. It’s also interesting. I guess the last thing that I really loved—and I didn’t have much opportunity to do this or as much as you’d expect because of COVID—is being in space with robots. It’s really interesting, just like being in space with anything that is different than your norm is notable. Being in space with an animal that you’re not used to being with is notable. And there’s just something really cool about being with something very different. And for me, robots are very different and not acclimatized.
</p><p>
<strong>Ackerman: </strong>Okay. Monica, you want to ask a question or two?
</p><p>
<strong>Thomas: </strong>Yeah. I do. The order of my questions is ruined also. I was thinking about the <a href="https://theradlab.xyz/" rel="noopener noreferrer" target="_blank">RAD Lab</a>, and I was wondering if there are guiding principles that you feel are really important in that interdisciplinary work that you’re doing, and also any lessons maybe from the other side that are worth sharing.
</p><p>
<strong>LaViers:</strong> The usual way I describe it and describe my work more broadly is, I think there are a lot of roboticists that hire dancers, and they make robots and those dancers help them. And there are a lot of dancers that hire engineers, and those engineers build something for them that they use inside of their work. And what I’m interested in, in the little litmus test or challenge I paint for myself and my collaborators, is we want to be right in between those two things, right, where we are making something. First of all, we’re treating each other as peers, as technical peers, as artistic peers, as— if the robot moves on stage, I mean, that’s choreography. If the choreographer asks for the robot to move in a certain way, that’s robotics. That’s the inflection point we want to be at. And so that means, for example, in terms of crediting the work, we try to credit the creative contributions. And not just like, “Oh, well, you did 10 percent of the creative contributions.” We really try to treat each other as co-artistic collaborators and co-technical developers. And so artists are on our papers, and engineers are in our programs, to put it in that way. And likewise, that changes the questions we want to ask. We want to make something that pushes robotics just an inch further, a millimeter further. And we want to do something that pushes dance just an inch further, a millimeter further. We would love it if people would ask us, “Is this dance?” We get, “Is this robotics?” quite a lot. So that makes me feel like we must be doing something interesting in robotics.
</p><p>
	And every now and then, I think we do something interesting for dance too, and certainly, many of my collaborators do. And that inflection point, that’s just where I think is interesting. And I think that’s where— that’s the room I stumbled into, is where we’re asking those questions as opposed to just developing a robot and hiring someone to help us do that. I mean, it can be hard in that environment that people feel like their expertise is being given to the other side. And then, where am I an expert? And we’ve heard editors at publication venues say, “Well, this dancer can’t be a co-author,” and we’ve had venues where we’re working on the program and people say, “Well, no, this engineer isn’t a performer,” but I’m like, “But he’s cueing the robot, and if he messes up, then we all mess up.” I mean, that’s vulnerability too. So we have those conversations that are really touchy and a little sensitive and a little— and so how do you create that space where people feel safe and comfortable and valued and attributed for their work and that they can make a track record and do this again in another project, in another context and— so, I don’t know, if I’ve learned anything, I mean, I’ve learned that you just have to really talk about attribution all the time. I bring it up every time, and then I bring it up before we even think about writing a paper. And then I bring it up when we make the draft. And first thing I put in the draft is everybody’s name in the order it’s going to appear, with the affiliations and with the—subscripts on that don’t get added at the last minute. And when the editor of a very famous robotics venue says, “This person can’t be a co-author,” that person doesn’t get taken off as a co-author; that person is a co-author, and we figure out another way to make it work. And so I think that’s learning, or that’s just a struggle anyway.
</p><p>
<strong>Ackerman:</strong> Monica, I’m curious if when you saw the Boston Dynamics videos go viral, did you feel like there was much more of a focus on the robots and the mechanical capabilities than there was on the choreography and the dance? And if so, how did that make you feel?
</p><p>
<strong>Thomas:</strong> Yeah. So yes. Right. When dances I’ve made have been reviewed, which I’ve always really appreciated, it has been about the dance. It’s been about the choreography. And actually, kind of going way back to what we were talking about a couple things ago, a lot of the reviews that you get around this are about people, their reactions, right? Because, again, we can project so much onto robots. So I learned a lot about people, how people think about robots. There’s a lot of really overt themes, and then there’s individual nuance. But yeah, it wasn’t really about the dance, and it was in the middle of the pandemic too. So there’s really high isolation. I had no idea how people who cared about dance thought about it for a long time. And then every once in a while, I get one person here or one person there say something. So it’s a totally weird experience. Yes.
</p><p>
	The way that I took information about the dance was kind of paying attention to the affective experience, the emotional experience that people had watching this. The dance was— nothing in that dance was— we use the structures of the traditions of dance in it for an intentional reason. I chose that because I wasn’t trying to alarm people or show people ways that robots move that totally hit some old part of our brain that makes us absolutely panicked. That wasn’t my interest or the goal of that work. And honestly, at some point, it’d be really interesting to explore what the robots can just do versus what I, as a human, feel comfortable seeing them do. But the emotional response that people got told me a story about what the dance was doing in a backward way-- also, what the music’s doing because—let’s be real—that music does— right? We stacked the deck.
</p><p>
<strong>LaViers:</strong> Yeah. And now that brings— I feel like that serves up two of my questions, and I might let you pick which one maybe we go to. I mean, one of my questions, I wrote down some of my favorite moments from the choreography that I thought we could discuss. Another question—and maybe we can do both of these in series—is a little bit about— I’ll blush even just saying it, and I’m so glad that the people can’t see the blushing. But also, there’s been so much nodding, and I’m noticing that that won’t be in the audio recording. We’re nodding along to each other so much. But the other side—and you can just nod in a way that gives me your—the other question that comes up for that is, yeah, what is the monetary piece of this, and where are the power dynamics inside this? And how do you feel about how that sits now as that video continues to just make its rounds on the internet and establish value for Boston Dynamics?
</p><p>
<strong>Thomas: </strong>I would love to start with the first question. And the second one is super important, and maybe another day for that one.
</p><p>
<strong>Ackerman:</strong> Okay. That’s fair. That’s fair.
</p><p>
<strong>LaViers: </strong>Yep. I like that. I like that. So the first question, so my favorite moments of <a href="https://www.youtube.com/watch?v=fn3KWM1kuAw" rel="noopener noreferrer" target="_blank">the piece that you choreographed to Do You Love Me</a>? for the Boston Dynamics robots: the swinging arms at the beginning, where you don’t fully know where this is going. It looks so casual and so, dare I say it, natural, although it’s completely artificial, right? And the proximal rotation of the legs, I feel like it’s a genius way of getting around no spine. But you really make use of things that look like hip joints or shoulder joints as a way of, to me, accessing a good wriggle or a good juicy moment, and then the Spot space hold, I call it, where the head of the Spot is holding in place and then the robot wiggles around that, dances around that. And then the moment when you see all four complete—these distinct bodies, and it looks like they’re dancing together. And we touched on that earlier—any shape can dance—but making them all dance together I thought was really brilliant and effective in the work. So if one of those moments is super interesting, or you have a funny story about one, I thought we could talk about it further.
</p><p>
<strong>Thomas: </strong>I have a funny story about the hip joints. So the initial— well, not the initial, but when they do <a href="https://youtu.be/fn3KWM1kuAw?t=49" rel="noopener noreferrer" target="_blank">the mashed potato</a>, that was the first dance move that we started working on, on Atlas. And for folks who don’t know, the mashed potato is kind of the feet are going in and out; the knees are going in and out. So we ran into a couple of problems, which—and the twist. I guess it’s a combo. Both of them like you to roll your feet on the ground like rub, and that friction was not good for the robots. <a href="https://youtu.be/fn3KWM1kuAw?t=49" rel="noopener noreferrer" target="_blank">So when we first started really moving into the twist</a>, which has this torso twisting— the legs are twisting. The foot should be twisting on the floor. The foot is not twisting on the floor, and the legs were so turned out that the shape of the pelvic region looked like an over-full diaper. So, I mean, it was wiggling, but it made the robot look young. It made the robot look like it was in a diaper that needed to be changed. It did not look like a twist that anybody would want to do near anybody else. And it was really amazing how— I mean, it was just hilarious to see it. And the engineers come in. They’re really seeing the movement and trying to figure out what they need for the movement. And I was like, “Well, it looks like it has a very full diaper.” And they were like, “Oh.” They knew it didn’t quite look right, but it was like—because I think they really don’t project as much as I do. I’m very projective; that’s one of the ways that I’ve watched work, or you’re pulling from the work that way, but that’s not what they were looking at. And so yeah, then you change the angles of the legs, how turned in it is and whatever, and it resolved to a degree, I think, fairly successfully. It doesn’t really look like a diaper anymore.
But that wasn’t really— and also to get that move right took us over a month.
</p><p>
<strong>Ackerman:</strong> Wow.
</p><p>
<strong>LaViers:</strong> Wow.
</p><p>
<strong>Thomas: </strong>We got much faster after that because it was the first, and we really learned. But it took a month of programming, me coming in, naming specific ways of reshifting it before we got a twist that felt natural if amended because it’s not the same way that--
</p><p>
<strong>LaViers:</strong> Yeah. Well, and it’s fascinating to think about how to get it to look the same. You had to change the way it did the movement, is what I heard you describing there, and I think that’s so fascinating, right? And just how distinct the morphologies between our body and any of these bodies, even the very facile human-ish looking Atlas, that there’s still a lot of really nuanced and fine-grained and human work-intensive labor to go into getting that to look the same as what we all think of as the twist or the mashed potato.
</p><p>
<strong>Thomas:</strong> Right. Right. And it does need to be something that we can project those dances onto, or it doesn’t work, in terms of this dance. It could work in another one. Yeah.
</p><p>
<strong>LaViers:</strong> Right. And you brought that up earlier, too, of trying to work inside of some established forms of dance as opposed to making us all terrified by the strange movement that can happen, which I think is interesting. And I hope one day you get to do that dance too.
</p><p>
<strong>Thomas:</strong> Yeah. No, I totally want to do that dance too.
</p><p>
<strong>Ackerman:</strong> Monica, do you have one last question you want to ask?
</p><p>
<strong>Thomas: </strong>I do. And this is— yeah. I want to ask you, kind of what does embodied or body-based intelligence offer in robotic engineering? So I feel like, you, more than anyone, can speak to that because I don’t do that side.
</p><p>
<strong>LaViers:</strong> Well, I mean, I think it can bring a couple of things. One, it can bring— I mean, the first moment in my career or life that that calls up for me is, I was watching one of my lab mates, when I was a doctoral student, give a talk about a quadruped robot that he was working on, and he was describing the crawling strategy, like the gait. And someone said— and I think it was roughly like, “Move the center of gravity inside the polygon of support, and then pick up— the polygon of support formed by three of the legs. And then pick up the fourth leg and move it. Establish a new polygon of support. Move the center of mass into that polygon of support.” And it’s described with these figures. Maybe there’s a center of gravity. It’s like a circle that’s like a checkerboard, and there’s a triangle, and there’s these legs. And someone stands up and is like, “That makes no sense like that. Why would you do that?” And I’m like, “Oh, oh, I know, oh, because that’s one of the ways you can crawl.” I actually didn’t get down on the floor and do it because I was not so outlandish at that point.
</p><p>
	But today, in the RAD lab, that would be, “Everyone on all fours, try this strategy out.” Does it feel like a good idea? Are there other ideas that we would use to do this pattern that might be worth exploring here as well? And so truly rolling around on the floor and moving your body and pretending to be a quadruped, which— in my dance classes, it’s a very common thing to practice crawling because we all forget how to crawl. We want to crawl with the cross-lateral pattern and the homo-lateral pattern, and we want to keep our butts down-- or keep the butts up, but we want to have that optionality so that we look like we’re facile, natural crawlers. We train that, right? And so for a quadruped robot talk and discussion, I think there’s a very literal way that an embodied exploration of the idea is a completely legitimate way to do research.
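The crawl strategy Amy quotes above—shift the center of mass into the triangle of support formed by three legs, then pick up and move the fourth—is concrete enough to sketch in code. Here is a minimal toy version; the leg names, foot positions, and 2D point-in-triangle test are assumptions for illustration, not code from the RAD Lab or any real quadruped.

```python
# Illustrative sketch of a statically stable crawl gait: before lifting a
# leg, the center of mass must lie inside the support triangle formed by
# the three stance feet. All names and coordinates here are made up.

def _side(p, a, b):
    # Signed-area test: which side of the line a->b the point p falls on.
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def inside_triangle(p, a, b, c):
    # p is inside (or on the edge of) triangle abc if all side tests agree.
    d1, d2, d3 = _side(p, a, b), _side(p, b, c), _side(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def crawl_step(feet, com, swing_leg, target):
    """Advance one leg of the crawl: lift `swing_leg` only if the center
    of mass sits inside the polygon of support of the other three feet."""
    stance = [pos for leg, pos in feet.items() if leg != swing_leg]
    if not inside_triangle(com, *stance):
        raise ValueError("shift the center of mass into the support polygon first")
    feet[swing_leg] = target  # step; a new polygon of support is established
    return feet

# Four feet on a unit square; the center of mass has been shifted
# forward-left, so the hind-right leg is free to swing.
feet = {"FL": (0.0, 1.0), "FR": (1.0, 1.0), "HL": (0.0, 0.0), "HR": (1.0, 0.0)}
crawl_step(feet, com=(0.4, 0.6), swing_leg="HR", target=(1.2, 0.1))
```

Calling `crawl_step` with the center of mass outside the stance triangle raises an error, which is exactly the constraint the checkerboard-circle-and-triangle figures in the talk were encoding.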
</p><p>
<strong>Ackerman: </strong>Yeah. I mean, Monica, this is what you were saying, too, as you were working with these engineers. Sometimes it sounded like they could tell that something wasn’t quite right, but they didn’t know how to describe it, and they didn’t know how to fix it because they didn’t have that language and experience that both of you have.
</p><p>
<strong>Thomas: </strong>Yeah. Yeah, exactly that.
</p><p>
<strong>Ackerman: </strong>Okay. Well, I just want to ask you each one more really quick question before we end here, which is that, what is your favorite fictional robot and why? I hope this isn’t too difficult, especially since you both work with real robots, but. Amy, you want to go first?
</p><p>
<strong>LaViers:</strong> I mean, I’m going to feel like a party pooper. I don’t like any robots, real or fictional. The fictional ones annoy me because of the disambiguation issue: WALL-E and Eva are so cute. And I do love cute things, but are those machines, or are those characters? And are we losing sight of that? I mean, my favorite robot to watch move, this one-- I mean, I love the <a href="https://www.youtube.com/watch?v=3g-yrjh58ms" rel="noopener noreferrer" target="_blank">Keepon dancing to Spoon</a>. That is something that if you’re having an off day, you google Keepon dancing to Spoon— Keepon is one word, K-E-E-P-O-N, dancing to Spoon, and you just bop. It’s just a bop. I love it. It’s so simple and so pure and so right.
</p><p>
<strong>Ackerman: </strong>It’s one of my favorite robots of all time, Monica. I don’t know if you’ve seen this, but it’s two little yellow balls like this, and it just goes up and down and rocks back and forth. But it does it to music. It just does it so well. It’s amazing.
</p><p>
<strong>Thomas:</strong> I will definitely be watching that [crosstalk].
</p><p>
<strong>Ackerman:</strong> Yeah. And I should have expanded the question, and now I will expand it because Monica hasn’t answered yet. Favorite robot, real or fictional?
</p><p>
<strong>Thomas: </strong>So I don’t know if it’s my favorite. This one breaks my heart, and I’m currently having an empathy overdrive issue as a general problem. But there’s a robot installation - and I should know its name, but I don’t— <a href="https://www.youtube.com/watch?v=ZS4Bpr2BgnE" rel="noopener noreferrer" target="_blank">where the robot reaches out, and it grabs the oil that they’ve created it to leak and pulls it towards its body</a>. And it’s been doing this for several years now, but it’s really slowing down now. And I don’t think it even needs the oil. I don’t think it’s a robot that uses oil. It just thinks that it needs to keep it close. And it used to happy dance, and the oil has gotten so dark, the red rust color of-- oh, this is so morbid-- of blood, but it just breaks my heart. So I think I love that robot and also want to save it in the really unhealthy way that we sometimes identify with things that we shouldn’t be thinking about that much.
</p><p>
<strong>Ackerman:</strong> And you both gave amazing answers to that question.
</p><p>
<strong>LaViers:</strong> And the piece is <a href="https://www.youtube.com/watch?v=ZS4Bpr2BgnE" rel="noopener noreferrer" target="_blank">Sun Yuan and Peng Yu’s Can’t Help Myself</a>.
</p><p>
<strong>Ackerman: </strong>That’s right. Yeah.
</p><p>
<strong>LaViers: </strong>And it is so beautiful. I couldn’t remember the artist’s name either, but—you’re right—it’s so beautiful.
</p><p>
<strong>Thomas:</strong> It’s beautiful. The movement is beautiful. It’s beautifully considered as an art piece, and the robot is gorgeous and heartbreaking.
</p><p>
<strong>Ackerman:</strong> Yeah. Those answers were so unexpected, and I love that. So thank you both, and thank you for being on this podcast. This was an amazing conversation. We didn’t have nearly enough time, so we’re going to have to come back to so much.
</p><p>
<strong>LaViers: </strong>Thank you for having me.
</p><p>
<strong>Thomas: </strong>Thank you so much for inviting me. [music]
</p><p><strong>Ackerman: </strong>We’ve been talking with Monica Thomas and Amy LaViers about robots and dance. And thanks again to our guests for joining us for ChatBot and<em> IEEE Spectrum</em>. I’m Evan Ackerman.</p>]]></description><pubDate>Sun, 01 Oct 2023 21:23:34 +0000</pubDate><guid>https://spectrum.ieee.org/boston-dynamics-dancing-robots</guid><category>Dance</category><category>Art-and-technology</category><category>Robots</category><category>Type-podcast</category><category>Boston-dynamics</category><category>Chatbot-podcast</category><dc:creator>Evan Ackerman</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/36416749/origin.jpg"/></item><item><title>Finding Battery Minerals With AI</title><link>https://spectrum.ieee.org/fiding-battery-metals-with-ai</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=41674598&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/2bfbfb23" width="100%"></iframe><p><strong>Eliza Strickland:</strong> Hi, I’m Eliza Strickland for <em>IEEE Spectrum</em>’s <a href="https://spectrum.ieee.org/type/podcast/" target="_self">Fixing the Future</a> podcast. Before we start, I want to tell you that you can get the latest coverage from some of <em>Spectrum’s</em> most important beats, including AI, climate change, and robotics, by signing up for one of our free newsletters. Just go to spectrum.ieee.org/newsletters to subscribe.</p><p>In 2022, more than 10 million <a href="https://spectrum.ieee.org/tag/evs" target="_self">electric cars</a> were sold around the world, up 55 percent over sales in 2021. For this trend to continue, though, mining companies need to find a lot more of the metals used to build electric cars and their batteries. Today I’m talking with <a href="https://www.linkedin.com/in/josh-goldman-819b834/" target="_blank">Josh Goldman</a>. He’s the co-founder and president of <a href="https://www.koboldmetals.com/" rel="noopener noreferrer" target="_blank">KoBold Metals</a>, an AI-powered mineral exploration company working to discover the materials for electric vehicle batteries. Josh, thanks so much for joining me on Fixing the Future.</p><p><strong>Josh Goldman: </strong>It’s a pleasure to be here, Eliza. Thank you.</p><p><strong>Strickland:</strong> So let’s first talk about what minerals and metals we’re discussing here. What metals do we need for electric vehicle batteries and how much do we need of them?</p><p><strong>Goldman:</strong> So there’s a whole suite of different metals that we need, and they each play different roles in the renewable energy system. 
For a battery that you want to pick up and move around like you want to put in an electric vehicle, <a href="https://spectrum.ieee.org/lithium-battery" target="_self">lithium-ion batteries</a> are by far the winning technology and will remain there for a long time. And to make a lithium-ion battery, you need lithium ions. We need a great deal of lithium, of course. For the cathode of the battery, we need a layered metal oxide. That’s the high-performance cathode structure. And the highest energy density and the greatest cycle life, the greatest durability of a battery as it undergoes many charge and discharge cycles as you fill it up with energy and drive it and recharge it come from batteries that are rich in <a href="https://spectrum.ieee.org/seafloor-cobalt" target="_self">cobalt</a> and nickel. And then for electrical systems broadly, we need electrically conductive materials. And the workhorse electrical conductor, the kind of perfect blend of conductivity and abundance and cost to extract is copper. And so we use copper to move electric power around the vehicle, to move electric power around the energy system in the transmission grid. And then of course we use copper windings in the electric motors as well.</p><p>Those are the four that we are focused on because we think that the supply gap is the greatest and your estimate may vary depending upon your forecast of electric vehicle adoption. But it is almost universally agreed that the supply gap across those four metals to get to a fully electrified vehicle fleet is more than $10 trillion worth of those metals. So the scale of the problem is extraordinary. And the way that we fill that supply gap is by finding new deposits, new sources of those metals around the world.</p><p><strong>Strickland: </strong>So why is there a challenge here? There are a lot of mining companies out there. You’d think that they’d be on top of this business opportunity. 
What am I missing?</p><p><strong>Goldman: </strong>Yeah, there’re hundreds of companies that are out there looking for metals. And the fundamental problem is that it’s a really difficult problem. What we’re looking for are unusual rocks and we’re looking for them under the ground where we can’t see them. And what do we mean by unusual rocks? What is an ore deposit? An ore deposit is a place where the rocks are unusually enriched in the metals that we’re looking for. Take copper, for example: copper is present, at some concentration, in every rock. Some rocks that are very abundant are naturally a little bit higher in copper, but nowhere near high enough that you can economically extract the copper. There’s copper in your driveway, but it’s not a great source of copper. It’s too dilute. And so what we’re looking for are the places where natural geological processes have scavenged the copper out of a very large volume of rocks and concentrated it in a much smaller volume of rocks. And so the natural abundance of copper, think like 50 parts per million, 60 parts per million in the upper continental crust. And an ore deposit containing copper is more like 10,000 parts per million. So the natural processes need to do that much enrichment. And once we’ve got to about 10,000 parts per million, we can do the rest with industrial processes at reasonable cost.</p><p>And so we’re looking for these rocks that are unusual and these are places that occur very infrequently in the crust. We’ve found many such places historically, and those have been the sources of these metals in industry and for the electric vehicles built so far and for other industrial uses of some of these metals. But the places where they’re relatively easy to find, where they’re exposed at the surface or more easily detectable at the surface, we’ve found most of those sources already. 
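</p><p>
Goldman’s parts-per-million figures imply a striking concentration factor. As a rough sketch (the ~50-60 ppm crustal abundance and ~10,000 ppm ore grade are from his remarks; the arithmetic below is ours):
</p>

```python
# Rough enrichment arithmetic for copper, using the figures from the interview.
crustal_abundance_ppm = 55.0   # ~50-60 ppm copper in the upper continental crust
ore_grade_ppm = 10_000.0       # ~1% copper, the ore-grade figure Goldman cites

# Natural geological processes must concentrate copper by roughly this factor
# before industrial extraction becomes economic.
enrichment_factor = ore_grade_ppm / crustal_abundance_ppm
print(f"required natural enrichment: ~{enrichment_factor:.0f}x")

# Equivalently, ore at 10,000 ppm carries about 10 kg of copper per tonne of rock.
kg_cu_per_tonne = ore_grade_ppm / 1_000_000 * 1_000
print(f"copper per tonne of ore: {kg_cu_per_tonne:.0f} kg")
```

<p>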
And so the parts of the Earth’s crust that are well endowed with these metals, they’re deeper below the surface, they’re concealed, and there are overlying rocks. And so we’re trying to detect rocks that are somewhat different from the rocks around them, and we’re trying to see through tens to hundreds of meters of other rocks that are concealing them. And so that’s just a really difficult problem.</p><p>And this is what we do as scientists all the time. We make inferences about things that we can’t see. And it’s a very noisy problem. Any rock that you look at, that you pick up, you can see the heterogeneity of the rock. When you drive through a road cut on a highway, you can see how all the layers are dipping and folding and intersecting each other. And so you’re dealing with this incredibly heterogeneous system and that creates a lot of noise. And the more rocks that you have to see through, the more weathering processes that have occurred or geologic alteration processes that have occurred, the more different ways the rock can have been modified. And so we’re trying to detect through all of these degrees of complexity.</p><p>And the other kind of fundamental reason why this is so hard is because we live on the surface. And the places that we can get around to more or less easily-- sometimes we have to go to quite remote locations. You may have to take a helicopter or a snowcat to get somewhere. But even once you get there, you’re still standing on the surface and so you’re making a measurement of something. It might be you’re making a measurement of the angle at which the rock beds are dipping. You might be making a measurement of the composition of a rock sample that you take at surface or a soil sample. 
It might be a measurement of the gravitational field at that location, or it might be from an airborne measurement from a helicopter, a fixed-wing aircraft or a <a href="https://spectrum.ieee.org/tag/drones" target="_self">drone</a> or even a <a href="https://spectrum.ieee.org/tag/satellites" target="_self">satellite</a>. All of those are things we can get to constrain our model of what’s under the subsurface, but the data sets that we get are really sparse in general because we can’t sample the whole planet and they’re especially sparse in 3D because the number of places where we actually have samples from underground is really quite small. So that’s what makes the problem really hard.</p><p>And so lots of clever people are working on this problem. There’s the resources that go into exploration. But the success rate in the industry starts from the fact that we’re trying to do something really difficult. And it’s compounded by the increasing difficulty of the problem and the fact that the exploration methodology is just not keeping up with the increased difficulty. There’s been an underinvestment in innovation in exploration for these mineral resources. We are still using methods that were largely developed for and applied to problems where you can detect things closer to the surface. We have conceptual models of how ore deposits form that can be sometimes limiting because we’re looking for things that match the last discovery and not imagining the things that could be the next discovery. And where the sparsity of the data makes it difficult to apply some of these quantitative methods, but that means we just have to work harder to do so.</p><p><strong>Strickland:</strong> Yeah, and I know you are doing fieldwork now in several locations, but let’s talk first about how you decided on those targets, how you decided where you would go. 
What kind of data sources were you drawing on as you tried to figure out where you’d try and explore first?</p><p><strong>Goldman:</strong> Yeah. So it’s a surprise to many to learn that there’s actually a great deal of geoscience information in the public domain. Most of the information ever collected about the Earth’s crust actually is accessible. It’s just not accessible in any sort of compact format. It’s widely fragmented, tens and hundreds of thousands of geological maps, different geochemical and geophysical surveys. And you can find these things in databases that are kept by the different states and provinces, both of data that was collected at public expense of geologists with a geological survey going out and making maps and taking samples of the chemistry and the sediments at the bottom of lakes and so on. And then also data sets of historic exploration activities that have been conducted by other companies. In some jurisdictions, when you go do work, you have to write a detailed technical report and provide the data and that data becomes public. And this is really good policy because most discoveries are made on ground that many different companies have held. And what’s important is that when one company runs out of steam and they’ve exhausted their ideas, that the next company who picks up the ground picks up where the last one left off and uses all the same information and all the learnings rather than just collecting the same data all over again.</p><p>So we actually know a great deal and we know it at very different length scales and it’s patchy as we talked about. And so we’re starting from a combination of a kind of deep geological understanding and large-length scale data sets that allow us to make models to augment our geological understanding. We’re not starting with a completely blank slate about the world. 
The fact that these ore deposits are so unusual means they only occur where certain processes were happening and we know enough about the large-scale structure of the Earth’s crust to know some of the broad regions where we either know some of those processes were occurring or where they might be occurring and we can hypothesize that we can find evidence of that.</p><p>And so there’s a kind of initial filtering both on sort of the largest length scale geologic prospectivity and also by where we think we can do business effectively. It has to be a place where you can access it. There’s enough infrastructure to be able to work. And where there’s a good rule of law and where we can operate a business to the highest ethical standards, which is really important to us in everything that we do. We have to know that given that we are never going to engage in corrupt activity, we have to be able to do work and we have to be able to retain interests that we acquire. When we put a lot of capital to work, we have to plausibly be able to earn a return on that. And that means being able to sort of be there--still be in the project when it is realized.</p><p><strong>Strickland:</strong> Excellent. So let’s talk about a real example here. Can you tell me what’s been going on in Quebec for the past few summers?</p><p><strong>Goldman:</strong> I’d be delighted to. So in Quebec, we’re exploring in a province called the <a href="https://cdnsciencepub.com/doi/10.1139/e85-140" rel="noopener noreferrer" target="_blank">Cape Smith Belt</a> in the far north of Quebec in Nunavik. And this is an area where, in particular, we’re looking for a type of deposit called a <a href="https://pubs.geoscienceworld.org/msa/elements/article-abstract/13/2/89/271501/Magmatic-Sulfide-Ore-Deposits?redirectedFrom=fulltext" rel="noopener noreferrer" target="_blank">magmatic sulfide</a>. 
And magmatic sulfides typically are rich in nickel, often have cobalt and copper, and sometimes some platinum group elements in them as well. And we have a very large area of claims there, more than 250,000 acres. So it’s a vast expanse in a really difficult location to get to. It’s more than an hour’s helicopter ride from the nearest airport to get to the places where we’re working. To get gear in there requires putting it on a boat in September for the following summer. At times, to get our camp supplied this summer, we had some tractors on skids pulling sleds across the tundra in the wintertime so that the camp was well supplied rather than doing a heavy lift operation to get things in.</p><p>So this is a very remote part of the world, and there’s a lot of rock exposure, and it’s a district that has actually a lot of nickel that we know about, but there’s very large expanses of this district that have seen much, much less exploration. And so we’re using a whole suite of different technologies to guide our exploration decisions. We have a team on the ground, who are walking and observing the rocks at the surface and going to places where we have predicted there are interesting rocks that are exposed at surface, where we might be able to see either evidence of the right kind of rocks, the right kind of mineralizing processes, or the mineralization itself in particular. We want to see the nickel and the copper ore minerals there in exposure at the surface. And they’re going to places that we predict, and they’re also going to places where the model is struggling to make a prediction and there’s a very high degree of uncertainty.</p><p>We’ve conducted several generations of airborne surveys to collect information about the conductivity and the magnetic properties of the rocks in the subsurface. 
And then we’re using those and other pieces of information, like satellite imagery, to make decisions about where there are very specific regions, what we call a target, where there’s evidence of all of the right mineralizing processes and a specific thesis about something that could be there in the subsurface. And then we’re drilling holes in order to see what’s down there and test our hypotheses and constrain our models in 3D at that kind of length scale. And the way that we’re guiding those models in particular is based on all that kind of larger-scale information. And then we’re doing much more localized exploration around those as well. One of the great features about this type of deposit is that it often has a contrast in the conductivity of the rocks in the deposit from the rocks that surround it. And so we can be looking for those anomalies and using electromagnetic methods to probe the conductivity of the subsurface. So one of the things we’ll do is we’ll lay a loop on the ground and pulse it and listen for the echoes from the conductive materials in the subsurface. And then when we drill a hole, we’ll also stick a probe down the hole and pulse that loop on the surface and use the detector at different places down the hole to be able to directly probe the volumes there as well.</p><p>So we have a suite of technologies that we call stochastic inversions that don’t just build one estimate of the subsurface-- our sort of best understanding of the volume that we’re probing with these electromagnetic surveys. They build a whole ensemble of different possibilities that are all consistent with the data. There are many, many configurations of rocks in the subsurface that are equally consistent with the data. And what we need to do instead of kind of coming up with our best one based on what we think the geology is, we need to come up with many of those possibilities. And we need to understand the whole range of different possibilities. 
We need to understand the probability distribution of the things that matter, like what is the conductivity of this anomaly, and how deep is it, and how large is it, and what direction is it dipping? And we use that to make a decision about how to most effectively test all those possibilities with a sequence of holes, one after another.</p><p>And so not only are we deploying this technology, but we’re deploying it in very short cycles. When a hole finishes, we’ll run the probe in the hole and pulse the loop on the surface, and collect these electromagnetic measurements. And then we need to turn around and do something with that information in a very short period of time. The rig is sitting there. It’s waiting to be redeployed. The geologist is standing there on the rig, trying to decide what to do. And the data scientist is kind of furiously trying to get some information out of this data that has just been collected and delivered. And this is a kind of unprecedented cycle time and speed here. It is typical to collect data in a much larger batch. It’s typical to have some time to think about it and process it. It’s also typical for these types of inversions where you get some data on the geophysical response and you use it to predict the physical properties of the rock--it’s typical for those things to take a really long time. You’re trying to do a large 3D finite element model. This is a hard problem. And it’s very computationally expensive.</p><p>And what we’re not just trying to do, but actually doing is turning these things around in hours to a day. It’s like we get the data and then data scientists using the system that our technology team and software engineers have built is producing this whole probability distribution of possible subsurface models. And it’s not a fully automated process. It requires scientific context and scientific judgment to get this right. 
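</p><p>
The ensemble idea behind those stochastic inversions can be illustrated with a toy sketch. This is not KoBold’s actual physics or code; the forward model and every number below are illustrative assumptions. The point is the one Goldman makes: many subsurface models are equally consistent with the data, so you characterize the whole distribution rather than one best fit.
</p>

```python
# Toy "stochastic inversion": keep every candidate model consistent with the
# measured data, then summarize the distribution of the quantities that matter
# (here, a buried conductor's conductivity and depth). All numbers are
# illustrative assumptions, not real survey physics.
import math
import random

random.seed(0)

def forward(conductivity, depth_m):
    # Hypothetical EM response: stronger for more conductive bodies,
    # attenuated exponentially with burial depth.
    return conductivity * math.exp(-depth_m / 150.0)

# "Measured" response from a hidden true model, with a noise tolerance.
measured = forward(conductivity=80.0, depth_m=120.0)
tolerance = 0.1 * measured

# Rejection sampling: draw candidates from broad priors, keep consistent ones.
ensemble = [
    (c, d)
    for c, d in (
        (random.uniform(1.0, 200.0), random.uniform(10.0, 400.0))
        for _ in range(100_000)
    )
    if abs(forward(c, d) - measured) < tolerance
]

# The accepted models span a wide range of depths: the data alone cannot
# pin the conductor down, which is why the whole distribution matters.
depths = sorted(d for _, d in ensemble)
n = len(depths)
print(f"{n} models fit the data")
print(f"depth spread: {depths[0]:.0f} m to {depths[-1]:.0f} m")
print(f"median depth: {depths[n // 2]:.0f} m")
```

<p>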
And then it’s producing this and putting it in context with what we understand about the geology of the region and then using it to make a decision about what to do with that drill rig that’s sitting there. Does it drill another hole at a different angle from the same surface location? Do we need to move the rig a couple hundred meters that way and drill back the opposite direction because now we have a better constraint on which direction the beds are dipping? Or do we need to move it entirely? We’ve learned what there is to learn here, and it’s sort of good enough for now. If there’s something really good, well, it’s not impossible that it’s there, it’s just very unlikely, and it doesn’t compete anymore with the whole inventory of other targets that we’ve got. And what’s amazing is that this is working. It’s actually working really well. We’re turning these decisions around in this really short period of time and the results that we’re getting from it are incredibly encouraging.</p><p><strong>Strickland:</strong> Okay, and so you mentioned that you are finding the ores that you were hoping to find in Quebec. What’s the end game there? I mean, do you imagine extracting them yourself, or what happens next?</p><p><strong>Goldman:</strong> Yeah, it’s a great question. And I guess, to clarify, there are sort of many steps along the way from finding evidence that you’ve got mineralization to sort of extraordinary intersections to 3D continuity of those intersections that you can establish to provide a mineral resource then on to the sort of economic viability of a resource. And across our portfolio, we’re in kind of very different stages in very different projects. And our <a href="https://miningforzambia.com/on-the-ground-at-mingomba/" rel="noopener noreferrer" target="_blank">Mingomba</a> project in Zambia is by far the furthest along.</p><p>And where do we go from there? 
Our goal is to get these projects all the way into production so that they’re actually producing the minerals that we need in order to build electric vehicles, in order to build the electrical systems, the batteries, and all the things that we need. And in our projects, we’re in them for the long term because that’s the way to create the most value. We want to ensure the long-term success of the project. We’re a long-term partner in the communities where we operate. We may need to augment our capabilities by working with the right partners in order to get projects very effectively into production. And we have relationships with large companies who could be potential partners on any of our projects. So exactly how that works will vary project by project. We’ll be making judgment calls on that. But we have long-term interest in projects.</p><p><strong>Strickland: </strong>Is there anything else? Is there anything else you think it’s important for listeners to understand about cobalt and what you’re doing?</p><p><strong>Goldman: </strong>I mentioned it very briefly in terms of our selection about where do we work in terms of being able to run a really ethical business. And that’s not limited to a choice about do we explore in this country or that country. That extends to everything about the way that we operate as a business. We want to create social value in the communities where we operate. We want to be a good long-term partner. We’re committed to environmental protection and high standards of labor practices wherever we work. And there are many decisions that we’ve made already and many decisions that we will make in the future that reflect all of these. And it’s not enough to say we’re looking for these materials because they’re going to help us avoid climate change. It really behooves us to work in really responsible ways in all of the projects that we’re working on and to do so really at every stage. 
These are not commitments that only matter once you start mining. They’re things that matter a lot from the earliest phases of actually getting on the ground in a community.</p><p><strong>Strickland:</strong> Thank you, Josh, so much for joining us. I really appreciate it.</p><p><strong>Goldman:</strong> Very glad to. Really appreciate it. Thank you, Eliza.</p><p><strong>Strickland:</strong> That was Josh Goldman speaking to me about his company, <a href="https://www.koboldmetals.com/" rel="noopener noreferrer" target="_blank">KoBold Metals</a>, which uses AI to search for the ore deposits needed to build electric vehicles. If you want to learn more, we’ve linked Goldman’s <em>IEEE Spectrum</em> <a href="https://spectrum.ieee.org/ai-mining" target="_self">feature article</a> in the show notes. I’m Eliza Strickland, and I hope you’ll join us next time on Fixing the Future.</p>]]></description><pubDate>Wed, 20 Sep 2023 09:05:02 +0000</pubDate><guid>https://spectrum.ieee.org/fiding-battery-metals-with-ai</guid><category>Type-podcast</category><category>Ai</category><category>Lithium</category><category>Batteries</category><category>Cobalt</category><category>Copper</category><category>Nickel</category><category>Mining</category><category>Minerals</category><category>Artificial-intelligence</category><category>Fixing-the-future</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/41674598/origin.webp"/></item><item><title>Intel's Open Source Strategy</title><link>https://spectrum.ieee.org/intels-open-source-strategy</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=36206535&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/a5e10282" width="100%">
</iframe><p><strong>Stephen Cass:</strong> Hello and welcome to <em>Fixing the Future</em>, a podcast from <em>IEEE Spectrum</em>. I’m your host <a href="https://spectrum.ieee.org/u/stephen-cass" target="_self">Stephen Cass</a>, a senior editor at Spectrum, and before we start, I just want to tell you that you can get the latest coverage from some of <em>Spectrum</em>’s most important beats, including AI, climate change, and robotics by signing up for one of our free newsletters. Just go to <a href="https://spectrum.ieee.org/newsletters/" target="_self">spectrum.ieee.org/newsletters</a> to subscribe. With all that said, today’s guest is <a href="https://www.linkedin.com/in/arunpgupta" rel="noopener noreferrer" target="_blank">Arun Gupta</a>, vice president and general manager of <a href="https://www.intel.com/content/www/us/en/developer/topic-technology/open/overview.html" rel="noopener noreferrer" target="_blank">Open Ecosystem Initiatives at Intel</a> and chair of the <a href="https://www.cncf.io/" rel="noopener noreferrer" target="_blank">Cloud Native Computing Foundation</a>. Hi, Arun, thanks for joining me.</p><p><strong>Arun Gupta: </strong>Hi, I’m very happy to be here.</p><p><strong>Cass:</strong> So, Intel is very famously a hardware company. What does it get out of supporting open-source ecosystems?</p><p><strong>Gupta:</strong> Well, I mean, Pat always says, “Software defined, hardware enabled.” So, you can build the finest piece of hardware, but if the software is not going to run on it, it’s not going to be very helpful, right? And that’s honestly the reason that we contribute to open source all along, and we have been contributing for over two decades. Because our customers consume our product, which is silicon, using these open-source projects. 
So, you pick a project: <a href="https://openjdk.org/" rel="noopener noreferrer" target="_blank">OpenJDK</a>, <a href="https://pytorch.org/" rel="noopener noreferrer" target="_blank">PyTorch</a>, <a href="https://www.tensorflow.org/" rel="noopener noreferrer" target="_blank">TensorFlow</a>, <a href="https://scikit-learn.org/stable/" rel="noopener noreferrer" target="_blank">scikit-learn</a>, <a href="https://kafka.apache.org/" rel="noopener noreferrer" target="_blank">Kafka</a>, <a href="https://cassandra.apache.org/_/index.html" rel="noopener noreferrer" target="_blank">Cassandra</a>, <a href="https://kubernetes.io/" rel="noopener noreferrer" target="_blank">Kubernetes</a>, <a href="https://www.kernel.org/" rel="noopener noreferrer" target="_blank">Linux kernel,</a> <a href="https://gcc.gnu.org/" rel="noopener noreferrer" target="_blank">GCC</a>. And our customers who consume our silicon want to make sure that these open-source projects are consumed well on the Intel silicon, they behave well, and they are able to leverage all the features that are in the instruction set of the latest edition of the chip.</p><p>So, that’s where over the last two decades Intel has been contributing to open source very actively because it truly aligns with our customer obsession. So, I mean, if you think about it, Intel has been the top contributor to the Linux kernel for over 15 years. We are among the top 10 contributors to Kubernetes, and I just learned, I think a couple of days ago, our number is up to number seven now. We are among the top contributors to OpenJDK, the number three contributor to PyTorch. So, if you think in terms of the scale at which we are operating, there are hundreds of people, thousands of developers at Intel that are contributing to these open-source projects.</p><p><strong>Cass:</strong> I know Intel probably doesn’t have a formal opinion, but you yourself, what do you find the most exciting project?</p><p><strong>Gupta: </strong>Oh, several. 
I mean, and I’ve been in the open-source community for over two decades as well. And I find excitement all over the place really. So, some of the names that I shared earlier, think in terms of OpenJDK, right? OpenJDK is the reference implementation of Java. We are talking about 12 million developers who use OpenJDK. And a large number of them continue to use Java on Intel architecture. And as they continue to use Intel architecture, with Sapphire Rapids we have accelerators that have been attached to the silicon as well. Now, we want to make sure customers are able to leverage those accelerators whether you are using crypto or hashing or security. That’s where we are making contributions in OpenJDK that can leverage that acceleration in the Intel silicon. And the way we do the upstream contribution, it goes to the main branch. And because it goes to the main branch, that means it’s available in all the downstream distros.</p><p>So, it doesn’t matter whether you’re using Oracle JDK or <a href="https://aws.amazon.com/corretto/" rel="noopener noreferrer" target="_blank">Amazon Corretto</a> or <a href="https://adoptium.net/" rel="noopener noreferrer" target="_blank">Eclipse Adoptium</a>, it’s available in the downstream distro. So, that pervasive nature of our upstream optimizations available all over the board I think is a key factor why we are excited about it. And that’s sort of the philosophy we take for other projects as well. PyTorch, for example, has the default <a href="https://www.intel.com/content/www/us/en/developer/tools/oneapi/onednn.html" rel="noopener noreferrer" target="_blank">oneDNN</a> library for how you do optimization. And that’s again done by the oneAPI team at Intel. And we do this in a very upstream manner because people will take the PyTorch distribution. PyTorch 2.0 was released a few weeks ago, and that’s where a lot of our optimizations are available. So, you pick a project. 
Linux kernel, again, we do this in the upstream main branch so that it doesn’t matter whether you’re using <a href="https://www.debian.org/" rel="noopener noreferrer" target="_blank">Debian</a> or <a href="https://canonical.com/" rel="noopener noreferrer" target="_blank">Canonical</a>’s <a href="https://ubuntu.com/" rel="noopener noreferrer" target="_blank">Ubuntu</a> or whatever you’re using, those optimizations are available for you there. I mean, overall, if you think about it, Intel has been committed to driving collaboration, standardization, and interoperability in open-source software from the very beginning.</p><p><strong>Cass:</strong> So, that actually leads me to my next question, which is about that issue of interoperability and standardization. I have a feeling of dread whenever “just compile it from source” or “just use it from source” comes up. Because unless the project has reached a level of maturity where nice binaries have been packaged up for my specific version of my operating system, using open-source software that way is a nightmare. How do I replicate the environment? Have I got this going on? Have I understood that, and so on? It’s really difficult to use unless I’m deeply embedded in the community that the software comes from. So, can you talk a little bit about some of the solutions to that problem? Because standardization seems to be a very imaginary phantom when I’m doing this, because I end up having to almost duplicate the exact reference setup that that particular community has used.</p><p><strong>Gupta:</strong> Well, you can go down the rabbit hole very fast, actually. So, as you said very rightly, I think that’s where it’s important that the contributions are done in such a manner that they have the biggest impact.
So, as a developer, let’s say you’re building on a Linux machine, you want to be able to say <a href="https://linux.die.net/man/8/apt-get" rel="noopener noreferrer" target="_blank">apt-get</a> or <a href="https://access.redhat.com/solutions/9934" rel="noopener noreferrer" target="_blank">yum install</a>, and that’s all that you should have to do. And that’s where the onus lies on Intel and its partners: once this gets into upstream, if there is a <a href="https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures" rel="noopener noreferrer" target="_blank">CVE</a>, if there is a vulnerability, if there is a problem, if there is a patch that needs to be applied, it should just go straight up as an upstream contribution. And from upstream it gets delivered in the right patches, and then it goes into the right packages, essentially.</p><p>So that at the end of the day you can just say yum update and voila, you have the right configuration in place. And compiling from source only works for people who are brave at heart, right? Because you don’t know what the dependencies are, et cetera. So, I think within Intel we really think in terms of what contributions we are making upstream, how they are available in downstream distributions, and then how the customers are using them. And then the customer is really giving us feedback: “Hey, this is the next set of investment that you need to make in the open-source project.” And that makes a full circle, essentially. So, that’s how we look at it. So, really, Intel contributes at every layer of the stack, all the way from silicon to the app, where we are creating an environment where open-source developers can deploy their solutions to any corner of the globe.
And that’s sort of the main element here.</p><p><strong>Cass: </strong>Turning to open source and security, you recently tweeted, “Automation is the only path to open-source security.” Can you explain what you meant by that?</p><p><strong>Gupta:</strong> Yeah, absolutely. This was actually from one of the keynotes that I attended at Open Source Summit North America in Vancouver. And <a href="https://www.linkedin.com/in/eric-brewer-1031254/" rel="noopener noreferrer" target="_blank">Eric Brewer</a> was giving that talk. So, that was not my quote; it should be attributed to Eric Brewer from Google. And I fundamentally believe in it. Every tweet that I do, I believe in that element. And really, if you think about why automation is the key, it is the only way to improve security. Because humans are prone to err; machines much less so, because that’s what machines are really good at. They are very good at repetitive, boring tasks. If you say, here is a tool that is integrated as part of the <a href="https://about.gitlab.com/topics/ci-cd/" rel="noopener noreferrer" target="_blank">CI/CD build</a>, here is the CVE vulnerability scanning part, here is the static code analysis part. So, once you start putting those processes in place, once you start putting those tools in place, nobody is saying that the process is going to be perfect, but at least you have the process in place, and you start catching those bugs early as opposed to letting them leak out.</p><p>And then once you find out where the process is failing, you improve the process, you add another tool there, or you figure out what needs to be done. So, the whole point is to make it super boring, where everything is automated. As they say, boring infrastructure is what makes for exciting times. So, that is really the key to how you can improve security.
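</p><p>The automated pipeline described above can be sketched as a simple merge gate: every commit runs the same battery of checks, and any failure blocks it. This is an illustrative sketch only, not Intel’s actual tooling; the check functions below are hypothetical stand-ins for real static analyzers, CVE scanners, and regression suites.</p>

```python
# Illustrative CI gate: every commit runs the same battery of checks,
# and any failing check blocks the merge. The checks are hypothetical
# stand-ins for real tools (static analysis, CVE scanning, regression tests).

def static_analysis(commit_msg: str) -> bool:
    # Stand-in for a real static analyzer.
    return "TODO" not in commit_msg

def cve_scan(commit_msg: str) -> bool:
    # Stand-in for a dependency scan against a CVE database;
    # here it simply rejects a known-vulnerable version string.
    return "log4j-2.14" not in commit_msg

def regression_tests(commit_msg: str) -> bool:
    # Stand-in for the project's regression suite.
    return True

CHECKS = [static_analysis, cve_scan, regression_tests]

def gate(commit_msg: str) -> list:
    """Run every check on a commit; return the names of the checks that failed."""
    return [check.__name__ for check in CHECKS if not check(commit_msg)]

print(gate("fix: tighten input validation"))   # []
print(gate("chore: bump to log4j-2.14"))       # ['cve_scan']
```

<p>The point of the sketch is the shape, not the individual checks: once every commit flows through the same boring, automated battery, failures are caught early instead of leaking out.</p><p>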
And then of course, with open source, as Linus’s law says, “Given enough eyeballs, all bugs are shallow.” So, more people are looking at the source code. They all bring a unique, diverse perspective that really allows someone to ask, what’s going on here? Oh, this doesn’t serve my use case, and maybe I tweak it this way, but yet make sure it goes through the regression test. And for the regression test, again, the performance test, all of that, automation is the key. So, think in terms of push to prod, right? Every time I’m making a new commit to the GitHub repo, what all is happening after that? Is there a static code analysis? Is there a pull request review? Is there a regression test? Is there a performance test? Is there a scalability test? What tests are happening automatically? Because that improves your confidence in putting it into production.</p><p><strong>Cass: </strong>You talked recently about developing a software bill of materials as part of the way to attack this problem. Could you tell us a bit more about that?</p><p><strong>Gupta: </strong>Yeah, absolutely. Now, the software bill of materials comes from the executive order that was issued by the Biden administration. This really happened after the <a href="https://spectrum.ieee.org/tag/log4j" target="_self">Log4j incident</a> a couple of years ago. So essentially, when Log4Shell happened, people were like, “Where is Log4j used? We don’t even know that.” And it took companies a long time to figure out. We understand that this is a vulnerability, but how do we track where it is? And that’s where the executive order came to be. The idea is that the executive order says if you want to operate with the federal government, which everybody wants to, if you want to sell to the federal government, then you need to have a software bill of materials. Now, Intel is primarily a silicon company.
So, in that sense, we have done hardware bills of materials for a number of years; that’s always been the case. We’re just extending that knowledge and domain to the software bill of materials.</p><p>So, essentially, you can take a look at a software bill of materials and understand what the software is made of. You understand the dependencies, you understand the libraries, you understand the version numbers, you understand their licenses. So, there are tools by which you can look at an SBOM, or software bill of materials, and understand it. So, tomorrow, if another Log4Shell happens, then internally you can say, “Hey, where is my SBOM database? Tell me all the software across Intel, for example, that is using this particular version of Log4j,” and then hopefully I can nip it right in the bud. So, that’s the whole premise of SBOMs. And of course, Intel works with the federal government all the time. The executive order requires any new orders, any new business with the government starting, I believe, June 15th, to have an SBOM. And I think there is a retrofit window for the next few months. So, we are ready for that as we launch out.</p><p><strong>Cass: </strong>I want to talk a little bit more about humans and open source, as virtually all major open-source projects have accompanying large human communities. What are some of the other human problems you see recurring in those communities, and what are some of the best ways you’ve seen to address or avoid those problems?</p><p><strong>Gupta:</strong> Yeah, absolutely. First of all, never use humans for the job of a machine. This is a line <a href="https://clip.cafe/the-matrix-1999/never-send-a-human-do-a-machines-job-s1/" rel="noopener noreferrer" target="_blank">delivered by Agent Smith</a> in the movie <em>The Matrix</em>, and I really believe in that. And that’s where automation is the key.
The humans are honestly what makes the projects that much more interesting. Particularly if you are in an open-source project, you really need to think about the social side. I won’t name the company, but at one of my previous companies, we submitted a pull request. We were trying to get into a brand-new community, and we submitted a pull request for a very fundamental change in a very popular open-source project. The pull request was denied within 30 minutes, because the team did not do a good job of understanding the social dynamics, understanding the people, understanding the needs of the project. They just rolled in saying, nope, we want this to happen. Everybody just flipped the table completely: nope, not going to work.</p><p>And then eventually you start building trust, because trust doesn’t happen on day one. Particularly in this open-source world, where you are in coopetition: you are all working on, say, the OpenJDK implementation, but you each have your own product distribution as well. Similarly, you’re all working on Kubernetes, but you have your own managed service or your own distribution around Kubernetes. So, that’s where the people problems happen, actually, because humans are squishy, right? As they say, they have feelings, and those feelings get hurt. And they have corporations who are paying their bills, and those corporations sometimes have competing priorities. So, that’s what I’ve seen constantly all along. But I am part of the Cloud Native Computing Foundation, and I would definitely give very high marks to CNCF in terms of how they have been very diverse, very inclusive, and all sorts of efforts are happening within CNCF to minimize the people problem. But humans are humans; that happens all the time.</p><p><strong>Cass:</strong> I want to turn now to green software and open source’s place in it. And you’ve done a bit of work and commentary in this area.
Can you tell people what green software is and why open source is important there?</p><p><strong>Gupta:</strong> Yeah, absolutely. Well, green software: think in terms of the sustainability of the software, right? And that’s where the <a href="https://greensoftware.foundation/" rel="noopener noreferrer" target="_blank">Green Software Foundation</a>, an open-source foundation under the <a href="https://www.linuxfoundation.org/" rel="noopener noreferrer" target="_blank">Linux Foundation</a>, comes in. They have defined the Green Software Foundation principles. And when you think in terms of green software, you’re asking: when I’m writing the software, is it the most optimal software in terms of CPU, in terms of memory consumption, in terms of execution time? Those are the tenets that come to mind, essentially. When I am running my containers, for example, where am I running my containers? Are they running in a data center powered by conventionally generated electricity, or are they powered by renewable electricity? Can I move my workloads around across the globe? Do I have the flexibility to run my workloads only where the data centers are powered by renewable electricity? So, New Zealand to India to Europe to America back to New Zealand. If you can go around the world moving your workloads, and if that fits your customer demands, those are some of the elements that people talk about in terms of the Green Software Foundation.</p><p>More recently, I think I tweeted about this as well, there was a report that came out from the Green Software Foundation where they were really talking about the state of green software. And one of the highlights was that green software really requires a holistic approach. You can’t just say, “Because I’m using such and such programming language, I’m green.
Or because I’m deploying in such and such data center, I’m green.” That’s an important element. Then there is software legislation, which is super important as well, because governments are starting to require how it needs to be done. And if you think about the emissions from software, given how tech-centric we have become over the years, software emissions are equivalent to those of air, rail, and shipping combined. I think those are the key elements we need to think about: how do we make sure this is treated as an important element, and how do we cut it down?</p><p>And you talked about open source. Open-source solutions are really essential to greening software. And there are lots of different tools available. There is an open-source Carbon Aware SDK that helps you build carbon-aware software solutions with the intelligence to use the greenest energy sources. That’s the part that I was talking about. Then Cloud Carbon Footprint is one example of open-source tooling that is impacting the speed and quality of decarbonization approaches. So, there’s a lot of work that is happening. There is <a href="https://lfenergy.org/" rel="noopener noreferrer" target="_blank">LF Energy</a>, a foundation whose head wrote in a December article that “one company cannot build the technologies needed to mitigate climate change and traditional black box approaches to proprietary software will only inhibit progress.” That only emphasizes the importance of open software.
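</p><p>The workload-shifting idea described above, running jobs wherever the grid is greenest, can be sketched in a few lines. This is an illustrative sketch only: the region names and carbon-intensity figures are invented, and a real scheduler would query live data, for example through the open-source Carbon Aware SDK mentioned here.</p>

```python
# Illustrative carbon-aware placement: given a snapshot of grid carbon
# intensity (gCO2eq/kWh) per candidate region, run the job in the greenest
# one. The figures below are invented for the example; a real system would
# pull live data from a carbon-intensity data source.

def greenest_region(intensities):
    """Return the region whose grid currently has the lowest carbon intensity."""
    return min(intensities, key=intensities.get)

snapshot = {
    "new-zealand": 98.0,   # hypothetical values, not real measurements
    "india": 632.0,
    "europe-west": 210.0,
    "us-east": 390.0,
}

print(greenest_region(snapshot))  # new-zealand
```

<p>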
So, I would highly recommend people go to the Green Software Foundation website, which is greensoftware.foundation, look at their principles, and see what needs to be done.</p><p><strong>Cass:</strong> So, that leads me to my next question, and this is in your role as part of the Cloud Native Computing Foundation. One of the criticisms of cloud computing and this model is, I mean, you talk about, okay, it’s great, you can shift your computing to follow the sun or the wind. But on a personal coding level, doesn’t the low marginal cost of spinning up another virtual server remove the incentive for efficiency? Because it’s like, why do I have to be efficient? I’ll just spin up another server. How do you make efficiency mean something to me personally, very directly, not just in the abstract global sense?</p><p><strong>Gupta:</strong> No, absolutely. And I think you are absolutely right. To some extent, what we have done is make it easy to spin up a <a href="https://en.wikipedia.org/wiki/Virtual_machine" rel="noopener noreferrer" target="_blank">VM</a> without giving enough information about it: “Hey, by the way, when you spin up this VM, the carbon footprint of that VM is going to be such and such.” Not necessarily a metric ton, but 0.006 metric ton. So, I think that transparency needs to come out. What I would love to see is, when I walk into Costco or Safeway, I pick up a product and I see the label on that product; I know how much protein, sugar, and carbohydrate it has. I would love to buy an application that carries its green footprint the same way, where it says, “Hey, by the way, when you are consuming this website or when you’re consuming this API, here is the label on it.” And I think that level of transparency is going to be fundamental.
I would love to walk into Costco and see that, by the time this milk got here, it made its way all the way from such and such farm, and really trace it back: was the farming done in a green manner? The truck that carried it, what did that cost? What is the cumulative footprint? Because once we start raising awareness, and that’s where the legislation angle would really help, and that’s what is rapidly increasing. So, I think it really requires a holistic approach at the policy level, at the software level, at the data center level, at the visibility level, so that once you are aware, hopefully you become more and more conscious, essentially.</p><p><strong>Cass:</strong> Turning back to the technical for a moment. You talked at the start about how one of the reasons you’re involved with these ecosystems is that you want to make sure people are using the full feature set, all the tools available in your silicon. Have there been examples, though, where you’ve looked at the open-source community’s needs and that has led to specific features being put into future revs of the silicon?</p><p><strong>Gupta: </strong>Well, it’s always a two-way cycle, right? Because silicon typically has a longer development cycle. In that sense, when we start working on a piece of silicon, it could take two to five years. And so right about the time we are creating that silicon feature is when the discussion needs to happen as well. Contributing a feature to the Linux kernel could take about the same time, by the time you conceive the idea, propose the idea, write the code, have it reviewed, and get it merged into the main branch and made available in the downstream distros. Because our goal really is that by the time the silicon is released and made available in the CSPs and the data centers and your client devices, we want all that work to be available in the downstream distros.
So, that work happens hand in hand, in terms of what features the community is telling us are important and what feedback we’re giving back to the community.</p><p><strong>Cass: </strong>So, what does Intel have planned on its roadmap in the next year or two with regard to open source?</p><p><strong>Gupta:</strong> Yeah, I mean, my team is the open ecosystem team, and it is responsible for open ecosystem strategy across all of Intel. So, we work with all the BUs, business units, within Intel, helping them define their open ecosystem strategy. My team also runs the <a href="https://www.intel.com/content/www/us/en/developer/topic-technology/open/overview.html" rel="noopener noreferrer" target="_blank">open.intel.com</a> website, so I would highly encourage people to go and find out the latest and greatest things that we are doing over there. We recently launched OpenFL, or <a href="https://openfl.readthedocs.io/en/latest/" rel="noopener noreferrer" target="_blank">Open Federated Learning</a>, a project that was just contributed to the LF AI &amp; Data Foundation. That’s an exciting project, where Intel and Penn Medicine actually worked with their partners to create this federated learning platform. We also continue to sponsor a lot of open-source conferences, whether it’s <a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/" rel="noopener noreferrer" target="_blank">KubeCon</a> or <a href="https://events.linuxfoundation.org/archive/2022/open-source-summit-north-america/about/about-oss/" rel="noopener noreferrer" target="_blank">Open Source Summit</a> or other high-profile developer events.</p><p>So, we’re telling developers that whether you are operating at the silicon level or at the app level, Intel is relevant all around the stack.
So, think about us. And again, we’re not really creating a new language here, per se, but what we are really doing is giving you that leg up on your competition, giving you that performance, that optimization that you really need. Because oftentimes, when customers run their application in the stack, they think, “Oh, Intel is so far down below in the stack, it doesn’t matter.” No, it does matter. And that’s exactly the point we’re trying to make: because your Java application is running in a serverless environment, because the memory footprint is small, because it’s operating a lot more efficiently, that brings the cost of your serverless function down that much lower. So, that’s where developers need to think about the relevance of Intel, and those are the areas where we’re going to keep pushing and telling the story. I really call myself the chief storytelling officer for the efforts that Intel is making, and we would love to hear what else developers would like to hear.</p><p><strong>Cass: </strong>Well, that was fantastic, Arun. I really enjoyed talking with you today. Today on <em>Fixing the Future</em>, we were talking with Arun Gupta of Intel.
And for <em>IEEE Spectrum</em>, I’m Stephen Cass.</p><p><strong>Gupta:</strong> Stephen, thank you for having me.</p>]]></description><pubDate>Wed, 06 Sep 2023 16:19:50 +0000</pubDate><guid>https://spectrum.ieee.org/intels-open-source-strategy</guid><category>Linux</category><category>Cybersecurity</category><category>Green-computing</category><category>Intel</category><category>Open-source</category><category>Type-podcast</category><category>Fixing-the-future</category><dc:creator>Stephen Cass</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/36206535/origin.webp"/></item><item><title>Finding The Wisest Ways To Global AI Regulation</title><link>https://spectrum.ieee.org/wisest-ai-regulation</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=35040164&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/8ed5eab4" width="100%"></iframe><p><strong>Welcome to <em>Fixing the Future</em>, an <em>IEEE Spectrum</em> podcast. I’m senior editor <a href="https://spectrum.ieee.org/u/eliza-strickland" target="_self">Eliza Strickland</a>, and today I’m talking with Stanford University’s <a href="https://law.stanford.edu/directory/russell-wald/" rel="noopener noreferrer" target="_blank">Russell Wald</a> about efforts to regulate artificial intelligence. Before we launch into this episode, I’d like to let listeners know that the cost of membership in the IEEE is currently 50 percent off for the rest of the year, giving you access to perks, including Spectrum Magazine and lots of education and career resources. Plus, you’ll get an excellent IEEE-branded Rubik’s Cube when you enter the code CUBE online. So go to <a href="https://www.ieee.org/membership/join/index.html" rel="noopener noreferrer" target="_blank">IEEE.org/join</a> to get started.</strong></p><p>Over the past few years, people who pay attention to research on artificial intelligence have been astounded by the pace of developments, both the rapid gains in AI’s capabilities and the accumulating risks and dark sides. Then, in November, <a href="https://openai.com/" rel="noopener noreferrer" target="_blank">OpenAI</a> launched the remarkable chatbot ChatGPT, and the whole world started paying attention. Suddenly, policymakers and pundits were talking about the power of AI companies and whether they needed to be regulated. With so much chatter about AI, it’s been hard to understand what’s really happening on the policy front around the world. 
So today, I’m talking with Russell Wald, managing director for policy and society at <a href="https://hai.stanford.edu/" rel="noopener noreferrer" target="_blank">Stanford’s Institute for Human-Centered Artificial Intelligence</a>. Russell, thank you so much for joining me today.</p><p><strong>Russell Wald:</strong> Thanks so much. It’s great to be here.</p><p><strong>Strickland:</strong> We’re seeing a lot of calls for regulation right now for artificial intelligence. And interestingly enough, some of those calls are coming from the CEOs of the companies involved in this technology. The heads of OpenAI and Google have both openly discussed the need for regulations. What do you make of these calls for regulations coming from inside the industry?</p><p><strong>Wald:</strong> Yeah. It’s really interesting that the calls come from inside industry. I think it demonstrates that they are in a race. There is a part here where we look at this and say they can’t stop and collaborate, because you start to get into antitrust issues if you were to go down those lines. So I think that for them, it’s trying to create a more balanced playing field. But of course, what really comes from this, as I see it, is they would rather work now to create some of those regulations than face reactive regulation later. It’s an easier pill to swallow if they can try to shape this now. Of course, the devil’s in the details on these things, right? It’s always, what type of regulation are we talking about when it comes down to it? And the reality is, we need to ensure that when we’re shaping regulations, industry should of course be heard and have a seat at the table, but others need to have a seat at the table as well.
Academia, civil society, people who are really taking the time to study what the most effective regulation is, regulation that will still hold industry’s feet to the fire a bit but allow them to innovate.</p><p><strong>Strickland:</strong> Yeah. And that brings us to the question: what most needs regulating? In your view, what are the social ills of AI that we most need to worry about and constrain?</p><p><strong>Wald:</strong> Yeah. If I’m looking at it from an urgency perspective, for me, the most concerning thing is synthetic media right now. And the question on that, though, is, what is the regulatory area here? I’m concerned about synthetic media because of what will ultimately happen to society if no one has any confidence in what they’re seeing and its veracity. So of course, I’m very worried about deepfakes, elections, and things like this, but I’m just as worried about the Pope in a puffy coat. And the reason I’m worried about that is because if there’s a ubiquitous amount of synthetic media out there, what it’s ultimately going to do is create a moment where no one’s going to have confidence in the veracity of what they see digitally. And when you get into that situation, people will choose to believe what they want to believe, whether it’s an inconvenient truth or not. And that is really concerning.</p><p>So just this week, an <a href="https://commission.europa.eu/index_en" rel="noopener noreferrer" target="_blank">EU Commission</a> vice president noted that they think the platforms should be disclosing whether something is AI-generated. I think that’s the right approach, because you’re not going to be able to stop the creation of a lot of synthetic media, but at a minimum, you can stop the amplification of it, or at least put on some level of disclosure that signals it may not be in reality what it says it is, so that you are at least informed. That’s one of the biggest areas.
The other thing, in terms of overall regulation, that I think we need to look at is more transparency regarding foundation models. There’s just so much data that’s been hoovered up into these models, and they’re very large. What’s going into them? What’s the architecture of the compute? Because if you see harms come out of the back end, having a degree of transparency means you’re going to be able to say, “Aha,” and go back to what the cause very well may have been.</p><p><strong>Strickland:</strong> That’s interesting. So that’s a way to maybe get at a number of different end-user problems by starting at the beginning.</p><p><strong>Wald:</strong> Well, it’s not just starting at the beginning, which is a key part; the primary part is the transparency aspect. That is what is significant, because it allows others to validate. It allows others to understand where some of these models are going and what can ultimately happen with them. It ensures that we have a more diverse group of people at the table, which is something I’m very passionate about. And that includes academia, which historically has had a very vibrant role in this field, but since 2014, what we’ve seen is a slow decline of academia in this space in comparison to where industry’s really taking off. And that’s a concern. We need to make sure that we have a diverse set of people at the table, to ensure that when these models are put out there, there’s a degree of transparency, and that we can help review and be part of that conversation.</p><p><strong>Strickland:</strong> And do you also worry about algorithmic bias in automated decision-making systems that may be used in judicial systems, or legal systems, or medical contexts, things like that?</p><p><strong>Wald:</strong> Absolutely. And so much so in the judicial systems that I think if we are going to talk about where there could be pauses, it’s less so, I guess, on research and development, but very much so on deployment.
So without question, I am very concerned about some of these biases, and biases in high-risk areas. But again, coming back to the transparency side, that is one area where you can have a much richer ecosystem for chasing these down and understanding why they might be happening, in order to limit or mitigate these types of risk.</p><p><strong>Strickland:</strong> Yeah. So you mentioned a pause. Most of our listeners will probably know about the pause letter, as people call it, which was <a href="https://spectrum.ieee.org/ai-pause-letter-stokes-fear" target="_self">calling for a six-month pause in experiments</a> with giant AI systems. And then, a couple months after that, there was an open statement by a number of AI experts and industry insiders saying that we must take seriously the existential risk posed by AI. What do you make of those concerns? Do you take seriously the concern that AI might pose an existential threat to our species? And if so, do you think that’s something that can be regulated or should be thought about in a regulatory context?</p><p><strong>Wald:</strong> So first, I think, like all things in our society these days, everything seems to get so polarized so quickly. So when I look at this and I see people concerned about either existential risk or saying you’re not focused on the immediacy of the immediate harms, I take people at their word, in terms of them coming at this in good faith and from differing perspectives. When I look at this, though, I do worry about the polarization of these sides and our inability to have a genuine, true conversation. In terms of existential risk, is it the number one thing on my mind? No. I’m more worried about human risk with some of these things right now. But to say that existential risk is a zero percent probability, I would say no. And so, of course, we should be having strong and thoughtful dialogues about this, but I think we need to come at it from a balanced approach.
If we look at it this way, the positive of the technology is pretty significant. If we look at <a href="https://spectrum.ieee.org/alphafold-proves-that-ai-can-crack-fundamental-scientific-problems" target="_self">what AlphaFold has done with protein folding</a>, that, in itself, could have such significant impact on health and the targeting of rare diseases with therapies that would not have been available before. At the same time, however, there is a negative side. One area that I am truly concerned about in terms of existential risk is where the human comes into play with this technology. And that’s things like synthetic bio, right? Synthetic bio could create agents that we cannot control, and there can be a lab leak or something that could be really terrible. So it’s how we think about what we’re going to do in a lot of these particular cases.</p><p>At the Stanford Institute for Human-Centered AI, we are a grant-making organization internally for our faculty. And before they even can get started with a project that they want to have funded, they have to go through an ethics and society review statement. And you have to go and you have to say, “This is what I think will happen and these are the dual-use possibilities.” And I’ve been on the receiving end of this, and I’ll tell you, it’s not just a walk in the park with a checklist. They’ve come back and said, “You didn’t think about this. How would you ameliorate this? What would you do?” And just by taking that holistic approach to understanding the full risk of things, this is one step that we could take to start to learn about this as we build this out. But again, just to get back to your point, I think we really have to look at the broad risk of this and have genuine conversations about what this means and how we can address it, and not have this hyperpolarization that I’m starting to see a little bit, and it’s concerning.</p><p>Yeah. 
I’ve been troubled by that too, especially the sort of vitriol that seems to come out in some of these conversations.</p><p>Wald: Everyone can be a little bit over the top here. And I think it’s great that people are passionate about what they’re worried about, but we have to be constructive if we’re going to get anywhere here. So it’s something I very much feel.</p><p>And when you think about how quickly the technology is advancing, what kind of regulatory framework can keep up or can work with that pace of change? I was talking to one computer scientist here in the US who was involved in crafting the blueprint for the AI Bill of Rights who said, “It’s got to be a civil rights framework because that focuses more on the human impact and less on the technology itself.” So he said it can be an Excel spreadsheet or a neural network that’s doing the job, but if you just focus on the human impact, that’s one way to keep up with the changing technology. But yeah, just curious about your ideas about what would work in this way.</p><p>Wald: Yeah. I’m really glad you asked this question. My greater concern is that even if we came up with the optimal regulations tomorrow, regulations that really were ideal, it would be incredibly difficult for government to enforce them right now. My role is really spending more time with policymakers than anything else. And when I spend a lot of time with them, the first thing that I hear is, “I see this X problem, and I want to regulate it with Y solution.” And oftentimes, I’ll sit there and say, “Well, that will not actually work in this particular case. You’re not solving or ameliorating the particular harm that you want to regulate.” And what I see needs to be done first, before we can fully go about thinking through regulations, is a pairing of regulation with investment, right? 
So we don’t have a structure that really looks at this, and if we said, “Okay, we’ll just put out some regulations,” my concern is that we wouldn’t be able to effectively achieve them. So what do I mean by this? First, largely, I think we need more of a national strategy. And part of that national strategy is ensuring that we have policymakers as informed as possible on this. I spend a lot of time in briefings with policymakers. You can tell the interest is growing, but we need more formalized ways of making sure that they understand all of the nuance here.</p><p>The second part of this is we need infrastructure. We absolutely need a degree of infrastructure that ensures that we have a wider range of people at the table. That includes the National AI Research Resource, which I have been personally passionate about for quite a few years. The third part of this is talent. We’ve got to recruit talent. And that means we need to really look at STEM immigration and see what we can do, because we have plenty of data, at least within the US, showing that for students who can’t stay here, the visa hurdles are just too onerous; they pick up and go, for example, to Canada. We need to expand programs like the Intergovernmental Personnel Act that can allow people who are in academia or other nonprofit research to go in and out of government and inform government so that they’re more clear on this.</p><p>Then, finally, we need to bring regulation into this space in a systematic way. And on the regulatory front, I see two parts here. First, there are new, novel regulations that will need to be applied. And again, the transparency part would be one; I would get into mandated disclosures on some things. But the second part of this is there’s a lot of low-hanging fruit with existing regulations in place. 
And I am heartened to see that the FTC and DOJ have at least put out some statements that if you are using AI for nefarious purposes or deceptive practices, or you are claiming something is AI when it’s not, we’re going to come after you. And the reason why I think this is so important is that right now we’re shaping an ecosystem. And when you’re shaping that ecosystem, what you really need is to ensure that there is trust and validity in that ecosystem. And so I frankly think the FTC and DOJ should bring the hammer down on anybody that’s using this for any deceptive practice so that we can actually start to deal with some of those issues. And under that entire regime, you’re more likely to have the most effective regulations if you can staff up some of these agencies appropriately to help with this. And that’s what I find to be one of the most urgent areas. So when we’re talking about regulation, I’m so for it, but we’ve got to pair it up with that level of government investment to back it up.</p><p>Yeah. That would be a really good step to see what is already covered before we go making new rules, I suppose.</p><p>Wald: Right. Right. And there are a lot of existing areas that are easily covered by regulations already in place, and it’s a no-brainer, but I think AI scares people and they don’t understand how those rules apply. I’m also very much for a federal data privacy law. Let’s start early with some of that type of work on what goes into these systems at the very beginning.</p><p>So let’s talk a little bit about what’s going on around the world. The <a href="https://spectrum.ieee.org/ai-regulation-worldwide" target="_self">European Union seemed to get the first start on AI regulations</a>. They’ve been working on the AI Act since, I think, April 2021, when the first proposal was issued, and it’s been winding its way through various committees, and there have been amendments proposed. So what’s the current status of the AI Act? What does it cover? 
And what has to happen next for that to become enforceable legislation?</p><p>Wald: The next step in this is you have the European Parliament’s version of this, you have the Council, and you have the Commission. And essentially, what they need to look at is how they’re going to merge these and what areas of them will go into the actual final law. So in terms of overall timeline, I would say we’re still a good year off from anything coming into enforcement, if not more. But to that end, what is interesting is, again, this rapid pace that you noted and the change in this. The Council and Commission versions really don’t cover foundation models to the same level that the European Parliament’s does. And the European Parliament, because it came a little bit later in this, has this area of foundation models that they’re going to have to look at, which has a lot more key provisions on generative AI. So it’s going to be really interesting what ultimately happens here. And this is the problem with some of this rapidly moving technology. I was just talking about this recently with some federal officials. We did a virtual training last year where we had some of our Stanford faculty come in and record these videos. They’re available to thousands of people in the federal workforce. And they’re great. But they barely touched on generative AI. Because it was last summer, and no one had really gotten into the deep end of that and started addressing the issues related to generative AI. Obviously, they knew generative AI was a thing then. These are brilliant faculty members. But it wasn’t as broad or ubiquitous. And now here we are, and it is the issue du jour. So the interesting thing is how fast the technology is moving. 
And that gets back to my earlier point of why you really need a workforce that gets this, so that they can quickly adapt and make changes that might be needed in the future.</p><p>And does Europe have anything to gain really by being the first mover in this space? Is it just a moral win if they’re the ones who’ve started the regulatory conversation?</p><p>Wald: I do think that they have some things to gain. I do think a moral win is a big win, if you ask me. Sometimes I do think that Europe can be that good conscience and force the rest of the world to think about these things. As some of your listeners might be familiar with, there’s the Brussels Effect. And what the Brussels Effect is, for those that don’t know, is the concept that Europe has such a large market share that it is able to force through its rules and regulations, which, being the most stringent, become the model for the rest of the world. And so a lot of industries just base their entire approach to managing regulation on the most stringent set, and that generally comes from Europe. The challenge for Europe is the degree to which they are investing in the innovation itself. So they have that powerful market share, and it’s really important, but where Europe is going to be in the long run is a little bit to be determined. I will say a former part of the EU, the UK, is actually doing some really, really interesting work here. They are speaking almost to that level of, “Let’s have some degree of regulation, look at existing regulations,” but they’re really invested in the infrastructure piece of providing the tools broadly. So the Brits have a proposal for an exascale computing system that’s £900 million. So the UK is really trying to double down on the innovation side and, where possible, do the regulatory side, because they really want to see themselves as the leader. 
I think Europe might need to look into, as much as possible, fostering an environment that will allow for that same level of innovation.</p><p>Europe seemed to get the first start, but am I right in thinking that the Chinese government may be moving the quickest? There have been a number of regulations not just proposed in the past few years, but actually put into force.</p><p>Wald: Yeah. Absolutely. So there’s the Brussels Effect, but what happens now when you have the Beijing Effect? Because in Beijing’s case, they don’t just have market share; they also have a very strong innovation base. What happened in China was, around March of 2022, there were some regulations that came about related to recommender systems. And under some of these, you can call for redress or for a human to audit the system. It’s hard to get the same level of data out of China, but I’m really interested in looking at how they apply some of these regulations. Because what I really find fascinating is the scale, right? So when you say you allow for a human review, I can’t help but think of this analogy. A lot of people apply for a job, and most people who apply for a job think that they are qualified, or they’re not going to waste their time applying for the job. And what happens if you never get that interview, and what happens if a lot of people don’t get that interview and go and say, “Wait a minute, I deserved an interview. Why didn’t I get one? Go lift the hood of your system so I can have a human review.” I think that there’s a degree of legitimacy to that. The concern is at what level that can be scaled to meet the moment. And so I’m really watching that one. They also had the deep synthesis [inaudible] that came into effect in January of 2023, which spends a lot of time looking at deepfakes. And this year, related to generative AI, there is some initial guidance. 
And what this really demonstrates is a concern that the state has. The People’s Republic of China, or the Communist Party in this case, refers to a need for social harmony, and says that generative AI should not be used for purposes that disrupt that social harmony. So I think you can see concern from the Chinese government about what this could mean for the government itself.</p><p>It’s interesting. Here in the US, you often hear people arguing against regulations by saying, “Well, if we slow down, China’s going to surge ahead.” But I feel like that might actually be a false narrative.</p><p>Wald: Yeah. I have an interesting point on that, though. And I think it refers back to that last point on the recommender systems and the ability for human redress or a human audit of that. I don’t want to say that I’m not for regulations. I very much am for regulations. But I always want to make sure that we’re doing the right regulations, because oftentimes regulations don’t harm the big player, they harm the smaller player; the big player can afford to manage through some of this work. But the other part is there could be a sense of false comfort that comes from some of these regulations because they’re not solving for what you want them to solve for. And so I don’t want to say the US is at a Goldilocks moment. But the US really can see what the Chinese do in this particular space and how it’s working, whether it will work, and whether there are other variables that would come into play and say, “Okay, well, this clearly would work in China, but it could not work in the US.” It’s almost like a test bed. You know how they always say that the states are the incubators of democracy? In a similar way, the US can see, for example, what happened with New York City’s hiring algorithm law. And from there, we can start to say, “Wow, it turns out that regulation doesn’t work. 
Here’s one that we could have here.” My only concern is the rapid pace of this might necessitate that we need some regulation soon.</p><p>Right. And in the US, there have been earlier bills on the federal level that have sought to regulate AI. The Algorithmic Accountability Act last year, which went pretty much nowhere. The word on the street is now that <a href="https://www.schumer.senate.gov/" rel="noopener noreferrer" target="_blank">Senator Chuck Schumer</a> is working on a legislative framework and is circulating that around. Do you expect to see real concrete action here in the US? Do you think there’ll actually be a bill that gets introduced and gets passed in the coming year or two?</p><p>Wald: Hard to tell, I would say, on that. What I would say first is unequivocal. I have been working with policymakers for almost four years now on this specific subject, and it is unequivocal right now that since ChatGPT came out, there has been this awakening to AI. Whereas before, I was trying to beat down their doors and say, “Hey, let’s have a conversation about this,” now I cannot even remotely keep up with the inbound that is coming in. So I am heartened to see that policymakers are taking this seriously. And I have had conversations with numerous policymakers. Without divulging which ones, I will say that Senator Schumer’s office is eager, and I think that’s great. They’re still working out the details. I think what’s important about Schumer’s office is it’s one office that can pull together a lot of senators and a lot of people to look at this. And one thing that I do appreciate about Schumer is that he thinks big and bold. And his level of involvement says to me, “If we get something, it’s not going to be small. It’s going to be big. It’s going to be really important.” So to that end, I would urge the office, as I’ve noted, to not just think about regulations, but also the crucial need for public investment in AI. 
And so those two things don’t necessarily need to be paired into one big mega bill, but they should be considered together in every step that they take: for every regulatory idea you’re thinking about, you should have a degree of public investment that you’re thinking about with it as well, so that we can make sure we have a really more balanced ecosystem.</p><p>I know we’re running short on time. So maybe one last question and then I’ll ask if I missed anything. But for our last question, how might a consumer experience the impact of AI regulations? I was thinking about the GDPR in Europe and how the impact for consumers was they basically had to click an extra button every time they went to a website to say, “Yes, I accept these cookies.” Would AI regulations be visible to the consumer, do you think, and would they change people’s lives in obvious ways? Or would it be much more subtle and behind the scenes?</p><p>Wald: That’s a great question. And I would probably posit back another question: how much do people see AI in their daily lives? I don’t think you see that much of it, but that doesn’t mean it’s not there. That doesn’t mean that there are not municipalities that are using systems that will deny benefits or allow for benefits. That doesn’t mean banks aren’t using this for underwriting purposes. So it’s really hard to say whether consumers will see this. Consumers, I don’t think, see AI in their daily lives, and that’s concerning as well. So I think what we need to ensure is that there is a degree of disclosure related to automated systems. People should be made aware of when automation is being applied, and they should be informed when that’s happening. That could be a regulation that they do see, right? But for the most part, no, I don’t think it’s front and center in people’s minds, and that’s a concern, because it’s not as if it’s not there. It is there. And we need to make sure we get this right. 
Are people going to be harmed throughout this process? Think of the first man falsely arrested because of facial recognition technology, I think it was in 2020, Robert Williams, I believe his name was, and what that meant to his reputation, all of that kind of stuff, for literally having no association with the crime.</p><p>So before we go, is there anything else that you think it’s really important for people to understand about the state of the conversation right now around regulating AI or around the technology itself? Anything that the policymakers you talk with seem to not get that you wish they did?</p><p>Wald: The general public should be aware that what we’re starting to see is the tip of the iceberg. I think there’s been a lot of things that have been in labs, and I think there’s going to be just a whole lot more coming. And with that whole lot more coming, I think that we need to find ways to keep to some kind of balanced argument. Let’s not go to the extreme of, “This is going to kill us all.” Let’s also not allow for a level of hype that says, “AI will fix this.” And so I think we need to be able to have a neutral view that says, “There are some unique benefits this technology will offer humanity, and it can make a significant impact for the better, and that’s a good thing, but at the same time there are some very serious dangers from this. How is it that we can manage that process?”</p><p>To policymakers, what I want them to most be aware of when they’re thinking about this and trying to educate themselves is that they don’t need to know how to use <a href="https://www.tensorflow.org/" rel="noopener noreferrer" target="_blank">TensorFlow</a>. No one’s asking them to understand how to develop a model. What I recommend is that they understand what the technology can do, what it cannot do, and what its societal impacts will be. 
I oftentimes talk to people who say, “I need to know about the deep parts of the technology.” Well, we also need policymakers to be policymakers. And particularly, elected officials have to be an inch deep but a mile wide. They need to know about Social Security. They need to know about Medicare. They need to know about foreign affairs. So we can’t have the expectation for policymakers to know everything about AI. But at a minimum, they need to know what it can and cannot do and what its impact on society will be.</p><p>Russell, thank you so much for taking the time to talk all this through with me today. I really appreciate it.</p><p>Wald: Oh, it’s my pleasure. Thank you so much for having me, Eliza.</p><p>That was Stanford’s Russell Wald, speaking to us about efforts to regulate AI around the world. I’m Eliza Strickland, and I hope you’ll join us next time on <em>Fixing the Future</em>.</p>]]></description><pubDate>Wed, 23 Aug 2023 15:44:14 +0000</pubDate><guid>https://spectrum.ieee.org/wisest-ai-regulation</guid><category>Stanford-university</category><category>Chuck-schumer</category><category>Generative-ai</category><category>Us-congress</category><category>Ai-regulation</category><category>European-union</category><category>United-kingdom</category><category>Type-podcast</category><category>Fixing-the-future</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/35040164/origin.webp"/></item><item><title>Why Cyberwarfare Is Overhyped</title><link>https://spectrum.ieee.org/why-cyberwar-is-overhyped</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=34317905&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/1f90a93a" width="100%">
</iframe><p><strong>David Schneider:</strong> Hi, I’m David Schneider for <em>IEEE Spectrum</em>‘s <em>Fixing the Future</em> podcast. Before we launch into this episode, I’d like to let listeners know that the cost of membership in IEEE is currently 50% off for the rest of the year, giving you access to perks including <em>Spectrum</em> magazine and many education and career resources. Plus, you’ll get a cool IEEE-branded Rubik’s Cube when you enter the code CUBE online. Simply go to <a href="https://www.ieee.org/membership/join/index.html" rel="noopener noreferrer" target="_blank">IEEE.org/join</a> to get started. I’m talking with <a href="https://law.yale.edu/scott-j-shapiro" rel="noopener noreferrer" target="_blank">Scott J. Shapiro</a>, and I’m very excited to talk to him about his new book, which is titled <em><a href="https://us.macmillan.com/books/9780374601171/fancybeargoesphishing" rel="noopener noreferrer" target="_blank">Fancy Bear Goes Phishing: The Dark History of the Information Age in Five Extraordinary Hacks</a></em>. So, Scott, if I can call you that rather than addressing you as professor?</p><p><strong>Scott Shapiro:</strong> Please do. Please do.</p><p><strong>Schneider:</strong> Before we talk about your book, tell me a little bit about yourself.</p><p><strong>Shapiro: </strong>So I’m a professor of law and philosophy at Yale University. My primary appointment is at the law school, where I teach legal philosophy. But like so many people my age, I grew up in the ‘70s and ‘80s and got hooked on personal computers. My parents bought me an Apple II when they first came out. I used a <a href="https://spectrum.ieee.org/the-consumer-electronics-hall-of-fame-tandyradioshack-trs80-model-1" target="_self">TRS-80</a> at school in biology class and got really into coding and really into computers. And I was a computer science major at Columbia University. 
And I had a small database construction company, but then gave it up when I went to law school and then graduate school in philosophy. And I just kind of forgot that I had ever done that.</p><p><strong>Schneider:</strong> And from our earlier conversations, you told me about a class that you were teaching. Can you tell me a little bit about that, since it, I think, leads into the book?</p><p><strong>Shapiro:</strong> What happened was the book before <em>Fancy Bear</em> was called <em><a href="https://www.simonandschuster.com/books/The-Internationalists/Oona-A-Hathaway/9781501109874" rel="noopener noreferrer" target="_blank">The Internationalists</a></em>, and it was a history of the regulation of war over 400 years, from 1600 to 2014, about whether you’re legally allowed to go to war. And a lot of people were asking when the book came out in 2017, “What about cyber war? What about cyber war?” And so I got interested in, “What about cyber war?” At the time, my colleague <a href="https://law.yale.edu/oona-hathaway" rel="noopener noreferrer" target="_blank">Oona Hathaway</a> and I and <a href="https://www.cs.yale.edu/homes/jf/" rel="noopener noreferrer" target="_blank">Joan Feigenbaum</a> from the computer science department, who’s a very famous mathematical cryptographer, applied to the <a href="https://hewlett.org/" rel="noopener noreferrer" target="_blank">Hewlett Foundation</a> for a grant to teach an interdisciplinary course called, I think, The Law and Technology of Cyber Conflict. It was going to be half computer science undergrad majors and half law students, and we would teach both of them the technology and the law. And one of the things about the class was it was the worst class I had ever taught. I don’t think anybody learned anything. I certainly didn’t learn anything. At any given point, half the class was bored and the other half was confused. 
And what I realized was that law and computer science are both very technical subjects, and the intersection is very difficult. And so I thought, “How would I teach students about this new world of hacking and cybersecurity? And how does it relate to legal and ethical questions we have? And how should we regulate it and respond to it?”</p><p><strong>Schneider:</strong> The particular hacks that you go over in the book, they are things that you and your students looked at in depth while you were teaching this course, I take it.</p><p><strong>Shapiro:</strong> Actually, no. What happened was when I taught the course, I really taught the students how to hack. I taught this, by the way, with two other colleagues, both with extensive network and cybersecurity experience. No, we taught them the Linux command line, how the internet works, how packet switching works, how <a href="https://www.wireshark.org/" rel="noopener noreferrer" target="_blank">Wireshark</a> works, how to do network reconnaissance, how to crack passwords. We taught them practical skills and kind of theoretical, conceptual ideas about how our digital ecosystem works, how encryption works, yada yada yada. I was doing research on those stories as I was teaching the course. And so the book doesn’t teach you how to hack. That’s not the point of the book. The point of the book is to teach you how hacking works, how hackers have hacked the internet, and what various types of legal, ethical, psychological, technical, and historical considerations go into the practice of hacking, and how we might reverse the trend and move toward a safer digital ecosystem.</p><p><strong>Schneider:</strong> So you and I have worked now on <a href="https://spectrum.ieee.org/mirai-botnet" target="_self">your article in <em>Spectrum</em> which is based on a section of the book that covers the Mirai malware</a>. 
Maybe you could just take a second to mention the other extraordinary hacks that are in the book.</p><p><strong>Shapiro: </strong>So the book lays out five hacks. The first one is the Robert Morris hack, the Morris worm, the first hack that kind of brought down the public internet, in 1988. And the next is the Bulgarian virus factory of the early 1990s and the mysterious virus writer Dark Avenger, who created the first polymorphic virus engine, which genetically scrambles, so to speak, the code of every virus, making it very difficult for antivirus software to detect. The third is the hack of Paris Hilton in 2005, when her Sidekick was hacked and nude photos were leaked onto the internet. The fourth is where Fancy Bear comes in— <em>Fancy Bear Goes Phishing</em>. Fancy Bear is the name of a lead hacking unit in the Russian military intelligence, the GRU, which hacked the Democratic National Committee in 2016 and leaked the emails and various documents that were found, causing real chaos and turmoil in the 2016 election between Hillary Clinton and Donald Trump. And finally, the Mirai botnet, which was created by three teenagers in order to basically get more market share for their Minecraft servers but ended up knocking out the internet for many people in the United States.</p><p><strong>Schneider: </strong>I’d like really to focus on the conclusion of the book, which you title “The Death of Solutionism.” So I’m going to ask you to explain a little bit what you mean by the death of solutionism, and also maybe you could define for our listeners the terms you use throughout the book, upcode and downcode.</p><p><strong>Shapiro:</strong> So let me first say what solutionism is. 
Solutionism is a term coined by the social critic <a href="https://en.wikipedia.org/wiki/Evgeny_Morozov" rel="noopener noreferrer" target="_blank">Evgeny Morozov</a> to capture this idea, which is part of the culture, that all social problems can have technological solutions. A famous example of solutionism is when <em>Wired UK</em> wrote, “You want to help Africa? There’s an app for that.” It’s just like an app is going to reverse centuries of colonialism and blah blah blah. Cybersecurity is particularly prone to solutionism because we’re always looking for the next-generation firewall, the next-generation intrusion detection system, all these types of technological solutions. The argument of the book is that this is a mistaken way to think about cybersecurity. Cybersecurity is not primarily a technical problem that requires an engineering solution; it is primarily a political problem which requires a human solution. And so one way I try to get at this idea, which you might initially think is counterintuitive, because what could be more technical than cybersecurity, is through a fundamental distinction that I draw between what I call downcode and upcode. Downcode is literally all the code below your fingertips when you’re typing on a computer keyboard: your operating system, the applications, network protocols, yada yada yada. Upcode is anything above your fingertips. So the rules that I follow, my personal ethics, social norms, legal norms, industrial standards, terms of service; these are all the norms that regulate our actions and give us different incentives to behave in certain ways.</p><p><strong>Schneider:</strong> You give some concrete examples of where you see, to use the metaphor, patching the upcode would be useful. 
Maybe you could give our listeners some examples of this kind of tweaking the upcode.</p><p><strong>Shapiro: </strong>One of the things that you want to do from a criminological perspective is you want to tailor whatever policy solution you’re going to offer to the kind of problem that you’re trying to solve. And in particular, when it comes to crime, you want to see what the motivations of the offenders are. Young boys, in particular, get into hacking through gaming culture and, through a process of escalation, start engaging first in cheat sheets and then small little hacks, which can then transmogrify, grow, metastasize into real, very serious criminality. And so the idea is to do in the United States what law enforcement has done in the United Kingdom and the Netherlands, which is to engage in diversion programs: to try to divert people who might have the skills to be, so to speak, on the blue team, on defense, but who, because of various types of social pressures, get pushed to the red team, get pushed to being attackers, and to try to change that. Another thing I’ll just very quickly mention is that, as a legal matter, there’s no software liability for security vulnerabilities. So you can’t sue Microsoft for putting out really bad code resulting in your being hacked. And the Biden administration just released their National Cybersecurity Strategy, where they are finally proposing software liability for security vulnerabilities. And I think that’s a very important move.</p><p><strong>Schneider:</strong> Why is that? I mean, when I go and I buy a ladder at the big-box hardware store, if I fall off of it because it’s faulty, there’s somebody I can sue. 
But why is it that with a piece of software that’s faulty, which can do something much more devastating to me, there’s nobody to sue?</p><p><strong>Shapiro:</strong> In American law, and actually in Anglophone legal systems generally, typically when you sue somebody, you can only sue for physical damage or pain or suffering that happens to you through physical destruction. But you can’t sue for purely economic damages for, let’s say, negligence or recklessness in creating bad software, because economic damages are not generally recoverable in American courts. I mean, that’s a technical reason, but the larger cultural, economic, and political reason is that the United States takes a certain view about technology. In the United States, we have this idea that we don’t want to regulate new technologies for fear of choking off innovation. The same story played out with the car. There’s very, very little regulation on the automobile because the power of the United States was as an industrial behemoth, and the idea is like, “We don’t want to stop that.” We got to the point in the 1960s with Ralph Nader and <em><a href="https://en.wikipedia.org/wiki/Unsafe_at_Any_Speed" rel="noopener noreferrer" target="_blank">Unsafe at Any Speed</a></em> where he came out with reports saying, “Look, this is a really, really dangerous technology. It needs to be regulated.” And that’s how we got seat belts. I think the same thing is true for the internet now, and the book suggests various ways to try to regulate it.</p><p><strong>Schneider:</strong> Tell us more about kind of the upcode tweaks that you’d see around cyber espionage.</p><p><strong>Shapiro: </strong>The point is that there’s almost nothing you can do about cyber espionage. It is part of the upcode of the world. I mean, it’s amazing. It is part of global upcode that nations are allowed to spy on each other. 
In fact, it’s almost encouraged, and you can imagine why it might be encouraged: it’s probably good for nations to know about each other’s military intentions. But whereas you might be able to get law enforcement to really crack down on cybercrime, it’s very, very difficult to crack down on cyber espionage when the United States is the largest spying country on the planet.</p><p><strong>Schneider:</strong> But there was a suggestion there that there might be things to be done about economic espionage.</p><p><strong>Shapiro: </strong>Right. So when we say espionage, we have to distinguish between, let’s say, national security-focused espionage and financial, corporate, or economic espionage. So the United States is the largest national security hacker on the planet, but it almost never engages in corporate espionage. That is, it doesn’t actually hack into Chinese companies, let’s say, and steal their blueprints. China hacked into a defense contractor and stole the entire blueprints for the F-35. Now, there had been talks between Xi and President Obama, and they signed an agreement limiting economic espionage. And that worked out decently until Trump came into office and started a trade war with China, and then the economic and political relationship with China kind of fell apart. But there is room to cut down on espionage through international agreements, because it isn’t the case that financial espionage is legal. So there are things we can do, but the core national security hacking, hacking into leaders and their intelligence agencies to learn about the military and strategic intentions of a country, that’s never going away.</p><p><strong>Schneider: </strong>I mean, your book basically has a kind of optimistic message. You seem to be telling us, if I’ve interpreted you correctly, cyber war is going to be a kind of a simmering thing rather than a complete boiling over.</p><p><strong>Shapiro: </strong>Right. Yeah. 
So in a way, this kind of surprised me just because of the hype associated with cyber war. But in a way, I think studying the history of war before I came to this project made me see things slightly differently because of that background. And so the first thing is just the technical challenges associated with trying to hack a digital infrastructure like that of the United States, which has so many different kinds of operating systems, so many different kinds of applications, so many different versions, so many different network configurations. It’s very, very difficult to hack across platforms like that. But secondly, and I think more importantly, cyberweapons are not great weapons. I mean, it’s very hard to hold territory with cyberweapons. It’s very hard to blow things up with cyberweapons. If you really want to blow things up, use bombs. So when Russia was going to invade Ukraine, which it did, people were saying, “Oh, no. This is going to be the cyber war, cyber war, cyber war.” And I thought to myself, “Why would you burn exploits if you’re Russia when you actually have bombs?” And that’s what happened. Russia had been harassing Ukraine for seven years with cyberattacks. And then when they really wanted to get real, when they really wanted to capture territory or decapitate Ukraine, they sent in the tanks, the troops, the planes, the bombs. That hasn’t worked out so well for them, but a cyber war wasn’t going to be the answer. So what I try to say is that cyberweapons are weapons of the weak. They are used by weak nations to harass stronger nations. But when nations really want to compete and go against each other, they use kinetic weapons like bombs and tanks.</p><p><strong>Schneider: </strong>You make a very nice, I guess, analogy with peasant revolts or rebellions.</p><p><strong>Shapiro:</strong> Yeah. 
So there’s a very well-known book written by the anthropologist <a href="https://politicalscience.yale.edu/people/james-scott" rel="noopener noreferrer" target="_blank">James Scott</a> called <em><a href="https://yalebooks.yale.edu/book/9780300036411/weapons-of-the-weak/" rel="noopener noreferrer" target="_blank">Weapons of the Weak</a></em>. He used to teach at Yale. He was a brilliant, brilliant person. During his fieldwork, he went in the late ’70s to a rice village in Indonesia because he was really interested in why peasants don’t revolt more often. And the Marxists had said, “Oh, they have false consciousness. They really buy into what their lords tell them.” And what Jim Scott hypothesized was that in fact, that’s not at all the case. The peasants hate their lords, and they strike back at them all the time but in this kind of low-level, covert way, ways that he called weapons of the weak because it’s too dangerous to strike at them directly. And I think that’s what cyberweapons are. Cyberweapons are weapons of the weak. It’s when, well, you can’t afford to go all out on another adversary but you really want to cause the other person pain, but not so much pain that they retaliate and escalate. So I think that Russia, North Korea, Iran, they’re the geopolitical peasants, so to speak. Russia is actually a tricky situation because Russia is an intermediate power. It has very strong kinetic capabilities, although much less than it did, and very strong cyberweapons. But ultimately, if they wanted to attack an equal, they would probably go with cyberweapons. And if they really wanted to go into a large war, they would use kinetic weapons.</p><p><strong>Schneider: </strong>I like to end with a kind of philosophical question—you’re a professor of philosophy—so I would venture to say that a lot of our listeners and readers of <em>Spectrum</em> are people who are what you’d call solutionists. They gravitate towards technical fixes to problems. 
And I’m wondering how someone with that mindset could have his or her consciousness raised to realize that maybe the solution isn’t a technical solution.</p><p><strong>Shapiro: </strong>Yeah. So I think that lawyers and engineers are at root the same. We’re both coders. Engineers are downcoders. Lawyers are upcoders. We’re both trying to solve problems using instructions, and we hold ourselves to standards of rationality. Yeah. So that’s what I would say.</p><p><strong>Schneider: </strong>Well, that sounds good. Well, I should thank you. And I hope you have great success with this book because it certainly deserves to be read. That was Scott J. Shapiro speaking to us about his new book <em>Fancy Bear Goes Phishing</em>. I’m David Schneider, and I hope you’ll join us next time on <em>Fixing the Future</em>.</p>]]></description><pubDate>Wed, 26 Jul 2023 15:52:20 +0000</pubDate><guid>https://spectrum.ieee.org/why-cyberwar-is-overhyped</guid><category>Type-podcast</category><category>National-security</category><category>Cyberwar</category><category>Phishing</category><category>Cybersecurity</category><category>Fixing-the-future</category><dc:creator>David Schneider</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/34317905/origin.webp"/><enclosure length="600458" type="application/pdf" url="https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf"/><itunes:explicit/><itunes:subtitle>David Schneider: Hi, I’m David Schneider for IEEE Spectrum‘s Fixing the Future podcast. Before we launch into this episode, I’d like to let listeners know that the cost of membership in IEEE is currently 50% off for the rest of the year. Giving you access to perks, including Spectrum magazine and many education and career resources. Plus, you’ll get a cool IEEE-branded Rubik’s Cube when you enter the code CUBE online. Simply go to IEEE.org/join to get started. I’m talking with Scott J. Shapiro. 
I’m very excited to talk to him about his new book which is titled Fancy Bear Goes Phishing: The Dark History of the Information Age in Five Extraordinary Hacks. So, Scott, if I can call you that rather than addressing you as professor? Scott Shapiro: Please do. Please do. Schneider: Before we talk about your book, tell me a little bit about yourself. Shapiro: So I’m a professor of law and philosophy at Yale University. My primary appointment is at the law school where I teach legal philosophy. But like so many people my age, I grew up in the ‘70s and ‘80s where I got hooked on personal computers. My parents bought me an Apple II when they first came out. Used a TRS-80 at school in biology class and got really into coding and really into computers. And I was a computer science major at Columbia University. And I had a small database construction company, but then gave it up when I went to law school and then graduate school on philosophy. And I just kind of forgot that I had ever done that. Schneider: And from our earlier conversations, you told me about a class that you were teaching. Can you tell me a little bit about that since that, I think, leads into the book about what this class was? Shapiro: What happened was the book before Fancy Bear was called The Internationalists, and it was a history of the regulation of war over 400 years. So it was from 1600 to 2014, about whether you’re allowed legally to go to war. And a lot of people were asking when the book came out in 2017, “What about cyber war? What about cyber war?” And so I got interested in, “What about cyber war?” And so at the time, my colleague Oona Hathaway and I and Joan Feigenbaum from the computer science department, who’s a very famous mathematical cryptographer, we applied to the Hewlett Foundation to get a grant to teach an interdisciplinary course on I think it was called The Law and Technology of Cyber Conflict. 
And so it was going to be half computer science undergrad majors and half law students, and we would teach both of them the technology and the law. And one of the things about the class was it was the worst class I had ever taught. I don’t think anybody learned anything. I certainly didn’t learn anything. At any given point, half the class was bored and the other half was confused. And what I realized was that law and computer science, those are both very technical subjects and the intersection is very difficult. And so I thought, “How would I teach students about this new world of hacking and cybersecurity? And how does it relate to legal and ethical questions we have? And how should we regulate it and respond to it?” Schneider: The particular hacks that you go over in the book, they are things that you and your students looked at in depth while you were teaching this course, I take it. Shapiro: Actually, no. What happened was when I taught the course, I really taught the students how to hack. I taught this, by the way, with two other of my colleagues, both with extensive network experience and cybersecurity experience. No, we taught them the Linux command line, how the internet works, how packet switching works, how Wireshark works, how to do network reconnaissance, how to crack passwords. We taught them practical skills and kind of theoretical conceptual ideas about how our digital ecosystem works, how encryption works, yada yada yada. I was doing research on those stories as I was teaching the course. And so the book doesn’t teach you how to hack. That’s not the point of the book. The point of the book is to teach you how hacking works, how hackers have hacked the internet, and what various types of legal, ethical, psychological, technical, historical considerations go into this practice of hacking and how might we try to reverse the trend, toward a safer digital ecosystem? 
Schneider: So you and I have worked now on your article in Spectrum which is based on a section of the book that covers the Mirai malware. Maybe you could just take a second to mention the other extraordinary hacks that are in the book. Shapiro: So the book lays out five hacks. The first one is the Robert Morris hack, the Morris worm, the first hack that’s kind of brought down the public internet in 1988. And the next is the Bulgarian virus factory of the early 1990s and the mysterious virus writer, Dark Avenger, who created the first polymorphic virus engine which genetically scrambles, so to speak, the code of every virus, making it very difficult for antivirus software to detect. The third is the hack of Paris Hilton in 2005 when her sidekick was hacked and nude photos were leaked onto the internet. The fourth is where Fancy Bear comes in— Fancy Bear Goes Phishing. Fancy Bear is the name of a lead hacking unit in the Russian military intelligence, the GRU, which hacked the Democratic National Committee in 2016 and leaked the emails and various documents that were found and caused real chaos and turmoil in the 2016 election between Hillary Clinton and Donald Trump. And finally, the Mirai botnet, which was created by three teenagers in order to basically get more market share for their Minecraft servers but ended up knocking the internet off for many people in the United States. Schneider: I’d like really to focus on the conclusion of the book which you title as “The Death of Solutionism.” So I’m going to ask you to explain a little bit what you mean by the death of solutionism and also maybe you could tell us or define for our listeners the terms you use throughout the book of upcode and downcode. Shapiro: So let me first say what solutionism is. Solutionism is a term coined by the social critic Evgeny Morozov to kind of capture this idea that is part of the culture, that all social problems can have technological solutions. 
It’s the famous example of solutionism as when Wired UK famously wrote, “You want to help Africa? There’s an app for that.” It’s just like an app is going to reverse centuries of colonialism and blah blah blah. Cybersecurity is particularly prone to solutionism because we’re always kind of looking for the next-generation firewall, the next-generation intrusion detection system, all these types of technological solutions. The argument of the book is that this is a mistaken way to think about cybersecurity. Cybersecurity is not primarily a technical problem that requires an engineering solution, but it primarily is a political problem which requires a human solution. And so one way I try to get at this idea, which you might think initially is counterintuitive because what could be more technical than cybersecurity, is the idea of a fundamental distinction that I draw between what I call downcode and upcode. Downcode are literally all the code below your fingertips when you’re typing on a computer keyboard, see your operating system, the application, network protocols, yada yada yada. Upcode is anything above your fingertips. So the rules that I follow, my personal ethics, social norms, legal norms, all those types of things, industrial standards, terms of service, these are all the norms that regulate our action and give us different incentives to behave in certain ways. Schneider: You give some concrete examples of where you see, to use the metaphor, patching the upcode would be useful. Maybe you could give our listeners some examples of this kind of tweaking the upcode. Shapiro: One of the things that you want to do from a criminological perspective is you want to tailor whatever policy solution you’re going to offer to the kind of problem that you’re trying to solve. And in particular, when it comes to crime, you want to see what are the motivations of the offenders. 
Young boys, in particular, get into hacking through gaming culture and through a process of escalation, start engaging in first cheat sheets and then small little hacks and then they can transmogrify, grow, metastasize into real, very serious criminality. And so the idea to do in the United States what law enforcement has done in the United Kingdom, in the Netherlands which is to try to engage in diversion programs to try to divert people who might have skills to be, so to speak, on the blue team, on defense but because of various types of social pressures, get pushed to the red team, get pushed to being attackers and to try to change that. Another thing I’ll just very quickly mention is as a legal matter, there’s no software liability for security vulnerabilities. So you can’t sue Microsoft for putting out really bad code resulting in your being hacked. And the Biden administration just released their National Cybersecurity Strategy where they are finally proposing software liability for security vulnerabilities. And I think that’s a very important move. Schneider: Why is that? I mean, when I go and I buy a ladder at the big-box hardware store, if I fall off of it because it’s faulty, there’s somebody I can sue. But why is it a piece of software that’s faulty that can do something much more devastating to me, there’s nobody to sue? Shapiro: In American law, and actually, Anglophone legal systems, typically what will happen is when you sue somebody, you can only sue for physical damage or pain or suffering that happens to you through physical destruction. But you can’t sue for purely economic damages for, let’s say, negligence or recklessness in creating bad software because economic damages are not generally recoverable in American courts. There’s also— I mean, that’s a technical reason, but the larger kind of cultural reason, economic and political reason is that the United States takes a certain view about technology. 
In the United States, we have this idea that we don’t want to regulate new technologies for fear of choking off innovation. The same story was with the car. There’s very, very little regulation on the automobile because the power of the United States was as an industrial behemoth, and the idea is like, “We don’t want to stop that.” I think we’ve gotten to the— we got to the point in the 1960s with Ralph Nader and Unsafe at Any Speed where he came out with reports saying, “Look, this is a really, really dangerous technology. It needs to be regulated.” And that’s how we got seat belts. I think the same thing is true for the internet now, I think, where a book has suggested various ways to try to regulate it. Schneider: Tell us more about kind of the upcode tweaks that you’d see around cyber espionage. Shapiro: There’s almost nothing you can do about cyber espionage is the point. The point is that it is part of the upcode of the world. I mean, it’s amazing. It is part of global upcode that nations are allowed to spy on each other. In fact, it’s almost encouraged, and you can imagine why it might be encouraged, that it’s probably good for nations to know about each other’s military intentions. But whereas you might be able to get law enforcement to really crack down on cybercrime, it’s very, very difficult to crack down on cyber espionage when the United States is the largest spying country on the planet. Schneider: But there was a suggestion there that there might be things to be done about economic espionage. Shapiro: Right. So when we say espionage, we have to distinguish between, let’s say, national security-focused espionage and financial, corporate, or economic espionage. So the United States is the largest national security hacker on the planet, but it almost never engages in corporate espionage. That is, it doesn’t actually hack into Chinese companies, let’s say, and steal their blueprints. 
China hacked into defense contractor and stole the entire blueprints for the F-35. Now, there had been a talk between Xi and President Obama, and they signed an agreement limiting economic espionage. And that worked out decently till Trump came into office and started a trade war with China, and then the economic and political relationship with China kind of fell apart. But there is room to cut down on espionage through international agreements because it isn’t the case that financial espionage is legal. So there are things we can do, but the core national security, kind of hacking into leaders and their intelligence agencies to learn about the military and strategic intentions of a country, that’s never going away. Schneider: I mean, your book basically has a kind of optimistic message. You seem to be telling us, if I’ve interpreted you correctly, cyber war is going to be a kind of a simmering thing rather than a complete boiling over. Shapiro: Right. Yeah. So in a way, this kind of surprised me just because of the hype associated with cyber war. But in a way, I think studying the history of war before I came to this project made me see things, I think, slightly differently because of that background. And so the first thing is just the technical challenges associated with trying to hack a digital infrastructure like the United States which has so many different kinds of operating systems, so many different kinds of applications, so many different versions, so many different network configurations. They’re very, very difficult to hack across platforms like that. But secondly, and I think more importantly, cyberweapons are not great weapons. I mean, it’s very hard to hold territory with cyberweapons. It’s very hard to blow things up with cyberweapons. If you really want to blow things up, use bombs. So when Russia was going to invade Ukraine, which it did, people were saying, “Oh, no. 
This is going to be the cyber war, cyber war, cyber war.” And I thought to myself, “Why would you burn exploits if you’re Russia when you actually have bombs?” And that’s what happened. Russia had been harassing Ukraine for seven years with cyberattacks. And then when they really wanted to get real, when they really wanted to capture territory or decapitate Ukraine, they sent in the tanks, the troops, the planes, the bombs. That hasn’t worked out so well for them, but a cyber war wasn’t going to be the answer. So what I try to say is that cyberweapons are weapons of the weak. They are used by weak nations to harass stronger nations. But when nations really want to compete and go against each other, they use kinetic weapons like bombs and tanks. Schneider: You make a very nice, I guess, analogy with peasant revolts or rebellions. Shapiro: Yeah. So there’s a very well-known book written by the anthropologist James Scott called Weapons of the Weak. He used to teach at Yale. He was a brilliant, brilliant person. And what happened during his fieldwork, he went in the late ‘70s to Indonesia to a rice village because he was really interested why do peasants not revolt more often. And the Marxists had said, “Oh, they have false consciousness. They really buy into what their lords tell them.” And what Jim Scott hypothesized was that in fact, that’s not at all the case. The peasants hate their lords, and they strike back at them all the time but in this kind of low-level, covert way, ways that he called weapons of the weak because it’s too dangerous to strike at them directly. And I think that’s what cyberweapons are. Cyberweapons are weapons of the weak. It’s when, well, you can’t afford to go all out on another adversary but you really want to cause the other person pain but not too much pain so that they retaliate and escalate. So I think that Russia, North Korea, Iran, they’re the geopolitical peasants, so to speak. 
Russia is actually a tricky situation because Russia is an intermediate power. It has very strong kinetic capabilities, although much less than it did, and very strong cyberweapons. But ultimately, if they wanted to attack an equal, they would probably go with cyberweapons. And if they really wanted to go into a large war, they would use kinetic weapons. Schneider: I like to end with a kind of philosophical question—you’re a professor of philosophy - so I would venture to say that a lot of our listeners and readers of Spectrum are people who are, what you’d call, solutionists. They gravitate towards technical fixes to problems. And I’m wondering how someone with that mindset could have his or her consciousness raised to realize that maybe the solution isn’t a technical solution. Shapiro: Yeah. So I think that lawyers and engineers are at root the same. We’re both coders. Engineers are downcoders. Lawyers are upcoders. We’re both trying to solve problems using instructions, and we hold ourselves to standards of rationality. Yeah. So that’s what I would say. Schneider: Well, that sounds good. Well, I should thank you. And I hope you have great success with this book because it certainly deserves to be read. That was Scott J. Shapiro speaking to us about his new book Fancy Bear Goes Phishing. I’m David Schneider, and I hope you’ll join us next time on Fixing the Future.</itunes:subtitle><itunes:summary>David Schneider: Hi, I’m David Schneider for IEEE Spectrum‘s Fixing the Future podcast. Before we launch into this episode, I’d like to let listeners know that the cost of membership in IEEE is currently 50% off for the rest of the year. Giving you access to perks, including Spectrum magazine and many education and career resources. Plus, you’ll get a cool IEEE-branded Rubik’s Cube when you enter the code CUBE online. Simply go to IEEE.org/join to get started. I’m talking with Scott J. Shapiro. 
I’m very excited to talk to him about his new book which is titled Fancy Bear Goes Phishing: The Dark History of the Information Age in Five Extraordinary Hacks. So, Scott, if I can call you that rather than addressing you as professor? Scott Shapiro: Please do. Please do. Schneider: Before we talk about your book, tell me a little bit about yourself. Shapiro: So I’m a professor of law and philosophy at Yale University. My primary appointment is at the law school where I teach legal philosophy. But like so many people my age, I grew up in the ‘70s and ‘80s where I got hooked on personal computers. My parents bought me an Apple II when they first came out. Used a TRS-80 at school in biology class and got really into coding and really into computers. And I was a computer science major at Columbia University. And I had a small database construction company, but then gave it up when I went to law school and then graduate school on philosophy. And I just kind of forgot that I had ever done that. Schneider: And from our earlier conversations, you told me about a class that you were teaching. Can you tell me a little bit about that since that, I think, leads into the book about what this class was? Shapiro: What happened was the book before Fancy Bear was called The Internationalists, and it was a history of the regulation of war over 400 years. So it was from 1600 to 2014, about whether you’re allowed legally to go to war. And a lot of people were asking when the book came out in 2017, “What about cyber war? What about cyber war?” And so I got interested in, “What about cyber war?” And so at the time, my colleague Oona Hathaway and I and Joan Feigenbaum from the computer science department, who’s a very famous mathematical cryptographer, we applied to the Hewlett Foundation to get a grant to teach an interdisciplinary course on I think it was called The Law and Technology of Cyber Conflict. 
And so it was going to be half computer science undergrad majors and half law students, and we would teach both of them the technology and the law. And one of the things about the class was it was the worst class I had ever taught. I don’t think anybody learned anything. I certainly didn’t learn anything. At any given point, half the class was bored and the other half was confused. And what I realized was that law and computer science, those are both very technical subjects and the intersection is very difficult. And so I thought, “How would I teach students about this new world of hacking and cybersecurity? And how does it relate to legal and ethical questions we have? And how should we regulate it and respond to it?” Schneider: The particular hacks that you go over in the book, they are things that you and your students looked at in depth while you were teaching this course, I take it. Shapiro: Actually, no. What happened was when I taught the course, I really taught the students how to hack. I taught this, by the way, with two other of my colleagues, both with extensive network experience and cybersecurity experience. No, we taught them the Linux command line, how the internet works, how packet switching works, how Wireshark works, how to do network reconnaissance, how to crack passwords. We taught them practical skills and kind of theoretical conceptual ideas about how our digital ecosystem works, how encryption works, yada yada yada. I was doing research on those stories as I was teaching the course. And so the book doesn’t teach you how to hack. That’s not the point of the book. The point of the book is to teach you how hacking works, how hackers have hacked the internet, and what various types of legal, ethical, psychological, technical, historical considerations go into this practice of hacking and how might we try to reverse the trend, toward a safer digital ecosystem? 
Schneider: So you and I have worked now on your article in Spectrum which is based on a section of the book that covers the Mirai malware. Maybe you could just take a second to mention the other extraordinary hacks that are in the book. Shapiro: So the book lays out five hacks. The first one is the Robert Morris hack, the Morris worm, the first hack that kind of brought down the public internet, in 1988. And the next is the Bulgarian virus factory of the early 1990s and the mysterious virus writer, Dark Avenger, who created the first polymorphic virus engine, which genetically scrambles, so to speak, the code of every virus, making it very difficult for antivirus software to detect. The third is the hack of Paris Hilton in 2005, when her Sidekick was hacked and nude photos were leaked onto the internet. The fourth is where Fancy Bear comes in— Fancy Bear Goes Phishing. Fancy Bear is the name of a lead hacking unit in the Russian military intelligence, the GRU, which hacked the Democratic National Committee in 2016 and leaked the emails and various documents that were found and caused real chaos and turmoil in the 2016 election between Hillary Clinton and Donald Trump. And finally, the Mirai botnet, which was created by three teenagers in order to basically get more market share for their Minecraft servers but ended up knocking the internet out for many people in the United States. Schneider: I’d like really to focus on the conclusion of the book, which you title “The Death of Solutionism.” So I’m going to ask you to explain a little bit what you mean by the death of solutionism, and also maybe you could define for our listeners the terms you use throughout the book of upcode and downcode. Shapiro: So let me first say what solutionism is. Solutionism is a term coined by the social critic Evgeny Morozov to kind of capture this idea that is part of the culture, that all social problems can have technological solutions.
A famous example of solutionism is when Wired UK wrote, “You want to help Africa? There’s an app for that.” It’s just like an app is going to reverse centuries of colonialism and blah blah blah. Cybersecurity is particularly prone to solutionism because we’re always kind of looking for the next-generation firewall, the next-generation intrusion detection system, all these types of technological solutions. The argument of the book is that this is a mistaken way to think about cybersecurity. Cybersecurity is not primarily a technical problem that requires an engineering solution; it is primarily a political problem which requires a human solution. And so one way I try to get at this idea, which you might think initially is counterintuitive because what could be more technical than cybersecurity, is the idea of a fundamental distinction that I draw between what I call downcode and upcode. Downcode is literally all the code below your fingertips when you’re typing on a computer keyboard: your operating system, the applications, network protocols, yada yada yada. Upcode is anything above your fingertips. So the rules that I follow, my personal ethics, social norms, legal norms, all those types of things, industrial standards, terms of service: these are all the norms that regulate our action and give us different incentives to behave in certain ways. Schneider: You give some concrete examples of where you see that, to use the metaphor, patching the upcode would be useful. Maybe you could give our listeners some examples of this kind of tweaking of the upcode. Shapiro: One of the things that you want to do from a criminological perspective is to tailor whatever policy solution you’re going to offer to the kind of problem that you’re trying to solve. And in particular, when it comes to crime, you want to see what the motivations of the offenders are.
Young boys, in particular, get into hacking through gaming culture and through a process of escalation: they start engaging first in cheat sheets and then small little hacks, and that can then transmogrify, grow, metastasize into real, very serious criminality. And so the idea is to do in the United States what law enforcement has done in the United Kingdom and in the Netherlands, which is to engage in diversion programs: to try to divert people who might have the skills to be, so to speak, on the blue team, on defense, but who because of various types of social pressures get pushed to the red team, get pushed to being attackers, and to try to change that. Another thing I’ll just very quickly mention is that, as a legal matter, there’s no software liability for security vulnerabilities. So you can’t sue Microsoft for putting out really bad code resulting in your being hacked. And the Biden administration just released their National Cybersecurity Strategy, where they are finally proposing software liability for security vulnerabilities. And I think that’s a very important move. Schneider: Why is that? I mean, when I go and I buy a ladder at the big-box hardware store, if I fall off of it because it’s faulty, there’s somebody I can sue. But why is it that for a piece of software that’s faulty, which can do something much more devastating to me, there’s nobody to sue? Shapiro: In American law, and actually in Anglophone legal systems generally, typically what will happen is when you sue somebody, you can only sue for physical damage or for pain or suffering that happens to you through physical destruction. But you can’t sue for purely economic damages for, let’s say, negligence or recklessness in creating bad software, because economic damages are not generally recoverable in American courts. That’s a technical reason, but the larger cultural, economic, and political reason is that the United States takes a certain view about technology.
In the United States, we have this idea that we don’t want to regulate new technologies for fear of choking off innovation. The same story played out with the car. There was very, very little regulation on the automobile because the power of the United States was as an industrial behemoth, and the idea was, “We don’t want to stop that.” We got to the point in the 1960s with Ralph Nader and Unsafe at Any Speed, where he came out with reports saying, “Look, this is a really, really dangerous technology. It needs to be regulated.” And that’s how we got seat belts. I think the same thing is true for the internet now, and the book suggests various ways to try to regulate it. Schneider: Tell us more about the kind of upcode tweaks that you’d see around cyber espionage. Shapiro: There’s almost nothing you can do about cyber espionage; that’s the point. It is part of the upcode of the world. I mean, it’s amazing. It is part of global upcode that nations are allowed to spy on each other. In fact, it’s almost encouraged, and you can imagine why it might be encouraged: it’s probably good for nations to know about each other’s military intentions. But whereas you might be able to get law enforcement to really crack down on cybercrime, it’s very, very difficult to crack down on cyber espionage when the United States is the largest spying country on the planet. Schneider: But there was a suggestion there that there might be things to be done about economic espionage. Shapiro: Right. So when we say espionage, we have to distinguish between, let’s say, national security-focused espionage and financial, corporate, or economic espionage. So the United States is the largest national security hacker on the planet, but it almost never engages in corporate espionage. That is, it doesn’t actually hack into Chinese companies, let’s say, and steal their blueprints.
China hacked into a defense contractor and stole the entire blueprints for the F-35. Now, there had been talks between Xi and President Obama, and they signed an agreement limiting economic espionage. And that worked out decently till Trump came into office and started a trade war with China, and then the economic and political relationship with China kind of fell apart. But there is room to cut down on espionage through international agreements, because it isn’t the case that financial espionage is legal. So there are things we can do, but the core national security espionage, kind of hacking into leaders and their intelligence agencies to learn about the military and strategic intentions of a country, that’s never going away. Schneider: I mean, your book basically has a kind of optimistic message. You seem to be telling us, if I’ve interpreted you correctly, that cyber war is going to be a kind of a simmering thing rather than a complete boiling over. Shapiro: Right. Yeah. So in a way, this kind of surprised me, just because of the hype associated with cyber war. But in a way, I think studying the history of war before I came to this project made me see things slightly differently because of that background. And so the first thing is just the technical challenges associated with trying to hack a digital infrastructure like that of the United States, which has so many different kinds of operating systems, so many different kinds of applications, so many different versions, so many different network configurations. It’s very, very difficult to hack across platforms like that. But secondly, and I think more importantly, cyberweapons are not great weapons. I mean, it’s very hard to hold territory with cyberweapons. It’s very hard to blow things up with cyberweapons. If you really want to blow things up, use bombs. So when Russia was going to invade Ukraine, which it did, people were saying, “Oh, no.
This is going to be the cyber war, cyber war, cyber war.” And I thought to myself, “Why would you burn exploits if you’re Russia when you actually have bombs?” And that’s what happened. Russia had been harassing Ukraine for seven years with cyberattacks. And then when they really wanted to get real, when they really wanted to capture territory or decapitate Ukraine, they sent in the tanks, the troops, the planes, the bombs. That hasn’t worked out so well for them, but a cyber war wasn’t going to be the answer. So what I try to say is that cyberweapons are weapons of the weak. They are used by weak nations to harass stronger nations. But when nations really want to compete and go against each other, they use kinetic weapons like bombs and tanks. Schneider: You make a very nice, I guess, analogy with peasant revolts or rebellions. Shapiro: Yeah. So there’s a very well-known book written by the anthropologist James Scott called Weapons of the Weak. He used to teach at Yale. He was a brilliant, brilliant person. And during his fieldwork, he went in the late ‘70s to a rice village in Malaysia, because he was really interested in why peasants don’t revolt more often. And the Marxists had said, “Oh, they have false consciousness. They really buy into what their lords tell them.” And what Jim Scott hypothesized was that in fact, that’s not at all the case. The peasants hate their lords, and they strike back at them all the time, but in this kind of low-level, covert way, in ways that he called weapons of the weak, because it’s too dangerous to strike at them directly. And I think that’s what cyberweapons are. Cyberweapons are weapons of the weak. It’s when you can’t afford to go all out against another adversary but you really want to cause the other side pain, though not so much pain that they retaliate and escalate. So I think that Russia, North Korea, Iran, they’re the geopolitical peasants, so to speak.
Russia is actually a tricky situation because Russia is an intermediate power. It has very strong kinetic capabilities, although much less than it did, and very strong cyberweapons. But ultimately, if they wanted to attack an equal, they would probably go with cyberweapons. And if they really wanted to go into a large war, they would use kinetic weapons. Schneider: I’d like to end with a kind of philosophical question, since you’re a professor of philosophy. I would venture to say that a lot of our listeners and readers of Spectrum are people who are what you’d call solutionists. They gravitate towards technical fixes to problems. And I’m wondering how someone with that mindset could have his or her consciousness raised to realize that maybe the solution isn’t a technical solution. Shapiro: Yeah. So I think that lawyers and engineers are at root the same. We’re both coders. Engineers are downcoders. Lawyers are upcoders. We’re both trying to solve problems using instructions, and we hold ourselves to standards of rationality. Yeah. So that’s what I would say. Schneider: Well, that sounds good. I should thank you, and I hope you have great success with this book, because it certainly deserves to be read. That was Scott J. Shapiro speaking to us about his new book Fancy Bear Goes Phishing. I’m David Schneider, and I hope you’ll join us next time on Fixing the Future.</itunes:summary><itunes:keywords>Type-podcast, National-security, Cyberwar, Phishing, Cybersecurity, Fixing-the-future</itunes:keywords></item><item><title>Explainer: Why No-Code Software Isn't Just for Developers</title><link>https://spectrum.ieee.org/no-code-explainer</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=33803075&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/f0a1002a" width="100%">
</iframe><p><strong>Dina Genkina:</strong> Hi. I’m Dina Genkina for <em>IEEE Spectrum</em>’s <em>Fixing the Future</em>. This episode is brought to you by IEEE Xplore, the digital library with over 6 million pieces of the world’s best technical content. In the November issue of <em>IEEE Spectrum</em>, one of our <a href="https://spectrum.ieee.org/ai-code-generation-language-models" target="_self">most popular stories was about code that writes its own code</a>. Here to probe a little deeper is the author of that article, <a href="https://www.linkedin.com/in/craig-s-smith-58680010/" rel="noopener noreferrer" target="_blank">Craig Smith</a>. Craig is a former <em>New York Times</em> correspondent and host of his own podcast, <em><a href="https://www.eye-on.ai/about" rel="noopener noreferrer" target="_blank">Eye On AI</a></em>. Welcome to the podcast, Craig.</p><p>Craig Smith: Hi.</p><p>Genkina: Thank you for joining us. So you’ve been doing a lot of reporting on these new artificial intelligence models that can write their own code, to whatever capacity they can do that. So maybe we can start by highlighting a couple of your favorite examples, and you can explain a little bit about how they work.</p><p>Smith: Yeah. Absolutely. First of all, the reason I find this so interesting is that I don’t code myself. And I’ve been talking to people for a couple of years now about when artificial intelligence systems will get to the point that I can talk to them, and they’ll write a computer program based on what I’m asking them to do, and it’s an idea that’s been around for a long time. And one thing is a lot of people think this exists already because they’re used to talking to Siri or Alexa or Google Assistant or some other virtual assistant. And you’re not actually writing code when you talk to Siri or Alexa or Google Assistant. That changed when OpenAI built GPT-3, the successor to GPT-2, which was a much larger language model.
And these large language models are trained on huge corpuses of data and based primarily on something called a transformer algorithm. They were really focused on text. On human natural language.</p><p>But kind of a side effect was that there’s a lot of HTML code out on the internet. And GPT-3, it turns out, learned HTML code just as it learned English natural language. The first application of these large language models’ ability to write code has been by GitHub. Together with <a href="https://openai.com/" rel="noopener noreferrer" target="_blank">OpenAI</a> and <a href="https://www.microsoft.com/en-us/" rel="noopener noreferrer" target="_blank">Microsoft</a>, they created a product called <a href="https://github.com/features/copilot" rel="noopener noreferrer" target="_blank">Copilot</a>. And it’s <a href="https://en.wikipedia.org/wiki/Pair_programming" rel="noopener noreferrer" target="_blank">pair programming</a>. I mean, oftentimes when programmers are writing code, they have someone— they work in teams. In pairs. And one person writes kind of the initial code and the other person cleans it up or checks it and tests it. And if you don’t have someone to work with, then you have to do that yourself, and it takes twice as long. So GitHub created this thing based on GPT-3 called Copilot, and it acts as that second set of hands. And so when you begin to write a line of code, it’ll autocomplete that line, just as happens with Microsoft Word now or any word processing program. And then the coder can either accept or modify or delete that suggestion. GitHub recently did a survey and found that coders can code twice as fast using Copilot to help autocomplete their code as they can working on their own.</p><p>Genkina: Yeah. So maybe we could put a bit of a framework to this. So I guess programming in its most basic form, like back in the old days, used to be with these punch cards, right?
And when you get down to what you’re telling the computer to do, it’s all ones and zeros. So the base way to talk to a computer is with ones and zeros. But then people developed more complicated tools so that programmers don’t have to sit around and type ones and zeros all day long. And there are programming languages, from simpler programming languages to slightly more sophisticated, higher-level programming languages, so to speak. And they’re kind of closer to words, although definitely not natural language. They will use some words, but they still have to follow this somewhat rigid logical structure. So I guess one way to think about it is that these tools are kind of moving on to the next level of abstraction above that, or trying to do so.</p><p>Smith: That’s right. And that started really in the forties, or I guess in the fifties, at a company called Remington Rand. A woman named <a href="https://www.womenshistory.org/education-resources/biographies/grace-hopper" rel="noopener noreferrer" target="_blank">Grace Hopper</a> introduced <a href="https://en.wikipedia.org/wiki/FLOW-MATIC" rel="noopener noreferrer" target="_blank">a programming language that used English language</a> vocabulary. So that instead of having to write in symbols, mathematical symbols, the programmers could write import, for example, to ingest some other piece of code. And that started this ladder of increasingly efficient languages to where we are today with things like <a href="https://www.python.org/" rel="noopener noreferrer" target="_blank">Python</a>. I mean, they’re primarily English language words and different kinds of punctuation.
There isn’t a lot of mathematical notation in them.</p><p>So what’s happened with these large language models, what happened with HTML code and is now happening with other programming languages, is that you’re able to speak to them instead of— as with <a href="https://aws.amazon.com/codewhisperer/" rel="noopener noreferrer" target="_blank">CodeWhisperer</a> or Copilot, where you write in computer code or programming language and the system autocompletes what you started writing, you can write in natural language and the computer will interpret that and write the code associated with it. And that opens up this vista of what I’m dreaming of, of being able to talk to a computer and have it write a program.</p><p>The problem with that is that, as I was saying, natural language is so imprecise that you either need to learn to speak or write in a very constrained way for the computer to understand you. Even then, there’ll be ambiguities. So there’s a group at Microsoft that has come up with this system called TiCoder. It’s just a research paper now. It hasn’t been productized. You tell the computer that you want it to do something in very spare, imprecise language. And the computer will see that there are several ways to code that phrase, and so the computer will come back and ask for clarification of what you mean. And that interaction, that back-and-forth, then refines the meaning or the intent of the person who’s talking or writing instructions to the computer to the point that it’s adequately precise, and then the computer generates the code.</p><p>So I think eventually there will still be very high-level data scientists who learn coding languages, but this opens up software development to a large swath of people who will no longer need to know a programming language. They’ll just need to understand how to interact with these systems.
And that will require them to understand, as you were saying at the outset, the logical flow of a program and the syntax of programming languages, and to be aware of the ambiguities in natural language.</p><p>And some of that’s already finding its way into products. There’s a company called <a href="https://www.akkio.com/" rel="noopener noreferrer" target="_blank">Akkio</a> that has a no-code platform. It’s primarily a drag-and-drop interface. And it works on tabular data primarily. But you drag in a spreadsheet and drop it into their interface, and then you click a bunch of buttons on what you want to train the program on. What you want the program to predict. These are predictive models. And then you hit a button, and it trains the program. And then you feed it your untested data, and it will make the predictions on that data. It’s used for a lot of fascinating things. Right now, it’s being used in the political sphere to predict who in a list of 20,000 contacts will donate to a particular party or campaign. So it’s really changing political fundraising.</p><p>And Akkio has just come out with a new feature which I think you’ll start seeing in a lot of places. One of the issues in working with data is cleaning it up. Getting rid of outliers. Rationalizing the language. You may have a column where some things are written out in words. Other things are numbers. You need to get them all into numbers. Things like that. That kind of clean-up is extremely time-consuming and tedious. And Akkio has actually tapped into a large language model. So they’re using a large language model; it’s not their own model. But you just write in natural language into the interface what you want done. Say you have three columns that give the day, the month, and the year.
You want to combine that into a single number so that the computer can deal with it more easily. You can just tell the interface by writing in simple English what you want. And you can be fairly imprecise in your English, and the large language model will understand what you mean. So it’s an example of how this new ability is being implemented in products. I think it’s pretty amazing. And I think you’ll see that spread very quickly. I mean, this is all a long way from my talking to a computer and having it create a complicated program for me. These are still very basic.</p><p>Genkina: Yeah. So you mention in your article that this isn’t actually about to put coders out of a job, right? So is it just because you think it’s not there yet, the technology’s not at that level? Or is that fundamentally not what’s happening, in your view?</p><p>Smith: Well, the technology certainly isn’t there yet. It’s going to be a very long time before— well, I don’t know that it’s going to be a long time because things have moved so quickly. But it’ll be a while yet before you’ll be able to speak to a computer and have it write complex programs. But what will happen, and I think fairly quickly, is that with things like <a href="https://alphacode.deepmind.com/" rel="noopener noreferrer" target="_blank">AlphaCode</a> in the background and things like TiCoder that interact with the user, people won’t need to learn computer programming languages any longer in order to code. They will need to understand the structure of a program, the logic and syntax, and they’ll have to understand the nuances and ambiguities in natural language. I mean, if you turned it over to someone who wasn’t aware of any of those things, I think it would not be very effective.</p><p>But I can see that computer science students will learn <a href="https://isocpp.org/" rel="noopener noreferrer" target="_blank">C++</a> and Python because you learn the basics in any field that you’re going into.
But the actual application will be through natural language working with one of these interactive systems. And what that allows is just a much broader population to get involved in programming and developing software. And we really need that because there is a real shortage of capable computer programmers and coders out there. The world is going through this digital transformation. Every process is being turned into software. And there just aren’t enough people to do that. That’s what’s holding that transformation back. So as you broaden the population of people that can do that, more software will be developed in a shorter period of time. I think it’s very exciting.</p><p>Genkina: So maybe we can get into a little bit of the copyright issues surrounding this, because, for example, GitHub Copilot sometimes spits out bits of code that are found in the training data that it was trained on. So there’s a pool of training data from the internet, like you mentioned in the beginning, and the output this auto-completer suggests is some combination of all the inputs, maybe put together in a creative way, but sometimes just straight copies of bits of code from the input. And some of these input bits of code have copyright licenses.</p><p>Smith: Yeah. Yeah. That’s interesting. I remember when sampling started in the music industry. And I thought it would be impossible to track down the author of every bit of music that was sampled and work out some kind of a licensing deal that would compensate the original artist. But that’s happened, and people are very quick to spot samples that use their original music if they haven’t been compensated. In this realm, to me, it’s a little different. It’ll be interesting to see what happens. Because the human mind ingests data and then produces theoretically original thought, but that thought is really just a jumble of everything that you’ve ingested. Yeah.
I had this conversation recently about whether the human mind is really just a large language model that has been trained on all of the information that it’s been exposed to.</p><p>And it seems to me that, on the one hand, it’s impossible to trace every input for any particular output as these systems get larger. And I just think it’s unreasonable to expect every piece of human creative output to be copyrighted and tracked through all of the various iterations that it goes through. I mean, you look at the history of art. Every artist in the visual arts is drawing on his predecessors and using ideas and things to create something new. I haven’t looked at any particular cases where it’s glaring that the code or the language is clearly identifiable as coming from one source. I don’t know how to put it. I think the world is getting so complex that once creative output is out there, unless, as with sampling in music, it’s clearly identifiable, it’s going to be impossible to credit and compensate everyone whose output became an input to that computer program.</p><p>Genkina: My next question was about who should get paid for code by these big AIs, but I guess you kind of suggested a model where all the training data get a little bit of— everyone responsible for the training data would get a little bit of royalties for every use. I guess, long term, that’s probably not super viable, because a few generations from now there’s going to be no one that contributed to the training data.</p><p>Smith: Yeah. But that is interesting, who owns these models that are written by a computer. It’s something I really haven’t thought about. And I don’t know if you’ll cut this out, but have you read anything about that topic? About who will own— if AlphaCode, DeepMind’s AlphaCode, becomes a product, and it writes a program that becomes extremely useful and is used around the world and generates potentially a lot of revenue, who owns that model?
I don’t know.</p><p>Genkina: So what is your expectation for what will happen in this arena in the coming 5 to 10 years or so?</p><p>Smith: Well, in terms of auto-generated code, I think it’s going to progress very quickly. I mean, transformers came out in 2017, I think. And a few years later, you have AlphaCode writing complete programs from natural language. And now, in the same year, you have TiCoder, a system that refines the natural-language intent. I think in five years, yeah, we’ll be able to write basic software programs from speech. It’ll take much longer to write something like GPT-3. That’s a very, very complicated program. But the more that these algorithms are commoditized, the easier I think combining them will be. So in 10 years, yeah, I think it’s possible that you’ll be able to talk to a computer (again, not an untrained person, but a person who understands how programming works) and program a fairly complex program. This cycle kind of builds on itself, because the more people who can participate in development, the more software gets created on the one hand, but it also frees up the high-level data scientists to develop novel algorithms and new systems. And so I see it as accelerating, and it’s an exciting time. [music]</p><p>Genkina: Today on <em>Fixing the Future</em>, we spoke to Craig Smith about AI-generated code.
I’m Dina Genkina for <em>IEEE Spectrum</em> and I hope you’ll join us next time on Fixing the Future.<br/></p>]]></description><pubDate>Mon, 05 Jun 2023 20:00:02 +0000</pubDate><guid>https://spectrum.ieee.org/no-code-explainer</guid><category>Type-podcast</category><category>Fixing-the-future</category><category>Coding</category><category>Software</category><category>Software-engineering</category><dc:creator>Dina Genkina</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/33803075/origin.webp"/></item><item><title>The Electrome: The Next Great Frontier For Biomedical Technology</title><link>https://spectrum.ieee.org/electrome-new-biomedical-frontier</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=33729438&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/fd7ee467" width="100%">
</iframe><p><strong>Stephen Cass:</strong> Welcome to <em>Fixing the Future</em>, an <em>IEEE Spectrum</em> podcast. This episode is brought to you by <a href="https://ieeexplore.ieee.org/Xplore/home.jsp" target="_blank">IEEE Xplore</a>, the digital library with over 6 million technical documents and free search. I’m senior editor Stephen Cass, and today I’m talking with a former <em>Spectrum</em> editor, Sally Adee, about her new book, <em><a href="https://www.hachettebookgroup.com/titles/sally-adee/we-are-electric/9780306826641/?lens=hachette-books" target="_blank">We Are Electric: The New Science of Our Body’s Electrome</a></em>. Sally, welcome to the show.</p><p><strong>Sally Adee:</strong> Hi, Stephen. Thank you so much for having me.</p><p><strong>Cass:</strong> It’s great to see you again, but before we get into exactly what you mean by the body’s electrome and so on, I see that in researching this book, you actually got yourself zapped quite a bit in a number of different ways. So I guess my first question is: are you okay?</p><p><strong>Adee:</strong> I mean, as okay as I can imagine being. Unfortunately, there’s no experimental sort of condition and control condition. I can’t see the self I would have been in the multiverse version of myself that didn’t zap themselves. So I think I’m saying yes.</p><p><strong>Cass:</strong> The first question I have then is what is an electrome?</p><p><strong>Adee: </strong>So the electrome is this word, I think, that’s been burbling around the bioelectricity community for a number of years. The first time it was committed to print was in a 2016 paper by this guy called <a href="https://bio.kuleuven.be/faculty/00008007" target="_blank">Arnold De Loof</a>, a researcher out in Europe. But before that, a number of the researchers I spoke to for this book told me that they had started to see it in papers that they were reviewing.
And I think it wasn’t sort of defined consistently always because there’s this idea that seems to be sort of bubbling to the top, bubbling to the surface, that there are these electrical properties that the body has, and they’re not just epiphenomena, and they’re not just in the nervous system. They’re not just action potentials, but that there are electrical properties in every one of our cells, but also at the organ level, potentially at the sort of entire system level, that people are trying to figure out what they actually do.</p><p>And just as action potentials aren’t just epiphenomena, but are actually control mechanisms, they’re looking at how these electrical properties work in the rest of the body, like in the cells—membrane voltages in skin cells, for example, are involved in wound healing. And there’s this idea that maybe these are an epigenetic variable that we haven’t been able to conscript yet. And there’s such promise in it, but the problem is that a lot of the research is being done across really far-flung scientific communities, some in developmental biology, some of it in oncology, a lot of it in neuroscience, obviously. But what this whole idea of the electrome is— I was trying to pull this all together because the idea behind the book is I really want people to just develop this umbrella of bioelectricity, call it the electrome, call it bioelectricity, but I kind of want the word electrome to do for bioelectricity research what the word genome did for molecular biology. So that’s basically the spiel.</p><p><strong>Cass:</strong> So I want to surf back to a couple of points you raised there, but first off, just for people who might not know, what is an action potential?</p><p><strong>Adee:</strong> So the action potential is the electrical mechanism by which the nervous signal travels, either to actuate motion at the behest of your intent or to gain sensation and sort of perceive the world around you.
And that’s the electrical part of the electrochemical nervous impulse. So everybody knows about neurotransmitters at the synapse and— well, not everybody, but probably <em>Spectrum</em> listeners. They know about the serotonin that’s released and all these other little guys. But the thing is you wouldn’t be able to have that release without the movement of charged particles called ions in and out of the nerve cell that actually send this impulse down and allow it to travel at a rate of speed that’s fast enough to let you yank your hand away from a hot stove when you’ve touched it, before you even sort of perceive that you did so.</p><p><strong>Cass:</strong> So that actually brings me to my next question. So you may remember in some of <em>Spectrum</em>’s editorial meetings, when we were deciding if a tech story was for us or not, that literally, we would often ask, “Where is the moving electron? Where is the moving electron?” But bioelectricity is not really based on moving electrons. It’s based on these ions.</p><p><strong>Adee:</strong> Yeah. So let’s take the neuron as an example. So what you’ve got is— let me do like a— imagine a spherical cow for a neuron, okay? So you’ve got a blob and it’s a membrane, and that separates the inside of your cell from the outside of your cell. And this membrane is studded with, I think, tens of thousands of little pores called ion channels. And the pores are not just sieve pores. They’re not inert. They’re really smart. And they decide which ions they like. Now, let’s go to the ions. Ions are suffusing your extracellular fluid, all the stuff that bathes you. It’s basically the reason they say you’re 66 percent water or whatever. This is like sea water. It’s got sodium, potassium, calcium, etc., and these ions are charged particles.</p><p>So when you’ve got a cell, it likes potassium, the neuron, it likes potassium, it lets it in. It doesn’t really like sodium so much. It’s got very strong preferences.
So in its resting state, which is its happy place, those channels allow potassium ions to enter. And those are probably where the electrons are, actually, because an ion, it’s got a plus-one charge or a minus-one charge based on— but let’s not go too far into it. But basically, the cell allows the potassium to come inside, and in its resting state, which is its happy place, the separation of the potassium from the sodium causes, for all sorts of complicated reasons, a charge inside the cell that is minus 70 degree— sorry, minus 70 millivolts with respect to the extracellular fluid.</p><p><strong>Cass:</strong> Before I read your book, I kind of had the idea that how neurons use electricity was, essentially, settled science, very well understood, all kind of squared away, and this was how the body used electricity. But even when it came to neurons, there’s a lot of fundamentals, kind of basic things about how neurons use electricity that we really only established relatively recently. Some of the research you’re talking about is definitely not a century-old kind of basic science about how these things work.</p><p><strong>Adee:</strong> No, not at all. In fact, there was a paper released in 2018 that I didn’t include, which I’m really annoyed by. I just found it recently. Obviously, you can’t find all the papers. But it’s super interesting because it blends that whole sort of ionic basis of the action potential with another thing in my book that’s about how cell development is a little bit like a battery getting charged. Do you know how cells assume an electrical identity that may actually be in charge of the cell fate that they meet? And so we know abou— sorry, the book goes into more detail, but it’s like when a cell is a stem cell or a fertilized egg, it’s depolarized. It’s at zero. And then when it becomes a nerve cell, it goes to that minus 70 that I was talking about before. If it becomes a fat cell, it’s at minus 50. If it’s musculoskeletal tissue, it goes to minus 90.
Liver cells are like around minus 40. And so you’ve got real identitarian diversity, electrical diversity in your tissues, which has something to do with what they end up doing in the society of cells. So this paper that I was talking about, the 2018 paper, they actually looked at neurons. This was work from Denis Jabaudon at the University of Geneva, and they were looking at how neurons actually differentiate. Because when baby neurons are born— your brain is made of all kinds of cells. It’s not just cortical cells. There’s a staggering variety of classes of neurons. And as cells actually differentiate, you can watch their voltage change, just like you can do in the rest of the body with these electrosensitive dyes. So that’s an aspect of the brain that we hadn’t even realized until 2018.</p><p><strong>Cass: </strong>And that all leads me to my next point, which is if we think bioelectricity, we think, okay, nerves zapping around. But neurons are not the only bioelectric network in the body. So talk about some of the other sorts of electrical networks we have that are completely, or largely, separate from our neural networks?</p><p><strong>Adee:</strong> Well, so <a href="https://as.tufts.edu/biology/people/faculty/michael-levin" target="_blank">Michael Levin</a> is a professor at Tufts University. He does all kinds of other stuff, but mainly, I guess, he’s like the <a href="https://en.wikipedia.org/wiki/Paul_Erd%C5%91s" target="_blank">Paul Erdős</a> of bioelectricity, I like to call him, because he’s sort of the central node. He’s networked into everybody, and I think he’s really trying to, again, also assemble this umbrella of bioelectricity to study this all in the aggregate. So his idea is that we are really committed to this idea of bioelectricity being in charge of our sort of central communications network, the way that we understand the environment around us and the way that we understand our ability to move and feel within it.
But he thinks that bioelectricity is also how— that the nervous system kind of hijacked this mechanism, which is way older than any nervous system. And he thinks that we have another underlying network that is about our shape, and that this is bioelectrically mediated in really important ways, which impacts development, of course, but also wound healing. Because if you think about the idea that your body understands its own shape, what happens when you get a cut? How does it heal it? It has to go back to some sort of memory of what its shape is in order to heal it over. In animals that regenerate, they have a completely different electrical profile after they’ve been—so after they’ve had an arm chopped off.</p><p>So it’s a very different electrical— yeah, it’s a different electrical process that allows a starfish to regrow a limb than the one that allows us to scar over. So you’ve got this thing called a wound current. Your skin cells are arranged in this real tight wall, like little soldiers, basically. And what’s important is that they’re polarized in such a way that if you cut your skin, all the sort of ions flow out in a certain way, which creates this wound current, which then generates an electric field, and the electric field acts like a beacon. It’s like a bat signal, right? And it guides in these little helper cells, the macrophages that come and gobble up the mess and the keratinocytes and the guys who build it back up again and scar you over. And it starts out strong, and as you scar over, as the wound heals, it very slowly goes away. By the time the wound is healed, there’s no more field. And what was super interesting is this guy, <a href="https://www.linkedin.com/in/nuccitelli/" target="_blank">Richard Nuccitelli</a>, invented this thing called the Dermacorder that’s able to sense and evaluate the electric field. And he found that in people over the age of 65, the wound field is less than half of what it is in people under 25. 
And that actually goes in line with another weird thing about us, which is that our bioelectricity— or sorry, our regeneration capabilities are time-dependent and tissue-dependent.</p><p>So you probably know that the intestinal tissue regenerates all the time. You’re going to digest next week’s food with totally different cells than this morning’s food. But also, we’re time-dependent because when we’re just two cells, if you cleave that in half, you get identical twins. Later on during fetal development, it’s totally scarless, which is something we found out, because when we started being able to do fetal surgery in the womb, it was determined that we heal, basically, scarlessly. Then we’re born, and then between the ages of 7 and 11— until we are between the ages of 7 and 11, you chop off a fingertip, it regenerates perfectly, including the nail, but we lose that ability. And so it seems like the older we get, the less we regenerate. And so they’re trying to figure out now how— various programs are trying to figure out how to try to take control of various aspects of our sort of bioelectrical systems to do things like radically accelerate healing, for example, or how to possibly re-engage the body’s developmental processes in order to regenerate preposterous things like a limb. I mean, it sounds preposterous now. Maybe in 20 years, it’ll just be.</p><p><strong>Cass:</strong> I want to get into some of the technologies that people are thinking of building on this sort of new science. Part of it is that the history of this field, both scientifically and technologically, has really been plagued by the shadow of quackery. And can you talk a little bit about this and how, on the one hand, there’s been some things we’re very glad that we stopped doing some very bad ideas, but it’s also had this shadow on sort of current research and trying to get real therapies to patients?</p><p><strong>Adee:</strong> Yeah, absolutely. 
That was actually one of my favorite chapters to write, was the spectacular pseudoscience one, because, I mean, that is so much fun. So it can be boiled down to the fact that we were trigger happy because we see this electricity, we’re super excited about it. We start developing early tools to start manipulating it in the 1700s. And straight away, it’s like, this is an amazing new tool, and there’s all these sort of folk cures out there that we then decide that we’re going to take— not into the clinic. I don’t know what you’d call it, but people just start dispensing this stuff. This is separate from the discovery of endogenous electrical activity, which is what <a href="https://www.unibo.it/en/university/who-we-are/our-history/famous-people-and-students/luigi-galvani" target="_blank">Luigi Galvani</a> famously discovered in the late 1700s. He starts doing this. He’s an anatomist. He’s not an electrician. Electrician, by the way, is what they used to call the sort of literati who were in charge of discovery around electricity. And it had a really different connotation at the time, that they were kind of like the rocket scientists of their day.</p><p>But Galvani’s just an anatomist, and he starts doing all of these experiments using these new tools to zap frogs in various ways and permutations. And he decides that he has answered a whole different old question, which is how does man’s will animate his hands and let him feel the world around him? And he says, “This is electrical in nature.” This is a long-standing mystery. People have been bashing their heads against it for the past 100, 200 years. But he says that this is electrical, and there’s a big, long fight. I won’t get into too much between <a href="https://en.wikipedia.org/wiki/Alessandro_Volta" target="_blank">Volta</a>, the guy who invented the battery, and Galvani. 
Volta says, “No, this is not electrical.” Galvani says, “Yes, it is.” But owing to events, when Volta invents the battery, he basically wins the argument, not because Galvani was wrong, but because Volta had created something useful. He had created a tool that people could use to advance the study of all kinds of things. Galvani’s idea that we have an endogenous electrical sort of impulse, it didn’t lead to anything that anybody could use because we didn’t have tools sensitive enough to really measure it. We only sort of had indirect measurements of it.</p><p>And after Galvani dies in ignominy, his nephew decides to take it upon himself to rescue, single-handedly, his uncle’s reputation. The problem is, the way he does it is with a series of grotesque, spectacular experiments. He very famously reanimated— well, zapped until they shivered, the corpses of all these dead guys, dead criminals, and he was doing really intense things like sticking electrodes connected to huge voltaic piles, proto-batteries, into the rectums of dead prisoners, which would make them sit up halfway and point at the people who were assembled, this very titillating stuff. Many celebrities of the time would crowd around these demonstrations.</p><p>Anyway, so Galvani basically—or sorry, Aldini, the nephew, basically just opens the door to everyone to be like, “Look what we can do with electricity.” Then in short order, there’s a guy who creates something called the Celestial Bed, which is a thing— they’ve got rings, they’ve got electric belts for stimulating the nethers. The Celestial Bed is supposed to help infertile couples. This is how sort of just wild electricity was in those days. It’s kind of like— you know how everybody went crazy for crypto scams last year? Electricity was like the crypto of 1828 or whatever, 1830s. And the Celestial Bed, so people would come and they would pay £9,000 to spend a night in it, right? Well, not at the time. That’s in today’s money.
And it didn’t even use electricity. It used the idea of electricity. It was homeopathy, but electricity. You don’t even know where to start. So this is the sort of caliber of pseudoscience, and this has really echoed down through the years. That was in the 1800s. But when people submit papers or grant applications, I heard more than one researcher say to me— people would look at this electric stuff, and they’d be like, “Does anyone still believe this shit?” And it’s like, this is rigorous science, but it’s been just tarnished by the association with this.</p><p><strong>Cass:</strong> So you mentioned wound care, and the book talks about some of the ways [inaudible] wound care. But we’re also looking at other really ambitious ideas like regenerating limbs as part of this extension of wound care. And also, you make the point of certainly doing diagnostics and then possibly treatments for things like cancer, thinking about cancer in a very different way than the really very, very tightly focused genetic view we have of cancer now, and thinking about it kind of literally in a wider context. So can you talk about that a little bit?</p><p><strong>Adee: </strong>Sure. And I want to start by saying that I went to a lot of trouble to be really careful in the book. I think cancer is one of those things that— I’ve had cancer in my family, and it’s tough to talk about it because you don’t want to give people the idea that there’s a cure for cancer around the corner when this is basic research and intriguing findings because it’s not fair. And I sort of struggled. I thought for a while, like, “Do I even bring this up?” But the ideas behind it are so intriguing, and if there were more research dollars thrown at it or pounds or whatever, Swiss francs, you might be able to really start moving the needle on some of this stuff. The idea is, there are two electrical— oh God, I don’t want to say avenues, but it is unfortunately what I have to do.
There are two electrical avenues to pursue in cancer. The first one is something that a researcher called Mustafa Djamgoz at Imperial College here in the UK has been studying since the ’90s, because he used to be a neurobiologist. He was looking at vision. And he was talking to some of his oncologist friends, and they gave him some cancer cell lines, and he started looking at the behavior of cancer cells, the electrical behavior of cancer cells, and he started finding some really weird behaviors.</p><p>Cancer cells that should not have had anything to do with action potentials, like from prostate cancer lines, when he looked at them, they were oscillating like crazy, as if they were nerves. And then he started looking at other kinds of cancer cells, and they were all oscillating, and they were doing this oscillating behavior. So he spent like seven years sort of bashing his head against the wall. Nobody wanted to listen to him. But now, way more people are investigating this. There’s going to be an ion channels and cancer symposium, I think, later this month, actually, in Italy. And he found, and a lot of other researchers like this woman, Annarosa Arcangeli, they have found that the reason that cancer cells may have these oscillating properties is that this is how they communicate with each other that it’s time to leave the nest of the tumor and start invading and metastasizing. Separately, there have been very intriguing— this is really early days. It’s only a couple of years that they’ve started noticing this, but there have been a couple of papers now. People who are on certain kinds of ion channel blockers for neurological conditions like epilepsy, for example, they have cancer profiles that are slightly different from normal, which is that if they do get cancer, they are slightly less likely to die of it. In the aggregate.
Nobody should be starting to eat ion channel blockers.</p><p>But they’re starting to zero in on which particular ion channels might be responsible, and it’s not just one that you and I have. These cancer kinds, they are like an expression of something that normally only exists when we’re developing in the womb. It’s part of the reason that we can grow ourselves so quickly, which of course, makes sense because that’s what cancer does when it metastasizes, it grows really quickly. So there’s a lot of work right now trying to identify how exactly to target these. And it wouldn’t be a cure for cancer. It would be a way to keep a tumor in check. And this is part of a strategy that has been proposed in the UK a little bit for some kinds of cancer, like the triple-negative kind that just keep coming back. Instead of subjecting someone to radiation and chemo, especially when they’re older, sort of just really screwing up their quality of life while possibly not even giving them that much more time, what if instead you sort of tried to treat cancer more like a chronic disease, keep it managed, and maybe that gives a person like 10 or 20 years? That’s a huge amount of time. And all while not messing with their quality of life.</p><p>This is a whole conversation that’s being had, but that’s one avenue. And there’s a lot of research going on in this right now that may yield fruit sort of soon. The much more sci-fi version of this, the studies have mainly been done in tadpoles, but they’re so interesting. So Michael Levin, again, and his postdoc at the time, I think, Brook Chernet, they were looking at what happens— so it’s uncontroversial that as a cancer cell— so let’s go back to that society of cells thing that I was talking about. You get a fertilized egg, it’s depolarized, zero, but then its membrane voltage charges, and it becomes a nerve cell or skin cell or a fat cell.
What’s super interesting is that when those responsible members of your body’s society decide to abscond and say, “Screw this. I’m not participating in society anymore. I’m just going to eat and grow and become cancer,” their membrane voltage also changes. It goes much closer to zero again, almost like it’s having a midlife crisis or whatever.</p><p>So what they found, what Levin and Chernet found is that you can manipulate those cellular electrics to make the cell stop behaving cancerously. And so they did this in tadpoles. They had genetically engineered the tadpoles to express tumors, but when they made sure that the cells could not depolarize, most of those tadpoles did not express the tumors. And when they later took tadpoles that already had the tumors and they repolarized the voltage, those tumors, that tissue started acting like normal tissue, not like cancer tissue. But again, this is the sci-fi stuff, but the fact that it was done at all is so fascinating, again, from that epigenetic sort of body pattern perspective, right?</p><p><strong>Cass:</strong> So sort of staying with that sci-fi stuff, except this one is even closer to reality. And this goes back to some of these experiments in which you zapped yourself. Can you talk a little bit about some of these sorts of devices that you can wear which appear to really enhance certain mental abilities? And some of these you [inaudible].</p><p><strong>Adee:</strong> So the kit that I wore, I actually found out about it while I was at <em>Spectrum</em>, when I was at DARPATech. And this program manager told me about it, and I was really stunned to find out that just by running two milliamps of current through your brain, you would be able to improve your— well, it’s not that your ability is improved. It was that you could go from novice to expert in half the time that it would take you normally, according to the papers. And so I really wanted to try it.
I was trying to actually get an expert feature written for <em>IEEE Spectrum</em>, but they kept ghosting me, and then by the time I got to <em><a href="https://www.newscientist.com/" target="_blank">New Scientist</a></em>, I was like, fine, I’m just going to do it myself. So they let me come over, and they put this kit on me, and it was this very sort of custom electrodes, these things, they look like big daisies. And this guy had brewed his own electrolyte solution and sort of smashed it onto my head, and it was all very slimy.</p><p>So I was doing this video game called <em><a href="https://en.wikipedia.org/wiki/DARWARS" target="_blank">DARWARS Ambush!</a></em>, which is just like a training— it’s a shooter simulation to help you with shooting. So it was a Gonzo stunt. It was not an experiment. But he was trying to replicate the conditions of me not knowing whether the electricity was on as much as he could. So he had it sort of behind my back, and he came in a couple of times and would either pretend to turn it on or whatever. And I was practicing and I was really bad at it. That is not my game. Let’s just put it that way. I prefer driving games. But it was really frustrating as well because I never knew when the electricity was on. So I was just like, “There’s no difference. This sucks. I’m terrible.” And that sort of inner sort of buzz kept getting stronger and stronger because I’d also made bad choices. I’d taken a red-eye flight the night before. And I was like, “Why would I do that? Why wouldn’t I just give myself one extra day to recover before I go in and do this really complicated feature where I have to learn about flow state and electrical stimulation?” And I was just getting really tense and just angrier and angrier. And then at one point, he came in after my, I don’t know, 5th or 6th, I don’t know, 400th horrible attempt where I just got blown up every time. 
And then he turned on the electricity, and I could totally feel that something had happened because I have a little retainer in my mouth just at the bottom. And I was like, “Whoa.” But then I was just like, “Okay. Well, now this is going to suck extra much because I know the electricity is on, so it’s not even a freaking sham condition.” So I was mad.</p><p>But then the thing started again, and all of a sudden, all the sort of buzzing little angry voices just stopped, and it was so profound. And I’ve talked about it quite a bit, but every time I remember it, I get a little chill because it was the first time I’d ever realized, number one, how pissy my inner voices are and just how distracting they are and how abusive they are. And I was like, “You guys suck, all of you.” But somebody had just put a bell jar between me and them, and that feeling of being free from them was profound. At first, I didn’t even notice because I was just busy doing stuff. And all of a sudden, I was amazing at this game and I dispatched all of the enemies and whatnot, and then afterwards, when they came in, I was actually pissed because I was just like, “Oh, now I get it right and you come in after three minutes. But the last times when I was screwing it up, you left me in there to cook for 20 minutes.” And they were like, “No, 20 minutes has gone by,” which I could not believe. But yeah, it was just a really fairly profound experience, which is what led me down this giant rabbit hole in the first place. Because when I wrote the feature afterwards, all of a sudden I started paying attention to the whole TDCS thing, which I hadn’t yet. I had just sort of been focusing [crosstalk].</p><p><strong>Cass:</strong> And that’s transcranial—?</p><p><strong>Adee:</strong> Oh sorry, transcranial direct current stimulation.</p><p><strong>Cass: </strong>There you go. Thank you. Sorry.</p><p><strong>Adee: </strong>No. Yeah, it’s a mouthful. 
But then that’s when I started to notice that quackery we were talking about before. All that history was really informing the discussion around it because people were just like, “Oh, sure. Why don’t you zap your brain with some electricity and you become super smart.” And I was like, “Oh, did I like fall for the placebo effect? What happened here?” And there was this big study from Australia where the guy was just like, “When we average out all of the effects of TDCS, we find that it does absolutely nothing.” Other guys stimulated a cadaver to see if it would even reach the brain tissue and concluded it wouldn’t. But that’s basically what started me researching the book, and I was able to find answers to all those questions. But of course, TDCS, I mean, it’s finicky just like the electrome. It’s like your living bone is conductive. So when you’re trying to put an electric field on your head, basically, you have to account for things like how thick is that person’s skull in the place that you want to stimulate. They’re still working out the parameters.</p><p>There have been some really good studies that show sort of under which particular conditions they’ve been able to make it work. It does not work for all conditions for which it is claimed to work. There is some snake oil. There’s a lot left to be done, but a better understanding of how this affects the different layers of the sort of, I guess, call it, electrome, would probably make it something that you could use replicably. Is that a word? But also, that applies to things like deep brain stimulation, which, also, for Parkinson’s, it’s fantastic. But they’re trying to use it for depression, and in some cases, it works so—I want to use a bad word—amazingly. Just Helen Mayberg, who runs these trials, she said that for some people, this is an option of last resort, and then they get the stimulation, and they just get back on the bus. That’s her quote. And it’s like a switch that you flip.
And for other people, it doesn’t work at all.</p><p><strong>Cass:</strong> Well, the book is packed with even more fantastic stuff, and I’m sorry we don’t have time to go through it, because literally, I could sit here and talk to you all day about this.</p><p><strong>Adee:</strong> I didn’t even get into the frog battery, but okay, that’s fine. Fine, fine, skip the frog. Sorry, I’m just kidding. I’m kidding, I’m kidding.</p><p><strong>Cass: </strong>And thank you so much, Sally, for chatting with us today.</p><p><strong>Adee: </strong>Oh, thank you so much. I really love talking about it, especially with you.</p><p><strong>Cass: </strong>Today on <em>Fixing the Future</em>, we’re talking with Sally Adee about her new book on the body’s electrome. For <em>IEEE Spectrum</em>, I’m Stephen Cass.</p>]]></description><pubDate>Tue, 23 May 2023 21:00:04 +0000</pubDate><guid>https://spectrum.ieee.org/electrome-new-biomedical-frontier</guid><category>Type-podcast</category><category>Cancer</category><category>Regenerative-medicine</category><category>Fixing-the-future</category><category>Electrome</category><dc:creator>Stephen Cass</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/33729438/origin.webp"/></item><item><title>Linking Chips With Light For Faster AI</title><link>https://spectrum.ieee.org/photonics-and-ai</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=33465189&width=980"/><br/><br/><div class="rm-embed embed-media"><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/d67a26bb" width="100%">
</iframe></div><div>
<p><strong>Stephen Cass:</strong> Hi, I’m Stephen Cass, for <em>IEEE Spectrum</em>’s <em>Fixing the Future</em>. This episode is brought to you by IEEE Xplore, the digital library with over 6 million pieces of the world’s best technical content. Today I have with me our own <a href="https://spectrum.ieee.org/u/samuel-k-moore" target="_self">Samuel K. Moore</a>, who has been covering the semiconductor beat pretty intensely for <em>Spectrum</em> for— well, how many years has it been, Sam?<br/></p>
<p>
<strong>Sam Moore: </strong>7 years, I would say.
	</p>
<p>
<strong>Cass: </strong>So Sam knows computers down at the level most of us like to ignore, hidden underneath all kinds of digital abstractions. This is down where all the physics and material science that make the magic possible lurk. And recently, <a href="https://spectrum.ieee.org/optical-interconnects" target="_self">you wrote an article about the race to replace electricity with light</a> inside computers, which is letting chips talk to each other with fiber optics rather than just using fiber optics to communicate between computers. I guess my first question is, what’s wrong with electricity, Sam?
	</p>
<p>
<strong>Moore:</strong> I have nothing against electricity, Stephen. Wow… It knows what it did. But really, this all comes down to inputs and outputs. There just aren’t enough coming off of processors for what they want to do in the future. And electronics can only push signals so far before they kind of melt away, and they consume quite a bit of power. So the hope is that you will have better bandwidth between computer chips, consuming less power.
	</p>
<p>
<strong>Cass:</strong> So it’s not just a question of raw speed, though, when you talk about these signals and melting away, because I think the signal speed of copper is about, what, two-thirds the speed of light in a vacuum. But then I was kind of surprised to see that, in a fiber optic cable, the speed of light is about two-thirds of that in a vacuum. So what’s going on? What’s kind of the limitations of pushing a signal down a wire?
	</p>
<p>
<strong>Moore:</strong> Sure. A wire is not an ideal conductor. It’s really resistance, inductance, and capacitance, all of which will reduce the size and speed of a signal. And this is particularly a problem at high frequencies, which are more susceptible, particularly to the capacitance side of things. So you might start with a beautiful 20 GHz square wave at the edge of the chip, and by the time it gets to the end of the board, it will be an imperceptible bump. Light, on the other hand, doesn’t work like that. It has things that— there are things that mess with signals in optical fibers, but they work at much, much, much longer length scales.
	</p>
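Moore’s point about RC losses can be roughed out with a first-order low-pass model. This sketch is illustrative only: the 50-ohm/2-picofarad trace values are made-up numbers for the example, not measurements of any real interconnect.

```python
import math

def rc_attenuation_db(freq_hz: float, r_ohm: float, c_farad: float) -> float:
    """Magnitude of a first-order RC low-pass response, in decibels:
    |H(f)| = 1 / sqrt(1 + (2*pi*f*R*C)^2)."""
    h = 1.0 / math.sqrt(1.0 + (2 * math.pi * freq_hz * r_ohm * c_farad) ** 2)
    return 20 * math.log10(h)

# Illustrative (made-up) trace values: 50 ohms, 2 pF of parasitic capacitance.
for f_ghz in (1, 5, 20):
    db = rc_attenuation_db(f_ghz * 1e9, r_ohm=50, c_farad=2e-12)
    print(f"{f_ghz:>2} GHz: {db:6.1f} dB")
```

Even with these toy numbers, the 20 GHz component loses more than 20 dB while the 1 GHz component barely attenuates, which is why a fast square wave "melts away" into a bump: its high-frequency harmonics die first.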
<p>
<strong>Cass: </strong>Okay, great. So you talked about there being two companies that are in this sort of race to put light inside computers. Can we talk a little bit about them? Who are they, and what are their different approaches?
	</p>
<p>
<strong>Moore:</strong> Sure, these are two startups, and they’re not alone. There are very likely other startups in stealth mode, and there are giants like Intel that are also in this race as well. But what these two startups, <a href="https://ayarlabs.com/" rel="noopener noreferrer" target="_blank">Ayar Labs</a>, that’s A-Y-A-R—and I’m probably pronouncing it a little weird—and <a href="https://avicena.tech/" rel="noopener noreferrer" target="_blank">Avicena</a>, those are the two that I profiled in the January issue. And they’re representative of two very different sort of takes on this same idea. Let me start with Ayar, which is really sort of the— it’s sort of what we’re using right now but on steroids. Like the links that you find already in data centers, it uses infrared laser light, kind of breaks it into several bands. I can’t remember if it’s 8 or 16, but so they’ve got multiple channels kind of in each fiber. And it uses silicon photonics to basically modulate and detect the signals. And what they bring to the table is they have, one, a really good laser that can sit on a board next to the chip, and also they’ve managed to shrink down the silicon photonics, the modulation and the detection and the associated electronics that makes that actually happen, quite radically compared to what’s out there right now. So really they are sort of just— I mean, it’s weird to call them a conservative play because they really do have great technology, but it is just sort of taking what we’ve got and making it work a lot better.
	</p>
<p>
		Avicena is doing something completely different. They aren’t using lasers at all. They’re using 
		<a href="https://spectrum.ieee.org/search/?q=microleds" target="_self">microLEDs</a>, and they’re blue. These are made of gallium nitride. And why this might work is that there is a rapidly growing microLED display industry with big backers like Meta and Apple. So the problems within that you might find with a new industry are kind of getting solved by other people. And so what Avicena does is they basically make a little microLED display on a chiplet, and they stick a particular kind of fiber. It’s sort of like an imaging fiber. It’s similar to if you’ve ever had an endoscopy exam, you’ve had a close encounter with one of these. And basically, it has a bunch of fiber channels in it. The one that they use has like 300 in this half-a-millimeter bundle. And they stick the end of that fiber on top of the display so that each microLED in the display has its own channel. And so you have this sort of parallel path for light to come off of the chip. And they modulate the microLEDs, just flicker them. And they found a way to do that a lot faster than other people. People thought there were going to be real hard limits to this. But they’ve gotten as high as ten gigabits per second. Their first product will probably be in the three gigabytes-- gigabits, sorry, kind of area, but it’s really surprisingly rapid. People weren’t thinking that microLEDs could do this, but they can. And so that should provide a very powerful pathway between microprocessors.
	</p>
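Taking the rough figures in this exchange at face value (about 300 fiber cores per bundle, a few gigabits per second per microLED), the back-of-envelope aggregate bandwidth works out as follows. These are conversational estimates, not datasheet numbers.

```python
channels = 300          # fiber cores in one ~0.5 mm imaging-fiber bundle
first_product_gbps = 3  # per-LED rate in the "three gigabits kind of area"
demonstrated_gbps = 10  # fastest per-LED rate mentioned above

print(f"First product: ~{channels * first_product_gbps} Gb/s per bundle")  # ~900 Gb/s
print(f"Demonstrated:  ~{channels * demonstrated_gbps} Gb/s per bundle")   # ~3,000 Gb/s
```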
<p>
<strong>Cass:</strong> So what’s the market for this technology? I mean, I presume we’re not looking to see it in our phones anytime soon. So who really is spending the money for this?
	</p>
<p>
<strong>Moore:</strong> It’s funny you should mention phones—and I’ll get back to it—because it’s definitely not the first adopter, but there may actually be a role for it in there. Your likely first adopters are actually companies like <a href="https://spectrum.ieee.org/inverse-lithography" target="_self">Nvidia</a>, which I know are very interested in this sort of thing. They are trying to tie together their really super powerful GPUs as tightly as possible so that they can— in the end, ideally, they want something that will bind their chips together so tightly that it’s as if it was one gigantic chip. Even though it’s physically spread across eight racks with each server having four or eight of these chips. So that’s what they’re looking for. They need to reduce the distance, both in energy and in sort of time, to their other processor units and to and from memory so that they kind of wind up with this really tightly bound computing machine. And when I say tightly bound, the ideal is to bind them all together as one. But the truth is the way people use computing resources, what you want to do is just pull together what you need. And so this is a technology that will allow them to do that.
	</p>
<p>
		So it’s really the big iron people that are going to be the early adopters for this sort of thing. But in your phone, there’s actually a sort of bandwidth-limited pathway between your camera and the processor. And Avicena in particular is actually kind of interested in putting these together, which would mean that your camera can be in a different place than it is right now with regard to the processor. Or you could come up with completely different configurations of a mobile device.
	</p>
<p>
<strong>Cass:</strong> Well, it almost sounds like when you were talking about this idea of building essentially a computer, even kind of a CPU, even with many cores, but on the size of racks, I was thinking that reminded me of <a href="https://www.computerhistory.org/revolution/birth-of-the-computer/4/78" rel="noopener noreferrer" target="_blank">ENIAC days</a> or even IBM, the IBM 360s where the computer would take up several racks. And then we invented this cool microprocessor technology. So I guess it’s sort of one of these great <a href="https://ftp.informatik.rwth-aachen.de/jargon300/cycleofreincarnation.html" rel="noopener noreferrer" target="_blank">technological cycles</a>. But you mentioned there the idea about giant chips. That is an approach that some people are trying, these massive chips to solve this bandwidth communication problem.
	</p>
<p>
		<strong>Moore:</strong> That’s right. They are trying to solve the exact same problem at 
		<a href="https://spectrum.ieee.org/cerebrass-giant-chip-will-smash-deep-learnings-speed-barrier" target="_self">Cerebras</a>. I shouldn’t say trying. They have their solution. Their solution is to never go off the chip. They made the biggest chip you could possibly make by just making it all on one wafer, and so the signals never have to leave the chip. You get to keep that really broad pathway all the way along, and then your limit is just—a chip can only be, oh, the size of a wafer.
	</p>
<p>
<strong>Cass:</strong> How big is a wafer?
	</p>
<p>
		<strong>Moore:</strong> Oh man, it’s 300 millimeters across, but then they have to cut off the edges so you get a square. So a dinner plate, your face if you have a big head.
	</p>
<p>
<strong>Cass: </strong>So what are some of the other approaches out there to solving this issue?
	</p>
<p>
<strong>Moore:</strong> Sure. Well, if you look at— Ayar and Intel are actually a good contrast in that they’re really doing kind of the same thing. They’ve got silicon photonics designed to modulate and detect infrared laser light. And they’ve got-- each of their lasers has 8 channels or colors rather, or sometimes 16, I think, is where they’re moving to. The difference is that Ayar keeps its laser outside of the package with the GPU. And I should kind of explain something else that is indicative of why this is the right time of it. And I’ll get back to that, but my point is, Ayar keeps its laser separate. It’s almost like a utility. You wouldn’t think of putting your power converter in the same package with your GPU. Electricity is sort of like a utility. They use laser light like a utility kind of. Intel, on the other hand, is really gung ho on integrating the laser with their silicon photonics chips, and they have their own reasons for doing that. And they’ve been working on this for a while. And so you wind up with slightly different-looking configurations. Intel’s just one connection. Ayar will always have a connection from the laser to the chip and then out again once it’s been modulated. And they each have sort of their own reasons for doing that. It’s kind of hard sometimes to keep, for instance, the laser stable if you don’t tightly control the temperature it’s at. And if you’re in the package with the GPU, do you have control over the temperature? Because the GPU is doing its own thing until it feels fine about this clearly. And Ayar is just a startup, and they are just trying to get in with somebody who wants to integrate it into their own stuff. Other—
	</p>
<p>
<strong>Cass:</strong> Because that’s something you’ve reported before on the challenge of integrating photonics with silicon so you don’t have to go off-chip. But there’s kind of been a long and somewhat—don’t want to say troubled—but a challenging history there.
	</p>
<p>
<strong>Moore:</strong> Yeah, and the reason it’s become suddenly less challenging, actually, is that the world is moving towards <a href="https://spectrum.ieee.org/single-chip-processors-have-reached-their-limits" target="_self">chiplets</a>, as opposed to monolithic silicon system on chips. So even just a few years ago, everybody was just making the biggest chip they could, filling it up. Moore’s Law has been not delivering, you know, quite as much as it has in the past.
	</p>
<p>
		And so there’s a new solution. You can add silicon by finding a way to bind two separate pieces of silicon together almost as tightly as if they were one chip. And this is a packaging technology. Packaging is something that people didn’t really care about so much 10 years ago, but now it’s actually super important. So there’s 3D-packaging-type situations where you’ve got chips stacked on chips. You’ve got what are called 2-and-a-half-D, which is really— it’s 2D. But they’re within less than a millimeter of each other, and the number of connections that you can make at that scale is much closer to what you have on the chip. And then so you put these chiplets of silicon together, and you package them all in one. And that is sort of the way advanced processors are being made right now. One of those chiplets, then, can be silicon photonics, which is a completely different— it’s a different manufacturing process than you would have for your main processor and stuff. And because of these packaging technologies, you can put chips made with different technologies together and sort of bind them electrically, and they will work just fine. And so because there is this sort of chiplet landing pad now, companies like Avicena and Ayar, they have a place to go that’s kind of easy to get to.
	</p>
<p>
<strong>Cass:</strong> So you mentioned Nvidia and GPUs there, which are really now associated with sort of machine learning. So is that what’s driving a lot of this—these machine learning, deep learning things that are just chewing through enormous amounts of data?
	</p>
<p>
<strong>Moore:</strong> Yeah, the real driver is that things like ChatGPT and all of these natural language processors, which are sort of a class that are called transformer neural networks. I’m a little unclear as to why, but they are just huge. They have just ridiculous, trillions of parameters like the weights and the activations that actually sort of make up the guts of a neural network. And there’s, unfortunately, sort of no end in sight. It seems like if you just make it bigger, you can make it better. And in order to train these— so it’s not the actual— it’s not so much the running of the inferencing, the getting your answer, it’s the training them that is really the problem. In order to train something that big and have it done this year, you really need a lot of computing power. That was sort of— that was the reason for companies like Cerebras, where instead of something taking weeks, taking hours, or instead of something taking months and months, taking a couple of days means that you can actually learn to use and train one of these giant neural networks in a reasonable amount of time and, frankly, do experiments so that you can make better ones. I mean, if your experiment takes four months, it really slows down the pace of development. So that’s the real driver: training these gigantic transformer models.
	</p>
<p>
<strong>Cass:</strong> So what kind of time frame are we talking about in terms of when might we see these kind of things popping up in data centers? And then, I guess, when might we see them coming to our phone?
	</p>
<p>
<strong>Moore:</strong> Okay, so I know that Ayar Labs, that’s the startup that uses the infrared lasers, is actually working on prototype computers with partners this year. It’s unlikely that we will actually see the results of those from them. They’re just not likely to be made public. But when pressed, 2025-’26 kind of time frame, the CEO of Ayar thought was an okay estimate. It might take a little longer for others. Obviously, their first product is actually going to be just sort of a low-watt replacement for the between-the-racks kind of connections. But they promised a chiplet for in-package with the processor sort of hot on its heels. But again, the customers are gigantic. And they really have to— they really have to feel that this is a technology that is going to be good for them in the long term. So there aren’t that many. There’s Nvidia, there’s some of the giant AI computer makers, and some supercomputer makers, I imagine. So the customer list is not enormous. But it has deep pockets, and it’s probably kind of conservative. So it may be a little bit--
	</p>
<p>
<strong>Cass:</strong> Cool, and so to the phone? Ten years?
	</p>
<p>
<strong>Moore:</strong> Oh, yeah. I don’t actually know. Right now, I think that’s just sort of an idea. But we’ll see. Things could develop faster in that field than others. Who knows?
	</p>
<p>
<strong>Cass: </strong>So is there anything else you’d like to add?
	</p>
<p>
<strong>Moore:</strong> Yeah, I just want to kind of bring back that those two startups are indicative of what’s likely a larger group, some of that are— some of which are probably in stealth mode. And there’s plenty of academic research on doing this in totally different ways like using surface plasmons, which are sort of waves of electrons that occur when light strikes a metal surface, with the idea of being able to basically use smaller, less fiddly components to get the same-- to get the same thing done because you’re using the waves of electrons rather than the light itself. But yeah, I look forward to honestly seeing what else people come up with because there’s clearly more than one way to skin this cat.
	</p>
<p>
<strong>Cass: </strong>And they can follow your coverage in the pages of <em>Spectrum</em> or online.
	</p>
<p>
<strong>Moore:</strong> Yes, indeed.
	</p>
<p>
<strong>Cass:</strong> So that was great, Sam. Thank you. Today on <em>Fixing the Future</em>, we were talking with Sam Moore about the competition to build the next generation of high-speed interconnects. I’m Stephen Cass for <em>IEEE Spectrum</em>, and I hope you’ll join us next time.
	</p>
</div>]]></description><pubDate>Thu, 13 Apr 2023 19:40:03 +0000</pubDate><guid>https://spectrum.ieee.org/photonics-and-ai</guid><category>Type-podcast</category><category>Ai</category><category>Photonics</category><category>Fixing-the-future</category><dc:creator>Samuel K. Moore</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/33465189/origin.webp"/></item><item><title>Functional Programming: The Biggest Change Since We Killed the Goto?</title><link>https://spectrum.ieee.org/functional-programming-biggest-change</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=33385326&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/da246c70" width="100%"></iframe><h3>Transcript</h3><p><strong>Stephen Cass:</strong> Welcome to <em>Fixing the Future</em>, an <em>IEEE Spectrum</em> podcast. I’m senior editor Stephen Cass, and this episode is brought to you by IEEE Xplore, your gateway to trusted engineering and technology research, with nearly 6 million documents of research and abstracts. Today we are talking with <a href="https://www.linkedin.com/in/cscalfani/" target="_blank">Charles Scalfani</a>, CTO of <a href="https://panosoft.com/" target="_blank">Panoramic Software</a>, about how adopting functional programming could lead to cleaner and more maintainable code. Charles, welcome to <em>Fixing the Future</em>.</p><p><strong>Charles Scalfani</strong>: Thank you.</p><p><strong>Cass: </strong>So you recently wrote an <a href="https://spectrum.ieee.org/functional-programming" target="_self">expert feature for us that turned out to be incredibly popular</a> with readers, which argued that we should be adopting this thing called functional programming. Can you briefly explain what that is?</p><p><strong>Scalfani:</strong> Okay. Functional programming is an older version of programming, actually, than what we do today. It is, as the name says, basically based around functions. So where object-oriented programming has an object model, where you see everything through the lens of an object, and the whole world is an object, and everything in that world is an object, in functional programming it’s similar: you see everything as a function, and everything in the world looks like a function. You solve all your problems with functions. 
The reason it’s older and wasn’t adopted is that the ideas, the mathematics, everything were there; the hardware just couldn’t keep up with it. So it became relegated to academia, because the hardware just wasn’t available to do all of those things. Since probably the ’90s, that hasn’t been a problem anymore.</p><p><strong>Cass:</strong> So I just wanted to ask, as somebody who is, I would call myself a kind of a very journeyman programmer. So one of the first things I learn when I’m using a new language is usually the section that says how to define a function, and there’s a little— you know, everybody’s got it, Python’s got it, you know, even some versions of Basic used to have it, C has it. So I think function here means something different from those functions I’m used to in something like C or Python.</p><p><strong>Scalfani:</strong> Yeah. I have a joke I always tell: when I learned C, the first program I wrote was “hello world.” And when I learned Haskell, a functional programming language, the <em>last</em> thing I learned was “hello world.” And so with C, your first “hello world” was a print function, something that printed to the console, and you could say, “yay, I got my first C program working. Here it is.” But the complexity of doing side effects and IO and all of that is such that it gets pushed aside for just pure functional programming. What does that look like? How do you put functions together? How do you compose them? How do you take these smaller pieces and put them all together? And the idea of side effects is something that’s more advanced. And so when you get into a standard language, you just, kind of, jump in and start writing— everybody writes the “hello world,” thanks to Kernighan and Ritchie, <a href="https://en.wikipedia.org/wiki/The_C_Programming_Language" target="_blank">what they did in their book</a>, but you really don’t get to do that for a very long time. 
In fact, in the <a href="https://leanpub.com/fp-made-easier" rel="noopener noreferrer" target="_blank">book that I wrote</a>, it isn’t until hundreds of pages in that you actually get to put something on the screen. It’s relegated to the fourth section of the book. So that is a difference. Side effects, where you can affect the world, are very standard in imperative languages, the languages that everybody uses: C, and Java, and JavaScript, and Python, and you name it, the standard languages.</p><p>And that’s why it’s very easy, when you first learn a language, to just hop in and feel like you’re able to do lots of stuff, and get lots of things done very quickly. And that gets kind of deferred in a functional language. You tend to learn that later. So the kinds of functions that we deal with in functional languages are called pure functions. They’re very different from how we think of functions in programming today, but more how you think of functions in math. Right? So you have inputs, you have processing that happens in the function, computations that are going to occur in that function, and then you have those outputs. And that’s all. You don’t get to manipulate the world in any way, shape, or form.</p><p><strong>Cass:</strong> So I want to get back into a little bit of that tutorial on how you get started up on stuff. But it sounds to me a little bit like, I’m searching for a model from my previous experience. It sounds to me a little bit like kind of the Unix philosophy of piping very discrete little utility programs together, and then getting results at the end. And that kind of philosophy.</p><p><strong>Scalfani:</strong> Yes. Yeah. That’s a great example. That’s like composing functions using pipes— I’m sorry, composing programs using pipes, and we compose functions in the very same way. 
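The pipeline analogy can be sketched in a few lines of Python; the `compose` helper and the little line-processing functions below are our own illustration, not an example from Scalfani’s book.

```python
from functools import reduce

def compose(*fns):
    """Chain single-argument pure functions left to right, like a Unix pipeline."""
    return lambda x: reduce(lambda acc, fn: fn(acc), fns, x)

# Each function does one small thing, like a small Unix utility.
def strip_blanks(lines):
    return [line for line in lines if line.strip()]

def lowercase(lines):
    return [line.lower() for line in lines]

# grep -v '^$' | tr 'A-Z' 'a-z' | sort, but with functions instead of programs.
pipeline = compose(strip_blanks, lowercase, sorted)
print(pipeline(["Banana", "", "apple", "Cherry"]))  # ['apple', 'banana', 'cherry']
```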
And the power of being able to do that, the power they figured out back in Unix, to be able to just say, well, I’ll write this very simple little program that just does one little thing, and then I’ll just take its output and feed it into the next. And it does one little thing. And it’s exactly the same thing, just at a smaller level. Because you’re dealing with functions and not full programs.</p><p><strong>Cass:</strong> Got it. But this does seem like a fairly big cultural shift where you’re telling people, you don’t even get to print until you’re halfway through the book and so on. But I think this is something you raised in the article. We have asked programmers before to make fairly big shifts, and the benefits have been immense. And the one you talk about is getting rid of goto, whereby, you know, in the beginning, we all went, you know, ten, goto, whatever. And it was this goto palooza. And then we kind of realized that <a href="https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.pdf" rel="noopener noreferrer" target="_blank">goto had some problems</a>. But even though it was this very simple tool that every programmer used, we’ve kind of mostly weaned ourselves off goto. Can you talk a little bit about sort of the parallels between saying bye bye to goto and maybe saying bye bye to some of this imperative stuff? And these things like side effects, and then maybe talk a little bit about what you mean about global state, and then— because I think that will perhaps illuminate a little bit more about what you mean about side effects.</p><p><strong>Scalfani: </strong>When I started in programming, it was way back in, you know, ’78, ’79, around that time, and everything was a go— you had Basic, a machine with 8K of RAM. That was it. K. You didn’t have room to do all the fancy stuff we can do today. And so you had to try to make things as efficient as possible. 
And it really comes from <a href="https://www.tutorialspoint.com/assembly_programming/assembly_conditions.htm" rel="noopener noreferrer" target="_blank">branching down in the assembly language</a>, right? Everybody was used to doing that: just jump over here and do this thing, and then jump back maybe, or return from a subroutine, and you had very little machine power to do things. So goto came out of assembly language. And as it got into higher- and higher-level languages, and as things got more complicated, then you wound up with what’s called spaghetti code, because you can’t follow the code. It’s like trying to follow a strand of spaghetti in a bowl of spaghetti. And so you’re like, well, this is jumping to this and that’s jumping to this, and you don’t even remember where you were anymore. And I remember looking at code like that, mostly written in assembly language.</p><p>And so as structured languages came about, people realized that if we could have this kind of branching but do it in a way in which we could abstract it, we could think about it at a more abstract level than down in the details. And so if you look at that, I use it as an example because I look to the past to try to figure out what we’re doing today. If we take imperative languages and we move to functional, we are giving up a lot of things. You can’t do this and you can’t do that. You don’t do side effects. You don’t have global state. There’s all these things that you— there’s no such thing as a <a href="https://en.wikipedia.org/wiki/Null_pointer" rel="noopener noreferrer" target="_blank">null pointer</a> or a null value. Those things don’t exist in this way of thinking. And it’s like you have to ask yourself, wait, wait, I’m giving up these things that I’m very familiar with, and, well, how do you do things then in this new way? And is it beneficial, or is it just a burden? So at first, it feels like a burden, an absolute burden. 
It’s going to, because you’re so used to falling back on these old ways of doing things and old ways of thinking. And especially when I— I was like 36 years or 30-some-odd years into programming in imperative languages, and then all of a sudden I’m thinking functionally. And now I have to change my whole mode of thinking. And you really have to say, well, is it beneficial?</p><p>So I kind of look to the past. Getting rid of the goto was highly beneficial, and I would never advocate bringing it back. And people did comment on the article saying, “well, yeah, <em>these</em> languages have goto,” but not the goto I’m talking about. They still have these kinds of controlled gotos in C, not where you could just jump to the middle of anywhere. And that’s the way things were back in the day. So, yeah, things were pretty wild back then. And we wrote much simpler bits of software. You didn’t use libraries. You didn’t always run in operating systems. I did a lot of embedded coding in the early days. And so you wrote everything. It was all your own code. And back then, you might have written, I don’t know, maybe a thousand lines of code. And now we’re working in millions of lines of code. So it’s a very different world, but when we came out of that early stage, we started shedding these bad habits. And we haven’t done that over time. And I think you have to shed some bad habits to move to functional.</p><p><strong>Cass:</strong> So I do want to really get into what the benefits of functional programming are, especially with, I think, the idea of thinking about maintenance instead of sort of the white-hot moment of creation, where everybody loves to write that first draft, really thinking about how software is used. But I did just want to unpack a sentence there. And it’s something that also comes from C, and it’s not necessarily something that is baked into assembly in the same way, but it does come into C, which is this idea of the null pointer. You mentioned the null. 
And can you talk just a little bit about the null and why it causes so many problems, not just for C, but for all of the, as you call them, curly bracket languages that inherit from it?</p><p><strong>Scalfani:</strong> Right. So most of those languages, they all support this idea of a null. That is, you don’t have anything. So you either have a value or you don’t have a value. And it’s sort of just this idea that every reference to something could potentially be no reference, right? You have no reference. So think of it as kind of an empty bucket, right?</p><p><strong>Cass: </strong>Just for readers who are maybe not familiar. So a pointer is something that points to a bit of memory where some piece of information is stored. And usually at that location, there’s a valid value. But sometimes there’s just junk. And so a null pointer kind of helps you tell, ideally, whether the pointer is pointing to something useful or it’s pointing to junk? Would that be kind of a fair summary, or am I butchering it a little?</p><p><strong>Scalfani: </strong>Yeah, I think at the lowest level, like if you think about C or assembly, you always have a value somewhere, right? And so what you would do is you would say, okay, so they always point to something. But if I have an address of zero at the very lowest level here, if I have an address— so if my register has a value of zero in it, and I usually use that register to dereference memory, to point to someplace in memory, then that’s going to be treated specially as, oh, that’s not pointing anywhere in particular. There is no value that I’m referencing. So it’s a none: I have no reference. I have nothing, basically, in my hands.</p><p><strong>Cass: </strong>So it’s not that something is there, it’s just that the language is trained that if I see a zero, that’s a flag: there’s nothing there.</p><p><strong>Scalfani:</strong> Right. Right. 
Exactly, exactly.</p><p><strong>Cass: </strong>And then so then how does this then— so that sounds like a great idea. Wonderful. So how does this then—</p><p><strong>Scalfani:</strong> It is.</p><p><strong>Cass: </strong>Well, how does this cause problems later on? I’ve got this magic number that tells me that it’s bad stuff there. Why does this thing cause problems? And then how can functional programming really help with that?</p><p><strong>Scalfani: </strong>Okay. So the problem isn’t in this idea. It’s sort of a hack. It’s like, oh, well, we’ll just put a zero in there. And then we’ll have to— so that was, okay, that solved that problem. But now you’re just kicking the can. So everywhere down the road where you’re dealing with this thing, now everybody has to check all the time. Right? And it’s not a matter of having to check, because the situation where you have something or you don’t have something is a valid situation, right? So that’s a perfectly valid thing. But it’s when you forget to check that you get burned. And it’s not built into most of the languages to where it does the checking for you, so you have to say, oh, well, if this thing is a null, or if it’s not a null, then do this. There are all these if checks. And you just pollute your code with all the checks everywhere. Now, functional programming doesn’t eliminate that. It’s not magic. It doesn’t eliminate it. But many of the functional languages, at least the ones that I’ve worked in, have this concept of a maybe, right? So a maybe is, it can either be nothing, or it can be just something. Other languages call it an option. But it’s the same idea. And so you either have nothing, or you just have this value. 
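A rough Python analogue of the maybe/option idea (Python won’t refuse to compile the way Haskell does, but the shape is the same; the `Just`, `Nothing`, `fold`, and `safe_div` names below are our own illustration):

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

T = TypeVar("T")
R = TypeVar("R")

@dataclass
class Just(Generic[T]):   # "I have just this value"
    value: T

class Nothing:            # "I have nothing" -- no payload at all
    pass

def fold(maybe, if_nothing: Callable[[], R], if_just: Callable[[T], R]) -> R:
    # The only way to get at the value: supply a handler for BOTH cases.
    return if_just(maybe.value) if isinstance(maybe, Just) else if_nothing()

def safe_div(a: float, b: float):
    return Nothing() if b == 0 else Just(a / b)

print(fold(safe_div(10, 2), lambda: "no result", lambda v: f"result = {v}"))  # result = 5.0
print(fold(safe_div(10, 0), lambda: "no result", lambda v: f"result = {v}"))  # no result
```

Because `fold` demands both handlers, there is no way to "forget the null check" the way there is with an unchecked pointer.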
And because of that— because of the way that that’s implemented, and I won’t go into gory details, but because of it, the compiler won’t compile if you didn’t handle both cases.</p><p>And so you’re forced to always handle it, as opposed to the null, where you can choose to handle it or not, and you could choose to forget it, or you could not even know that it could be a null, and you could just assume you have a good value all the time. And then you don’t know until you’re running your program that, oh, you made a mistake. The last place you want to find out is in production, when you hit a piece of code that is run rarely, but you didn’t do your null check, and then it crashes in production and you’ve got problems. With the maybe, you don’t have a choice. You can’t compile it. You can’t even build your program. It really is a great tool. And many times, I still don’t like the maybe. Because it’s like, ugh, I have to handle maybe. Because it forces your hand. You don’t have a choice. Ideally, yes, that’s the right thing, but I still grumble.</p><p><strong>Cass:</strong> I mean, I think the tendency is always to take the shortcut because you think to yourself, oh, this will never— This will never be wrong. It’s <em>fine</em>. I mean, I do this all the time. I know, when I write even the limited code I do, that I should be checking a return value. I should be writing it so that, if something goes wrong, it returns an error value, and I should be checking for that error value. But do I do that? No, I just carry on my merry way.</p><p><strong>Scalfani:</strong> Because we know better, right? We know better.</p><p><strong>Cass:</strong> Right. So I do want to talk a little bit about the benefits, then, that functional programming can bring. And you make the case for some of these concrete benefits. And especially when it comes to maintenance.
And as I say, I think one of the charges that’s fairly laid against maybe sort of the software enterprise as a whole is that it’s great at creating stuff and inventing stuff, but not so good at maintaining stuff, even though there are examples we have of code, very important code that runs very important systems, that sits around for decades. So maintainability is actually super important. So can you talk a little bit about those benefits, especially with regard to maintainability?</p><p><strong>Scalfani: </strong>Yeah. So I think, before you even get into maintainability, there’s always the architectural phase, right? You want to model the problem well. So you want to have a language that can really aid you in the proper modeling of your types, so that you can model the domain. So that’s the first step, because you can write bad code in any language, right? In any technology, you can destroy it. No matter how great the technology is, you can wreak havoc with it. So no technology is magical in that it’s going to keep you from doing bad things. The trick about technology is that you want it to help you do good things, and encourage you and make it easy to do those good things. So that’s the first step, is to have a language that’s really good about modeling. And then the next thing is— we haven’t talked about global state, but you need to control the global state in your program. And in the early days, going back to assembly, every variable, every memory location is global, right? There is no local. The only local data you might have is if you allocated memory on a stack, or if you have registers and you pushed your old registers as you went into a subroutine, things like that. But basically everything was global.</p><p>And as languages have been progressing, we’ve been making things more local: what’s in scope. Who has access to this variable? Who doesn’t have access to the variable?
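The state-passing style Scalfani goes on to describe can be sketched in TypeScript; the counter example below (nextLabel, twoLabels) is invented for illustration. Rather than mutate a global, each function receives the current state and hands back its result together with the new state:

```typescript
// No global counter: each call receives the state and returns
// its result paired with the updated state.
function nextLabel(n: number): [string, number] {
  return ["item-" + n, n + 1];
}

// Threading the state through two calls by hand.
function twoLabels(s0: number): [string, string, number] {
  const [a, s1] = nextLabel(s0);
  const [b, s2] = nextLabel(s1);
  return [a, b, s2];
}
```

Because nothing is mutated, who can see the state is exactly who was handed it, which is the scoping discipline the conversation is driving at.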
And if you just follow that line, as you get to functional programming, you control your global state, right? And so there is no global state. You actually are passing state around all the time. A lot of modern, say, JavaScript frameworks do a lot of that. They’ve taken a lot architecturally from functional programming. React is one: it’s a matter of how do you control your state? And that’s been a problem in the browser since day one. So controlling the state is another important thing. And why am I mentioning these other things about maintainability? Because if you get these things right, it aids in your maintainability, right? There’s nothing that’s going to fix logic problems. There’s always logic, right? And if you make a logic mistake, there’s nothing there. You just made the wrong call. No language is going to save you, because it’s got to be powerful enough that you can make those mistakes. Otherwise, you can’t build all the things.</p><p>But what it can do is restrict you: you can’t make <em>this</em> mistake, and you can’t make <em>that</em> mistake, and you won’t make this mistake. It restricts you in the mistakes, right? And it makes it easy to do the other things. And that’s where the maintainability really comes in, I think: the ability to create a system where you’ve got the proper modeling of the problem and you’ve properly managed your state. Because really, what are you maintaining software for? You’re fixing problems, right? Or you’re adding features. That’s all there really is. So if you’re spending all your time fixing problems, then you don’t have time to add any features. And I found that in the old days we spent more time fixing problems than adding new features. Why? Because why are you adding features when you have bugs, right? So you have to fix the bugs first.
So when we moved to functional programming, I found that, yeah, we still have logic problems here and there. I mean, we’re still human, but most of our time was spent thinking about new features. We would put something into production. You’ve got to have good QA, no matter how great the language is. But if you have good QA and you do your job right, and you have a good solid language that helps you architect it correctly in the first place, then you don’t think, oh, I have all these bugs all the time, or these crashes in production. You just don’t have crashes in production. Most of that stuff’s caught before that. The language doesn’t let you paint yourself into a corner.</p><p>So there’s a lot of those kinds of things. So you’re like, oh, well, what can I add? Oh, let’s add this new feature. And that’s real value add, at the business level, because at the end of the day, it doesn’t matter how cool some technology is. If it doesn’t have a bottom-line return on investment, there’s no sense in doing it. Unless it’s a hobby, but for most of us, it’s a job, and the bottom line of the business matters. And the bottom line of the business is you want to make improvements to your product so you can either get greater market share or keep your customers happy and keep them from moving to competitors who can add features to their products, and so forth. So I think the maintainability part comes, originally, with a really good initial implementation.</p><p><strong>Cass:</strong> So I want to get into that idea of implementations. So oftentimes, when I think about— maybe in the past, I’ve thought about functional languages. And I have thought about them in this kind of academic way, or else as things that live in deep black boxes way down in the system.
But you have been working on <a href="https://www.purescript.org/" rel="noopener noreferrer" target="_blank">PureScript</a>, which is something that is directly applicable to web browsers. When I think about advanced, clever mathematical code models, browsers are not necessarily what I would associate them with. That’s kind of a very fast and loose environment, historically. So can you talk a little bit about PureScript and how people can get a little bit of experience with it?</p><p><strong>Scalfani:</strong> PureScript is a statically typed, purely functional language that has its lineage from <a href="https://www.haskell.org/" rel="noopener noreferrer" target="_blank">Haskell</a>, which started as an academic language. And it compiles into <a href="https://en.wikipedia.org/wiki/JavaScript" rel="noopener noreferrer" target="_blank">JavaScript</a> so that it can run in the browser, but it also can run on the back end, running in Node. Or you can write it and have your program run in Electron, which is like a desktop application. So pretty much everywhere JavaScript works, you can get PureScript to work. I’ve done it in backends, and I’ve done it in browsers. I haven’t done it in Electron yet, but it’s pretty academic. So that’s totally doable. I know other people have done it. So it doesn’t get more run-of-the-mill than programming for the browser, right? And JavaScript is a pretty terrible language, honestly. It’s terrible in so many ways, because you can shoot your foot off in so many different ways in JavaScript. And every time I have to write a little bit of JavaScript, just the tiniest bit of JavaScript, I’m always getting burned.</p><p>So anyway, what is a pure functional language? A pure functional language is one where all your functions are pure, and a pure function is what I talked about earlier. It only has access to the inputs to the function, it does its computations, and it has its outputs.
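A pure function of the kind described here can be sketched in TypeScript; addTwo and the compose helper below are illustrative names, not from any library:

```typescript
// Pure: the output depends only on the input; nothing outside
// the function is read or changed.
const addTwo = (x: number): number => x + 2;

// Purity makes composition mechanical: feed one output into the next.
const compose = <A, B, C>(f: (b: B) => C, g: (a: A) => B) =>
  (a: A): C => f(g(a));

const addFour = compose(addTwo, addTwo);
```

Because addTwo touches no outside state, composing it with itself is as safe as piping one small Unix program into another, the analogy raised earlier in the conversation.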
So that’s kind of like what we did in math, right? You have a function, <em>f</em> of <em>x</em>, <em>x</em> gets some value, and maybe your function is <em>x</em>+2, and so it takes the <em>x</em>, it adds two to it, and the result is whatever that value is, right? Whatever the computation is. So that’s what a purely functional language is. It’s completely pure. PureScript, Haskell, Elm: these are all languages that are pure. They don’t compromise. And then there are languages that are hybrids, right? Compromised languages are really great in the beginning, but you can easily lose out on all the benefits. It’s the same thing with the goto, right? If we relegated goto to, like, okay, we’re going to stick it in this corner and you sort of don’t want to use it, that doesn’t stop you from pulling it off the shelf and using it all day, right? So it’s best to just eliminate something and not compromise. Not have a compromise language. To me, Scala is a compromise language. It’s not fully functional. And there are lots like that. Clojure, I believe, has functional concepts, and even JavaScript. JavaScript, actually, for me, was my introduction to functional programming. There are functional concepts in JavaScript.</p><p>And I thought JavaScript was the best thing since sliced bread when I had those things. I didn’t know they were functional at the time, but I’m like, this is something that I’ve been looking for for years, and I finally have it in this language called JavaScript, and I can pass a function as a parameter. I mean, I wanted that for decades. And all of a sudden, I could do it. And so I’m a big proponent of purely functional languages because of that. Because hybrids don’t work well. All you need is a single library where the author didn’t use all the benefits, and all of a sudden, now your whole thing is messed up. Whatever you’ve built is tainted by this library that isn’t pure, let’s say.
So I think that the benefits of Haskell and PureScript being fully pure are really great. The complication is, you have to think very differently because of that, because we’re not used to thinking that way. There’s all these extra things that have to be built that are all part of the libraries that make that much, much easier. But then you have to understand the concepts. So I hope that explains PureScript a little bit.</p><p><strong>Cass:</strong> Well, I literally could go back and forth with you all day because this really is truly fascinating, but I’m afraid we’re out of time. So I do very much want to thank you for talking with us today.</p><p><strong>Scalfani: </strong>Great. Thank you. It was fun.</p><p><strong>Cass: </strong>Yeah. It really was. So today in <em>Fixing the Future</em>, we were talking with Charles Scalfani about functional programming and creating better code. I’m Stephen Cass of <em>IEEE Spectrum,</em> and I hope you’ll join us next time. </p>]]></description><pubDate>Thu, 30 Mar 2023 20:31:25 +0000</pubDate><guid>https://spectrum.ieee.org/functional-programming-biggest-change</guid><category>Type-podcast</category><category>Fixing-the-future</category><category>Javascript</category><category>Functional-programming</category><category>Haskell</category><dc:creator>Stephen Cass</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/33385326/origin.webp"/><enclosure length="623785" type="application/pdf" url="https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.pdf"/><itunes:explicit/><itunes:subtitle>Transcript Stephen Cass: Welcome to Fixing the Future, an IEEE Spectrum podcast. I’m senior editor Stephen Cass, and this episode is brought to you by IEEE Xplore, your gateway to trusted engineering and technology research with nearly 6 million documents with research and abstracts.
Today we are talking with Charles Scalfani, CTO of Panoramic Software, about how adopting functional programming could lead to cleaner and more maintainable code. Charles, welcome to Fixing the Future. Charles Scalfani: Thank you. Cass: So you recently wrote an expert feature for us that turned out to be incredibly popular with readers. That argued that we should be adopting this thing called functional programming. Can you briefly explain what that is? Scalfani: Okay. Functional programming is an older version of programming, actually, than what we do today. It is basically, as it says, it’s basically based around functions. So where object oriented programming is has an object model, where it’s everything— you see everything through the lens of an object, and the whole world is an object, and everything in that world is an object. In functional programming, it’s the similar, it’s you see everything as a function, and the whole world looks like— everything in the world looks like a function. You solve all your problems with functions. The reason it’s older and wasn’t adopted is because the ideas were there, the mathematics, the ideas, and everything were there, the hardware just couldn’t keep up with it. So it became relegated to academia and the hardware just wasn’t available to do all of the things. That has been, since probably the 90s, it’s been not a problem anymore. Cass: So I just wanted to like, as somebody who is, I would call itself a kind of a very journeyman programmer. So one of the first things I learned when I’m using a new language is usually the section says, how to define a function, and there’s a little— you know, everybody’s got it, Python’s got it, you know, even some versions of Basic used to have it, C has it. So I think function here means something different to those functions I’m used to in something like C or Python. Scalfani: Yeah. 
I have a joke that I always say is that when I learned C, the first program I wrote was “hello world.” And when I learned Haskell, a functional programming language, the last thing I learned was “hello world.” And so you really, with C, you did, your first “hello world” was a print function, something that printed to the console, and you could say, “yay, I got my first C program working. Here it is.” But the complexity of doing side effects and IO and all of that is such that it gets pushed aside for just pure functional programming. What does that look like? How do you put functions together? How do you compose them? How do you take these smaller pieces and put them all together? And the idea of side effects is something that’s more advanced. And so when you get into a standard language, you just, kind of, jump in and start writing— everybody writes the “hello world,” thanks to Kernighan and Ritchie, what they did in their book, but you really don’t get to do that for a very long time. In fact, in the book that I wrote, it isn’t for hundreds of pages before you actually get to putting something on the screen. It’s relegated to the fourth section of the book. So it is a difference in that. Side effects where you can affect the world is very standard in imperative languages. The languages that everybody uses C, and Java, and JavaScript, and Python and you name it, the standard languages. And that’s why it’s very easy when you first learn a language is just hop in and feel like you’re able to do lots of stuff, and get lots of things done very quickly. And that gets kind of deferred in a functional language. You tend to learn that later. So the kinds of functions that we deal with in functional languages were called pure functions. They’re very different than how we think of functions in programming today, but more how you think of functions in math. Right? 
So you have inputs, you have processing that happens in the function, computations that are going to occur in that function, and then you have those outputs. And that’s all. You don’t get to manipulate the world in any way, shape, or form. Cass: So I want to get back into a little bit of that tutorial on how you get started up on stuff. But it sounds to me a little bit like, I’m searching for a model, my previous model of experience. It sounds to me a little bit like kind of the Unix philosophy of piping very discrete little utility programs together, and then getting results at the end. And that kind of philosophy. Scalfani: Yes. Yeah. That’s a great example. That’s like composing functions using pipes— I’m sorry, composing programs using pipes, and we compose functions in the very same way. And the power of being able to do that, the power they figured out back in Unix, to be able to just say, well, I’ll write this very simple little program that just does one little thing, and then I’ll just take its output and feed it into the next. And it does one little thing. And it’s exactly the same thing, just at a smaller level. Because you’re dealing with functions and not full programs. Cass: Got it. But this does seem like a fairly big cultural shift where you’re telling people, you don’t even get to print until you’re halfway through the book and so on. But I think this is something you raised in the article. We have asked programmers before to do, make fairly big shifts, and the benefits have been immense. And the one you talk about, is getting rid of goto, whereby, you know, in the beginning, we all, you know, ten, goto, whatever. And it was this goto palazza. And then we kind of realized that goto had some problems. But even though it was this very simple tool that every program are used, we’ve kind of mostly weaned ourselves off goto. 
Can you talk a little bit about sort of the parallels between saying bye bye to goto and maybe saying bye bye to some of this imperative stuff? And these things like side effects and then maybe talk a little bit about what you mean about like global state, and then— because I think that will perhaps illuminate a little bit more about what you mean about side effects. Scalfani: When I started in programming it was way back in you know 78, 79, around that time and everything was a go— you had Basic, a machine with 8K of RAM. That was it. K. You didn’t have you didn’t have room to do all the fancy stuff we can do today. And so you had to try to make things as efficient as possible. And it really comes from branching down in the assembly language, right? Everybody was used to doing that, goto the, just jump over here and do this thing and then jump back maybe or return from a subroutine and you had very little machine power to do things. So goto came out of assembly language. And as it got in the higher and higher level languages, and as things got more complicated, then you wound up with what’s called spaghetti code, because you can’t follow the code. It’s like trying to follow a strand of spaghetti in a bowl of spaghetti. And so you’re like, well this is jumping to this and that’s jumping to this and you don’t even remember where you were anymore. And I remember looking at code like that and mostly written in assembly language. And so as structured languages came about, people realized that if we could have this kind of branching but do it in a do it in a way in which we could abstract it. We could think about it in a more abstract level than down in the details. And so if you look at that, I use it as an example because I look to the past to try to figure out what are we doing today? If we take imperative languages and if we move to functional, we are giving up a lot of things. You can’t do this and you can’t do that. You don’t do side effects. 
You don’t have global state. There’s all these things that you— there’s no such thing as a null pointer or a null value. Those things don’t exist here in this way of thinking. And it’s like you have to ask yourself, wait, wait, I’m giving up these things that I’m very familiar with and well, how do you do things then in this new way? And is it beneficial or is it just a burden? So at first, it feels like a burden, an absolute burden. It’s going to because you’re so used to falling back on these old ways of doing things in old ways of thinking. And especially when I— I was like 36 years or 30 some odd years into programming and imperative languages, and then all of a sudden I’m thinking functionally. And now I have to change my whole mode of thinking. And you really have to say, well, is it beneficial? So I kind of look to the past. Getting rid of the go to was highly beneficial. And I would never advocate for it back. And people did comment on the article saying, “well, yeah, these languages have goto,” but not the goto I’m talking about. They still have these kind of controlled gotos in C, not where you could just jump to the middle of anywhere. And that’s the way things were back in the day. So, yeah, things were pretty wild back then. And we wrote much simpler bits of software. You didn’t use libraries. You didn’t run in operating systems always. I did a lot of embedded coding in the early days. And so you wrote everything. It was all your own code. And now, you might have written, I don’t know, maybe you wrote a thousand lines of code. And now we’re working in millions of lines of code. So it’s a very different world, but when we came out of that early stage, we started shedding these bad habits. And we haven’t done that over time. And I think you have to shed some bad habits to move to functional. 
Cass: So I do want to talk really getting into the benefits of functional programming are, especially with, I think, the idea of like thinking about maintenance instead of sort of the white hot moment of creation that everybody loves to write that first draft, really thinking about how software is used. But I did just want to unpack a sentence there. And it’s something that also comes from C, and it’s not necessarily something that is baked into assembly in the same way, but it does come in to C, which is this idea of the null pointer. You mentioned the null. And can you talk just a little bit about the null and why it causes so much problems, not just for C, but for all of the sort of, as you call them, curly bracket languages that inherit from it. Scalfani: Right. So in most of those languages, they all support this idea of a null. That is you don’t have anything. So you either have a value or you don’t have a value. And it’s not— it’s sort of like just this idea of that every reference to something could be potentially not— have no reference, right? You have no reference. So think of a plan of an empty bucket, right? Cass: Just for maybe readers who are not familiar. So a pointer is something that points to a bit of memory where something of information is stored. And usually at that point, there’s a valuable number. But sometimes there’s just junk. And so a null pointer kind of helps you tell, ideally, what are the pointers pointing to something useful or it’s pointing to to junk? Would that be kind of a fair summary or am I butchering it a little? Scalfani: Yeah, I think at the lowest level, like if you think about C or assembly, you always have a value somewhere, right? And so what you would do is you would say, okay, so they always point to something. 
But if I have an address of zero at the very lowest level here, if I have an address— so if my register has a value of zero in it, and I usually use that register to dereference memory to point to someplace in memory, then just that’s going to be treated specially as, oh, that’s not pointing anywhere in particular. There is no value that I’m referencing. So it’s a non, I have no reference. I have nothing, basically, in my hands. Cass: So it’s not something there, it’s just the language is trained that if I see a zero, that’s a flag, there’s nothing there. Scalfani: Right. Right. Exactly, exactly. Cass: And then so then how does this then— so that sounds like a great idea. Wonderful. So how does this then— Scalfani: It is. Cass: Well, how does this cause problems later on? I’ve got this magic number that tells me that it’s bad stuff there. Why does this thing cause problems? And then how can functional programming really help with that? Scalfani: Okay. So the problem isn’t in this idea. It’s sort of a hack. It’s like, oh, well, we’ll just put a zero in there. And then we’ll have to— so that was, okay, that solved that problem. But now you’re just kicking the can. So everywhere down the road where you’re dealing with this thing, now everybody has to check all the time. Right? And it’s not a matter of having to check, because the situation of where you have something or you don’t have something is something that’s valid situation, right? So that’s a perfectly valid thing. But it’s when you forget to check that you get burned. And it’s not built into most of the languages to where it does the checking for you and you have to say, oh, well, this thing is a null or if it’s not a null, then do this you. There’s all these if checks. And you just pollute your code with all the checks everywhere. Now, functional programming doesn’t eliminate that. It’s not magic. It doesn’t eliminate it. 
But many of the functional languages, at least the ones that I’ve worked in, they have this concept of a maybe, right? So a maybe is, it can either be nothing, or it can be just something. And it’s other languages call it an option. But it’s the same idea. And so you either have nothing, or you just have this value. And because of that, it forces— because of the way that that’s implemented, and I won’t go into gory details, but because of it, they force you to the compiler won’t compile if you didn’t handle both cases. And so you’re forced to always handle it, as opposed to the null, you can choose to handle it or not, and you could choose to forget it, or you could go— you could not even know that it could be a null, and you could just assume you have a good value all the time. And then you don’t know until you’re running your program that, oh, you made a mistake. The last place you want to find out is in production when you hit a piece of code that is run rarely, but then you didn’t do your null check, and then it crashes in production and you’ve got problems. With the maybe, you don’t have a choice. You can’t compile it. You can’t even build your program. It really is a great tool. And many times, I still don’t like the maybe. Because it’s like, ugh, I have to handle maybe. Because it forces your hand. You don’t have a choice. Ideally, yes, that’s the right thing, but I still grumble. Cass: I mean, I think the tendency is always to take the shortcut because you think to yourself, oh, this will never— This will never be wrong. It’s fine. I mean, I just all the time. I know when I write even the limited— I know I should be checking a return value. I should be writing it so that it returns. If something goes wrong, it should return an error value, and I should be checking for that error value. But do I do that? No, I just carry on my merry way. Scalfani: Because we know better, right? We know better. Cass: Right. 
So I do want to talk a little bit about the benefits, then, that functional programming can build. And you make the case for some of these concrete benefits. And especially when it comes to maintenance. And as I say, I think, one of the charges that’s fairly laid against maybe sort of the software enterprise as a whole is that it’s great at creating stuff and inventing stuff, but not so good at maintaining stuff, even though there are examples we have of code, very important code that runs very important systems, that sits around for decades. So maintainability is kind of actually super important. So can you talk a little bit about those benefits, especially with regard to maintainability? Scalfani: Yeah. So I think, so before you even get into maintainability, there’s always the architectural phase, right? You want to model the problem well. So you want to have a language that can do really— can really aid you in the proper modeling of your types. And so that you can model the domain. So that’s the first step, because you can write bad in any code, right? In any technology, you can destroy it. No matter how great the technology is, you can wreak havoc with it. So no technology is magical in that it’s going to keep you from doing bad things. The trick about technology is that you want it to help you do good things. And encourage you and make it easy to do those good things. So that’s the first step, is to have a language that’s really good about modeling. And then the next thing is you want to— we haven’t talked about global state, but you need to control the global state in your program. And in the early days, going back to assembly, every variable, every memory location is global, right? There is no local. The only local data you might have is if you allocated memory on a stack, or if you have registers and you pushed your old registers as you went into a subroutine, things like that. But basically everything was global. 
And so we’ve been we’ve been, as languages have been progressing, we’ve been making things more local, what’s in scope. Who has access to this variable? Who doesn’t have access to the variable? And the more, if you just follow that line as you get to functional programming, you control your global state, right? And so there is no global state. You actually are passing state around all the time. So in a lot of modern, say, JavaScript, frameworks do a lot of that. They’ve taken a lot architecturally from functional programming, like React is one that it’s a matter of how do you control your state? And that’s been a problem in the browser since day one. So controlling the state is another important thing. And why am I mentioning these other things about maintainability? Because if you do these things right, if you get these things right, it aids in your maintainability, right? There’s nothing that’s going to fix logic problems. There’s always logic, right? And if you get— if you make a logic problem mistake, there’s nothing there. Like you just made the wrong call. No language is going to save you because it’s got to be powerful enough so you can make those mistakes. Otherwise, you can’t make all the things. So but what it can do is it can restrict you to, you can’t make this mistake, and you can’t make that mistake, and you won’t make this mistake. It restricts you in the mistakes, right? And it makes it easy to do the other things. And that’s where the maintainability really, I think, comes in is the ability to create a system where, if you got the proper modeling of the problem, you’ve properly managed— because really, what are you maintaining software for? You’re fixing problems, right? Or you’re adding features. So that’s all there really is. So if you’re spending all your time fixing problems, then you don’t have time to add any features. And I found that we’ve spent— in the old days we spent more time fixing problems than adding new features. Why? 
Because why are you adding features when you have bugs, right? So you have to fix the bugs first. So when we moved to functional programming, I found that— yeah, we still have logic problems here and there. I mean, we’re still human, but most of our time was spent thinking about new features. Like we would put something into production, you got to have good QA, no matter how great the language is. But if you have good QA and you do your job right, and you have a good solid language that helps you architect it correctly in the first place, then you don’t think about like, oh, I have all these bugs all the time, or these crashes in production. You just don’t have crashes in production. Most of that stuff’s caught before that. The language doesn’t let you paint yourself into a corner. So there’s a lot of those kinds of things. So you’re like, oh, well, what can I add? Oh, let’s add this new feature. And that’s really value add, at the business level, because at the end of the day, it doesn’t matter how cool some technology is. If it doesn’t really have a bottom-line return on investment, there’s no sense in doing it. Unless it’s a hobby, but for most of us, it’s a job, and the bottom line of the business matters. And the bottom line of the business is you want to make improvements to your product so you can get either greater market share, keep your customers happy and keep them from moving to people who can add features to their products. Competitors and so forth. So I think the maintainability part comes with, originally with really good implementation, initial implementation. Cass: So I want to get into that idea of implementations. So oftentimes, when I think about— maybe in the past, I’ve thought about functional languages. And I have thought about them in this kind of academic way, or else things that live in deep black boxes way down in the system. 
But you have been working on PureScript, which is something that is directly applicable to web browsers, which is, when I think about advanced clever mathematical code models, browsers are not necessarily what I would associate them with. That’s kind of a very fast and loose environment, historically. So can you talk a little bit about PureScript and how people can kind of get a little bit of experience in that? Scalfani: PureScript is a statically typed, purely functional language that has its lineage from Haskell, which started as an academic language. And it compiles into JavaScript so that it can run in the browser, but it also can run on the back end, running in Node. Or you can write it and have your program run in Electron, which is like a desktop application. So pretty much everywhere JavaScript works, you can pretty much get PureScript to work. I’ve done it in backends, and I’ve done it in browsers. I haven’t done it in Electron yet, but it’s pretty academic. So that’s totally doable. I know other people have done it. So it doesn’t get more run of the mill, kind of, programming than the browser, right? And JavaScript is a pretty terrible language, honestly. It’s terrible in so many ways because you can shoot your foot off in so many different ways in JavaScript. And every time I have to write a little bit of JavaScript, just the tiniest bit of JavaScript, I’m always getting burned constantly. And so anyway, so what is a pure functional language? A pure functional language is one where all your functions are pure, and a pure function is what I talked about earlier. It only has access to the inputs to a function, it does its computations, and it has its outputs. So that’s kind of like what we did in math, right? You have a function, f of x, x gets some value, and maybe your function is x+2, and so it takes the x, it adds two to it, and the result is whatever that value is, right? Whatever the computation is. So that’s what a purely functional language is. 
It’s completely pure. And there are languages that are hybrids, right? PureScript, Haskell, Elm. These are all languages that are pure. And they don’t compromise. So compromised languages are really great in the beginning, but you can easily lose out on all the benefits, right? It’s the same thing with the goto, right? If we relegated goto to, like, okay, we’re going to stick it in this corner and you sort of don’t want to use it, it doesn’t stop you from pulling it off the shelf and using it all day, right? So it’s best to just eliminate something and not compromise. Not have a compromise language. To me, Scala is a compromise language. It’s not fully functional. And there are lots, like Clojure, I believe, has— even JavaScript. JavaScript, actually, for me, was my introduction to functional programming. There’s functional concepts in JavaScript. And I thought JavaScript was the best thing since sliced bread when I had those things. I didn’t know they were functional at the time, but I’m like, this is something that I’ve been looking for for years, and I finally have it in this language called JavaScript, and I can pass a function as a parameter. I mean, I wanted that for decades. And all of a sudden, I could do it. And so I’m a big proponent of purely functional languages because of that. Because hybrids don’t work well. And all you need is a single library that you’re using that didn’t— the author didn’t use all the benefits, and all of a sudden, now your whole thing is messed up. Whatever you’ve built is tainted by this library that isn’t pure, let’s say. So I think that the benefits of Haskell and PureScript being fully pure are really great. Complications are, you have to think very differently because of that, because we’re not used to thinking that way. There’s all these extra things that have to be built that are all part of the libraries that make that much, much easier. 
But then you have to understand the concepts. So I hope that explains PureScript a little bit. Cass: Well, I literally could go back and forth with you all day because this really is truly fascinating, but I’m afraid we’re out of time. So I do very much want to thank you for talking with us today. Scalfani: Great. Thank you. It was fun. Cass: Yeah. It really was. So today in Fixing the Future, we were talking with Charles Scalfani about functional programming and creating better code. I’m Stephen Cass of IEEE Spectrum, and I hope you’ll join us next time.</itunes:subtitle><itunes:summary>Transcript Stephen Cass: Welcome to Fixing the Future, an IEEE Spectrum podcast. I’m senior editor Stephen Cass, and this episode is brought to you by IEEE Xplore, your gateway to trusted engineering and technology research with nearly 6 million research documents and abstracts. Today we are talking with Charles Scalfani, CTO of Panoramic Software, about how adopting functional programming could lead to cleaner and more maintainable code. Charles, welcome to Fixing the Future. Charles Scalfani: Thank you. Cass: So you recently wrote an expert feature for us that turned out to be incredibly popular with readers, arguing that we should be adopting this thing called functional programming. Can you briefly explain what that is? Scalfani: Okay. Functional programming is an older version of programming, actually, than what we do today. It is basically, as it says, based around functions. So where object-oriented programming has an object model, where it’s everything— you see everything through the lens of an object, and the whole world is an object, and everything in that world is an object. In functional programming, it’s similar: you see everything as a function, and the whole world looks like— everything in the world looks like a function. You solve all your problems with functions. 
The reason it’s older and wasn’t adopted is because the ideas were there, the mathematics, the ideas, and everything were there, but the hardware just couldn’t keep up with it. So it became relegated to academia because the hardware just wasn’t available to do all of the things. Since probably the 90s, that’s been not a problem anymore. Cass: So I just wanted to like, as somebody who is, I would call myself a kind of a very journeyman programmer. So one of the first things I learned when I’m using a new language is usually the section that says, how to define a function, and there’s a little— you know, everybody’s got it, Python’s got it, you know, even some versions of Basic used to have it, C has it. So I think function here means something different from those functions I’m used to in something like C or Python. Scalfani: Yeah. I have a joke that I always say is that when I learned C, the first program I wrote was “hello world.” And when I learned Haskell, a functional programming language, the last thing I learned was “hello world.” And so you really, with C, you did, your first “hello world” was a print function, something that printed to the console, and you could say, “yay, I got my first C program working. Here it is.” But the complexity of doing side effects and IO and all of that is such that it gets pushed aside for just pure functional programming. What does that look like? How do you put functions together? How do you compose them? How do you take these smaller pieces and put them all together? And the idea of side effects is something that’s more advanced. And so when you get into a standard language, you just, kind of, jump in and start writing— everybody writes the “hello world,” thanks to Kernighan and Ritchie, what they did in their book, but in a functional language you really don’t get to do that for a very long time. In fact, in the book that I wrote, it’s hundreds of pages before you actually get to putting something on the screen. 
It’s relegated to the fourth section of the book. So it is a difference in that. Side effects, where you can affect the world, are very standard in imperative languages. The languages that everybody uses: C, and Java, and JavaScript, and Python, and you name it, the standard languages. And that’s why it’s very easy when you first learn a language to just hop in and feel like you’re able to do lots of stuff, and get lots of things done very quickly. And that gets kind of deferred in a functional language. You tend to learn that later. So the kinds of functions that we deal with in functional languages are called pure functions. They’re very different from how we think of functions in programming today, but more like how you think of functions in math. Right? So you have inputs, you have processing that happens in the function, computations that are going to occur in that function, and then you have those outputs. And that’s all. You don’t get to manipulate the world in any way, shape, or form. Cass: So I want to get back into a little bit of that tutorial on how you get started up on stuff. But it sounds to me a little bit like, I’m searching for a model from my previous experience. It sounds to me a little bit like kind of the Unix philosophy of piping very discrete little utility programs together, and then getting results at the end. And that kind of philosophy. Scalfani: Yes. Yeah. That’s a great example. That’s like composing functions using pipes— I’m sorry, composing programs using pipes, and we compose functions in the very same way. And the power of being able to do that, the power they figured out back in Unix, to be able to just say, well, I’ll write this very simple little program that just does one little thing, and then I’ll just take its output and feed it into the next. And it does one little thing. And it’s exactly the same thing, just at a smaller level. Because you’re dealing with functions and not full programs. Cass: Got it. 
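The pipe analogy above can be sketched in TypeScript (an illustrative sketch; the `pipe` helper and the small utility functions are hypothetical names, not from the interview):

```typescript
// Compose small single-purpose functions the way Unix composes programs
// with pipes: the output of each stage feeds into the next.
const pipe = <T>(...fns: Array<(x: T) => T>) => (x: T): T =>
  fns.reduce((acc, fn) => fn(acc), x);

// Three tiny "utilities", each doing one little thing.
const trim = (s: string) => s.trim();
const lower = (s: string) => s.toLowerCase();
const exclaim = (s: string) => s + "!";

// Roughly like `echo "  Hello " | trim | lower | exclaim` in a shell.
const shout = pipe(trim, lower, exclaim);
console.log(shout("  Hello ")); // "hello!"
```

Each stage stays trivially simple and testable on its own; the composition is where the behavior comes from, just as with piped programs.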
But this does seem like a fairly big cultural shift where you’re telling people, you don’t even get to print until you’re halfway through the book and so on. But I think this is something you raised in the article. We have asked programmers before to make fairly big shifts, and the benefits have been immense. And the one you talk about is getting rid of goto, whereby, you know, in the beginning, we all, you know, ten, goto, whatever. And it was this goto palooza. And then we kind of realized that goto had some problems. But even though it was this very simple tool that every programmer used, we’ve kind of mostly weaned ourselves off goto. Can you talk a little bit about sort of the parallels between saying bye-bye to goto and maybe saying bye-bye to some of this imperative stuff? And these things like side effects, and then maybe talk a little bit about what you mean about like global state, and then— because I think that will perhaps illuminate a little bit more about what you mean about side effects. Scalfani: When I started in programming, it was way back in, you know, ’78, ’79, around that time, and everything was a go— you had Basic, a machine with 8K of RAM. That was it. 8K. You didn’t have room to do all the fancy stuff we can do today. And so you had to try to make things as efficient as possible. And it really comes from branching down in the assembly language, right? Everybody was used to doing that, goto the— just jump over here and do this thing and then jump back, maybe, or return from a subroutine, and you had very little machine power to do things. So goto came out of assembly language. And as it got in the higher and higher level languages, and as things got more complicated, then you wound up with what’s called spaghetti code, because you can’t follow the code. It’s like trying to follow a strand of spaghetti in a bowl of spaghetti. 
And so you’re like, well, this is jumping to this and that’s jumping to this and you don’t even remember where you were anymore. And I remember looking at code like that, mostly written in assembly language. And so as structured languages came about, people realized that we could have this kind of branching but do it in a way in which we could abstract it. We could think about it at a more abstract level than down in the details. And so if you look at that, I use it as an example because I look to the past to try to figure out what are we doing today? If we take imperative languages and if we move to functional, we are giving up a lot of things. You can’t do this and you can’t do that. You don’t do side effects. You don’t have global state. There’s all these things that you— there’s no such thing as a null pointer or a null value. Those things don’t exist here in this way of thinking. And it’s like you have to ask yourself, wait, wait, I’m giving up these things that I’m very familiar with and, well, how do you do things then in this new way? And is it beneficial or is it just a burden? So at first, it feels like a burden, an absolute burden. It’s going to, because you’re so used to falling back on these old ways of doing things and old ways of thinking. And especially when I— I was like 36 years or 30 some odd years into programming in imperative languages, and then all of a sudden I’m thinking functionally. And now I have to change my whole mode of thinking. And you really have to say, well, is it beneficial? So I kind of look to the past. Getting rid of the goto was highly beneficial. And I would never advocate for bringing it back. And people did comment on the article saying, “well, yeah, these languages have goto,” but not the goto I’m talking about. They still have these kinds of controlled gotos in C, not where you could just jump to the middle of anywhere. And that’s the way things were back in the day. So, yeah, things were pretty wild back then. 
And we wrote much simpler bits of software. You didn’t use libraries. You didn’t run in operating systems always. I did a lot of embedded coding in the early days. And so you wrote everything. It was all your own code. And back then, you might have written, I don’t know, maybe you wrote a thousand lines of code. And now we’re working in millions of lines of code. So it’s a very different world, but when we came out of that early stage, we started shedding these bad habits. And we haven’t done that over time. And I think you have to shed some bad habits to move to functional. Cass: So I do want to really get into what the benefits of functional programming are, especially with, I think, the idea of thinking about maintenance instead of sort of the white hot moment of creation, when everybody loves to write that first draft, and really thinking about how software is used. But I did just want to unpack a sentence there. And it’s something that also comes from C, and it’s not necessarily something that is baked into assembly in the same way, but it does come into C, which is this idea of the null pointer. You mentioned the null. And can you talk just a little bit about the null and why it causes so many problems, not just for C, but for all of the sort of, as you call them, curly bracket languages that inherit from it. Scalfani: Right. So in most of those languages, they all support this idea of a null. That is, you don’t have anything. So you either have a value or you don’t have a value. And it’s not— it’s sort of like just this idea that every reference to something could potentially not— have no reference, right? You have no reference. So think of an empty bucket, right? Cass: Just for maybe readers who are not familiar. So a pointer is something that points to a bit of memory where some piece of information is stored. And usually at that point, there’s a valid value. But sometimes there’s just junk. 
And so a null pointer kind of helps you tell, ideally, whether the pointer is pointing to something useful or pointing to junk? Would that be kind of a fair summary or am I butchering it a little? Scalfani: Yeah, I think at the lowest level, like if you think about C or assembly, you always have a value somewhere, right? And so what you would do is you would say, okay, so they always point to something. But if I have an address of zero at the very lowest level here, if I have an address— so if my register has a value of zero in it, and I usually use that register to dereference memory, to point to someplace in memory, then that’s going to be treated specially as, oh, that’s not pointing anywhere in particular. There is no value that I’m referencing. So it’s a nothing. I have no reference. I have nothing, basically, in my hands. Cass: So it’s not that something is there, it’s just that the language is trained that if I see a zero, that’s a flag, there’s nothing there. Scalfani: Right. Right. Exactly, exactly. Cass: And then so then how does this then— so that sounds like a great idea. Wonderful. So how does this then— Scalfani: It is. Cass: Well, how does this cause problems later on? I’ve got this magic number that tells me that it’s bad stuff there. Why does this thing cause problems? And then how can functional programming really help with that? Scalfani: Okay. So the problem isn’t in this idea. It’s sort of a hack. It’s like, oh, well, we’ll just put a zero in there. And then we’ll have to— so that was, okay, that solved that problem. But now you’re just kicking the can. So everywhere down the road where you’re dealing with this thing, now everybody has to check all the time. Right? And it’s not a matter of having to check, because the situation where you have something or you don’t have something is a valid situation, right? So that’s a perfectly valid thing. But it’s when you forget to check that you get burned. 
And it’s not built into most of the languages to where it does the checking for you, and you have to say, oh, well, this thing is a null, or if it’s not a null, then do this. There’s all these if checks. And you just pollute your code with all the checks everywhere. Now, functional programming doesn’t eliminate that. It’s not magic. It doesn’t eliminate it. But many of the functional languages, at least the ones that I’ve worked in, they have this concept of a maybe, right? So a maybe is, it can either be nothing, or it can be just something. Other languages call it an option. But it’s the same idea. And so you either have nothing, or you just have this value. And because of the way that that’s implemented, and I won’t go into gory details, but because of it, the compiler won’t compile if you didn’t handle both cases. And so you’re forced to always handle it, as opposed to the null, where you can choose to handle it or not, and you could choose to forget it, or you could go— you could not even know that it could be a null, and you could just assume you have a good value all the time. And then you don’t know until you’re running your program that, oh, you made a mistake. The last place you want to find out is in production when you hit a piece of code that is run rarely, but then you didn’t do your null check, and then it crashes in production and you’ve got problems. With the maybe, you don’t have a choice. You can’t compile it. You can’t even build your program. It really is a great tool. And many times, I still don’t like the maybe. Because it’s like, ugh, I have to handle maybe. Because it forces your hand. You don’t have a choice. Ideally, yes, that’s the right thing, but I still grumble. Cass: I mean, I think the tendency is always to take the shortcut because you think to yourself, oh, this will never— This will never be wrong. It’s fine. I mean, I do it all the time. 
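The maybe idea above can be sketched in TypeScript as a discriminated union (an illustrative sketch, not PureScript’s actual Maybe; all names here are hypothetical). Under TypeScript’s strict settings, a function that returns from an exhaustive `switch` fails to compile if a case is left unhandled, which is the "you don’t have a choice" property being described:

```typescript
// A Maybe type: either Nothing, or Just a value.
type Maybe<T> = { tag: "nothing" } | { tag: "just"; value: T };

const nothing = <T>(): Maybe<T> => ({ tag: "nothing" });
const just = <T>(value: T): Maybe<T> => ({ tag: "just", value });

// A lookup that can fail returns a Maybe instead of null.
function findUser(id: number): Maybe<string> {
  const users: Record<number, string> = { 1: "Ada", 2: "Grace" };
  return id in users ? just(users[id]) : nothing();
}

// The caller must handle both cases; deleting one makes the function
// no longer return a string on every path, a compile-time error rather
// than a rarely-hit null crash in production.
function greet(id: number): string {
  const user = findUser(id);
  switch (user.tag) {
    case "just":
      return `Hello, ${user.value}`;
    case "nothing":
      return "No such user";
  }
}

console.log(greet(1)); // "Hello, Ada"
console.log(greet(9)); // "No such user"
```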
I know when I write even the limited— I know I should be checking a return value. I should be writing it so that, if something goes wrong, it returns an error value, and I should be checking for that error value. But do I do that? No, I just carry on my merry way. Scalfani: Because we know better, right? We know better. Cass: Right. So I do want to talk a little bit about the benefits, then, that functional programming can bring. And you make the case for some of these concrete benefits. And especially when it comes to maintenance. And as I say, I think, one of the charges that’s fairly laid against maybe sort of the software enterprise as a whole is that it’s great at creating stuff and inventing stuff, but not so good at maintaining stuff, even though there are examples we have of code, very important code that runs very important systems, that sits around for decades. So maintainability is kind of actually super important. So can you talk a little bit about those benefits, especially with regard to maintainability? Scalfani: Yeah. So I think, so before you even get into maintainability, there’s always the architectural phase, right? You want to model the problem well. So you want to have a language that can do really— can really aid you in the proper modeling of your types. And so that you can model the domain. So that’s the first step, because you can write bad code in any language, right? In any technology, you can destroy it. No matter how great the technology is, you can wreak havoc with it. So no technology is magical in that it’s going to keep you from doing bad things. The trick about technology is that you want it to help you do good things. And encourage you and make it easy to do those good things. So that’s the first step, is to have a language that’s really good about modeling. And then the next thing is you want to— we haven’t talked about global state, but you need to control the global state in your program. 
And in the early days, going back to assembly, every variable, every memory location is global, right? There is no local. The only local data you might have is if you allocated memory on a stack, or if you have registers and you pushed your old registers as you went into a subroutine, things like that. But basically everything was global. And so, as languages have been progressing, we’ve been making things more local, limiting what’s in scope. Who has access to this variable? Who doesn’t have access to the variable? And if you just follow that line, as you get to functional programming, you control your global state, right? And so there is no global state. You actually are passing state around all the time. So in a lot of modern, say, JavaScript, frameworks do a lot of that. They’ve taken a lot architecturally from functional programming, like React is one where it’s a matter of how do you control your state? And that’s been a problem in the browser since day one. So controlling the state is another important thing. And why am I mentioning these other things about maintainability? Because if you do these things right, if you get these things right, it aids in your maintainability, right? There’s nothing that’s going to fix logic problems. There’s always logic, right? And if you get— if you make a logic mistake, there’s nothing there. Like you just made the wrong call. No language is going to save you because it’s got to be powerful enough so you can make those mistakes. Otherwise, you can’t make all the things. But what it can do is restrict you: you can’t make this mistake, and you can’t make that mistake, and you won’t make this mistake. It restricts you in the mistakes, right? And it makes it easy to do the other things. 
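The idea of passing state around instead of keeping it global can be sketched in TypeScript (an illustrative sketch; the `CounterState` type and function names are hypothetical):

```typescript
// Instead of mutating a global variable, each function takes the
// current state as input and returns a new state as output.
type CounterState = { count: number };

function increment(state: CounterState): CounterState {
  return { count: state.count + 1 }; // no mutation: a fresh state value
}

function describe(state: CounterState): string {
  return `count is ${state.count}`; // reads only what it was given
}

// State is threaded through explicitly, so every dependency is visible
// in the function signatures; nothing touches it behind your back.
let state: CounterState = { count: 0 };
state = increment(state);
state = increment(state);
console.log(describe(state)); // "count is 2"
```

This is the discipline that frameworks like React borrowed: state flows through functions explicitly rather than living in a shared mutable global.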
And that’s where the maintainability really, I think, comes in: the ability to create a system where, if you got the proper modeling of the problem, you’ve properly managed— because really, what are you maintaining software for? You’re fixing problems, right? Or you’re adding features. So that’s all there really is. So if you’re spending all your time fixing problems, then you don’t have time to add any features. And I found that we’ve spent— in the old days we spent more time fixing problems than adding new features. Why? Because why are you adding features when you have bugs, right? So you have to fix the bugs first. So when we moved to functional programming, I found that— yeah, we still have logic problems here and there. I mean, we’re still human, but most of our time was spent thinking about new features. Like we would put something into production, you got to have good QA, no matter how great the language is. But if you have good QA and you do your job right, and you have a good solid language that helps you architect it correctly in the first place, then you don’t think about like, oh, I have all these bugs all the time, or these crashes in production. You just don’t have crashes in production. Most of that stuff’s caught before that. The language doesn’t let you paint yourself into a corner. So there’s a lot of those kinds of things. So you’re like, oh, well, what can I add? Oh, let’s add this new feature. And that’s really value add, at the business level, because at the end of the day, it doesn’t matter how cool some technology is. If it doesn’t really have a bottom-line return on investment, there’s no sense in doing it. Unless it’s a hobby, but for most of us, it’s a job, and the bottom line of the business matters. 
And the bottom line of the business is you want to make improvements to your product so you can get either greater market share, keep your customers happy and keep them from moving to people who can add features to their products. Competitors and so forth. So I think the maintainability part comes with, originally with really good implementation, initial implementation. Cass: So I want to get into that idea of implementations. So oftentimes, when I think about— maybe in the past, I’ve thought about functional languages. And I have thought about them in this kind of academic way, or else things that live in deep black boxes way down in the system. But you have been working on PureScript, which is something that is directly applicable to web browsers, which is, when I think about advanced clever mathematical code models, browsers are not necessarily what I would associate them with. That’s kind of a very fast and loose environment, historically. So can you talk a little bit about PureScript and how people can kind of get a little bit of experience in that? Scalfani: PureScript is a statically typed, purely functional language that has its lineage from Haskell, which started as an academic language. And it compiles into JavaScript so that it can run in the browser, but it also can run on the back end, running in Node. Or you can write it and have your program run in Electron, which is like a desktop application. So pretty much everywhere JavaScript works, you can pretty much get PureScript to work. I’ve done it in backends, and I’ve done it in browsers. I haven’t done it in Electron yet, but it’s pretty academic. So that’s totally doable. I know other people have done it. So it doesn’t get more run of the mill, kind of, programming than the browser, right? And JavaScript is a pretty terrible language, honestly. It’s terrible in so many ways because you can shoot your foot off in so many different ways in JavaScript. 
And every time I have to write a little bit of JavaScript, just the tiniest bit of JavaScript, I’m always getting burned constantly. And so anyway, so what is a pure functional language? A pure functional language is one where all your functions are pure, and a pure function is what I talked about earlier. It only has access to the inputs to a function, it does its computations, and it has its outputs. So that’s kind of like what we did in math, right? You have a function, f of x, x gets some value, and maybe your function is x+2, and so it takes the x, it adds two to it, and the result is whatever that value is, right? Whatever the computation is. So that’s what a purely functional language is. It’s completely pure. And there are languages that are hybrids, right? PureScript, Haskell, Elm. These are all languages that are pure. And they don’t compromise. So compromised languages are really great in the beginning, but you can easily lose out on all the benefits, right? It’s the same thing with the goto, right? If we relegated goto to, like, okay, we’re going to stick it in this corner and you sort of don’t want to use it, it doesn’t stop you from pulling it off the shelf and using it all day, right? So it’s best to just eliminate something and not compromise. Not have a compromise language. To me, Scala is a compromise language. It’s not fully functional. And there are lots, like Clojure, I believe, has— even JavaScript. JavaScript, actually, for me, was my introduction to functional programming. There’s functional concepts in JavaScript. And I thought JavaScript was the best thing since sliced bread when I had those things. I didn’t know they were functional at the time, but I’m like, this is something that I’ve been looking for for years, and I finally have it in this language called JavaScript, and I can pass a function as a parameter. I mean, I wanted that for decades. And all of a sudden, I could do it. 
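Passing a function as a parameter, the feature Scalfani describes wanting for decades, can be sketched in TypeScript (an illustrative sketch; the names `applyTwice`, `double`, and `addTen` are hypothetical):

```typescript
// A higher-order function: it takes another function as a parameter.
// The caller decides what the transformation is; applyTwice only
// decides how many times to apply it.
function applyTwice(f: (x: number) => number, x: number): number {
  return f(f(x));
}

const double = (x: number) => x * 2;
const addTen = (x: number) => x + 10;

console.log(applyTwice(double, 3)); // 12
console.log(applyTwice(addTen, 3)); // 23
```

Because functions are values, the same plumbing works with any transformation you hand in, which is what makes this style so reusable.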
And so I’m a big proponent of purely functional languages because of that. Because hybrids don’t work well. And all you need is a single library that you’re using that didn’t— the author didn’t use all the benefits, and all of a sudden, now your whole thing is messed up. Whatever you’ve built is tainted by this library that isn’t pure, let’s say. So I think that the benefits of Haskell and PureScript being fully pure are really great. Complications are, you have to think very differently because of that, because we’re not used to thinking that way. There’s all these extra things that have to be built that are all part of the libraries that make that much, much easier. But then you have to understand the concepts. So I hope that explains PureScript a little bit. Cass: Well, I literally could go back and forth with you all day because this really is truly fascinating, but I’m afraid we’re out of time. So I do very much want to thank you for talking with us today. Scalfani: Great. Thank you. It was fun. Cass: Yeah. It really was. So today in Fixing the Future, we were talking with Charles Scalfani about functional programming and creating better code. I’m Stephen Cass of IEEE Spectrum, and I hope you’ll join us next time.</itunes:summary><itunes:keywords>Type-podcast, Fixing-the-future, Javascript, Functional-programming, Haskell</itunes:keywords></item><item><title>Rerouting Intention And Sensation In Paralyzed Patients</title><link>https://spectrum.ieee.org/neural-bypass-for-paralysis</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=33293244&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/90e9de12" width="100%"></iframe><h3>Transcript</h3><p><strong>Eliza Strickland:</strong> Paralysis used to be thought of as a permanent condition, but over the past two decades, engineers have begun to find workarounds. They’re building on a new understanding of the electrical code used by the nervous system. I’m Eliza Strickland, a guest host for <em>IEEE Spectrum’s</em> Fixing the Future podcast. Today I’m talking with <a href="https://www.linkedin.com/in/chad-bouton-ba0825a/" target="_blank">Chad Bouton</a>, who’s at the forefront of this electrifying field of research. Chad, welcome to the program, and can you please introduce yourself to our listeners?</p><p><strong>Chad Bouton: </strong>Yes, thanks so much, Eliza, for having me. And my name is Chad. I’m at the <a href="https://feinstein.northwell.edu/" target="_blank">Northwell Health Feinstein Institute for Medical Research</a>.</p><p><strong>Strickland:</strong> And can you tell me a bit about the patient population that you’re working with? I believe these are people who have become paralyzed, and maybe you can tell us how that happened and the extent of their paralysis.</p><p><strong>Bouton:</strong> Absolutely. Absolutely. In fact, we work with folks that have been paralyzed either from a traumatic injury, stroke, or even a brain injury. And there are over 100 million people worldwide that are living with paralysis. 
And so it’s a very devastating and important condition, and we are working to restore not only movement, but we’re making efforts to restore sensation as well, which is often not the focus and certainly should be.</p><p><strong>Strickland:</strong> So these are people who typically don’t have much movement below the head, below the neck?</p><p><strong>Bouton: </strong>So we have focused on tetraplegia or quadriplegia because, obviously, it’s extremely important and it is very difficult to achieve independence in our daily lives if you don’t have the use of your hands in addition to not being able to move around and walk. And it surprisingly accounts for about half of the cases of spinal cord injury, even slightly more than half. And it used to be thought of as something that was a more rare condition, but with car accidents and diving accidents, it’s a prominent and critical condition that we need to really address. And there’s no cure currently for paralysis. No easy solution. No simple fix at this point.</p><p><strong>Strickland: </strong>And from your experiences working with these people, what kind of capabilities would they like to get back if possible?</p><p><strong>Bouton: </strong>Well, individuals with paralysis would like to really regain independence. I’ve had patients and study participants comment on that and really ask for advances in technology that would give them that independence. I’ll speak to some of the things we’re doing in the lab, but folks often ask, “Could we take this home or take it outside the lab?” And we’re certainly working to do that as well. But the goal is to be more independent, ask for help less, be able to achieve functional abilities to do even things that we might consider just basic necessities, feeding, grooming, and even some of the personal aspects, being able to hold someone’s hand and to feel that person’s hand or a loved one’s hand. 
Those are the things that we’re really targeting and working hard to address.</p><p><strong>Strickland:</strong> Yeah, I think it’s really interesting that your group is focused on hands. There are other groups that are working on letting people walk again, but the hands feel like a very obviously important target too.</p><p><strong>Bouton: </strong>Yeah, absolutely. And in fact, there have been studies and widespread surveys on this topic, and folks that are living with tetraplegia or quadriplegia prioritize or say their top desire is to move their hands again. And if you step back and think about it for a second, it makes sense because we rely on our hands so much. And even losing one hand, say from a stroke, can be devastating and very disruptive to our lives.</p><p><strong>Strickland: </strong>Yeah, let’s go over the basics of electrophysiology for listeners who don’t have a background in that area. I love this field. It has such a long history that goes back to the 1780s when <a href="https://www.unibo.it/en/university/who-we-are/our-history/famous-people-and-students/luigi-galvani" target="_blank">Luigi Galvani </a>touched an exposed nerve of a dead frog with a scalpel that had an electric charge and saw the frog’s leg kick.</p><p><strong>Bouton:</strong> Yes.</p><p><strong>Strickland:</strong> Can you explain how the nervous system uses electricity?</p><p><strong>Bouton:</strong> Yes, absolutely. So it’s an electrochemical phenomenon. And of course, it involves neurotransmitters as well. When a neuron fires, as we say, that’s an electrical impulse. It only lasts a very brief moment, less than a thousandth of a second. But basically, there’s a polarization of the neuron itself and charges that are passing through ion channels. So what does this mean? Well, it’s kind of like in a computer where you have zeros and ones. 
For a brief moment, that cell has changed from, let’s say, a zero to a one, and it is firing or having this impulse that represents that binary one. And what’s so neat about it is that the firing rate, so basically how often those impulses are happening or how fast they’re happening, carries information. And then, of course, which neurons or nerve fibers carry the information or which ones are firing is what we call spatial encoding. So you have temporal encoding and spatial encoding. Those together can carry a tremendous amount of information or can mean different things, whether it’s a motor event where there’s a need to activate certain muscles in the hand, the fingers, the legs, or any muscle throughout the body. And we also have sensory information that gets encoded by the same approach. And so information can pass from the brain to the body and from the body back to the brain, and we have these two-way information highways all throughout our central and peripheral nervous system. I call it often the most complex control system in nature, and we’re still trying to understand it.</p><p><strong>Strickland:</strong> Yeah, so for a person with tetraplegia, these electrical messages from the brain are essentially not getting through. The highway is blocked, right?</p><p><strong>Bouton: </strong>That’s right. Absolutely. And so let’s walk through that scenario. So now, someone who’s had a car accident or a diving accident, often the highest level of stress occurs at the base of the neck, and we call that C5, so that’s the fifth cervical vertebra. Often that cord gets damaged because the vertebra itself, which normally would protect that cord, unfortunately gets fractured and can then slip or slide and can actually crush or damage the cord itself. So then what is often misunderstood is that you don’t get a simple complete shutdown. You get damage and certain levels of damage or amounts of damage. 
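The temporal and spatial encoding Bouton describes above can be sketched as a toy model (the rate ceiling and one-channel-per-fingertip layout here are illustrative assumptions of mine, not the lab’s actual scheme):

```javascript
// Toy model of neural coding: temporal coding maps stimulus intensity
// to a firing rate; spatial coding selects WHICH channel carries it.
const MAX_RATE_HZ = 200; // assumed firing-rate ceiling for this sketch

// Temporal encoding: intensity in [0, 1] -> spikes per second.
const encodeRate = (intensity) => Math.round(intensity * MAX_RATE_HZ);

// Spatial encoding: one channel per fingertip (illustrative layout).
const channels = { thumb: 0, index: 1, middle: 2, ring: 3, pinky: 4 };

const encodeTouch = (finger, pressure) => ({
  channel: channels[finger],
  rateHz: encodeRate(pressure),
});

const decodeTouch = ({ channel, rateHz }) => ({
  finger: Object.keys(channels)[channel],
  pressure: rateHz / MAX_RATE_HZ,
});

const spike = encodeTouch("index", 0.5);
console.log(spike);              // { channel: 1, rateHz: 100 }
console.log(decodeTouch(spike)); // { finger: 'index', pressure: 0.5 }
```

The round trip recovers both which fingertip was touched (spatial) and how hard (temporal), which is the sense in which the two codes together carry the information.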
And what can happen is someone can become paralyzed but lose sensation as well along with motor capability. It’s not going to be the same for everyone. There’s different levels of it. But usually, there’s damage, and signals are able to get through but often very attenuated, very weak. And so I’ll talk through some of the approaches we’re taking now to boost, if you will, those signals and try to enhance those signals. The good news is that we’re finding more and more that those signals are there and can be boosted or enhanced, which is very, very exciting because it’s opening new doors to new therapies that we’re developing.</p><p><strong>Strickland:</strong> Yeah, I love that you call your system the neural bypass, which is very evocative. You can imagine picking up the signals in the brain, getting around the blockage, and sending the information onto the muscles. So maybe we can talk about the first part of that first. How do you get the information from the brain?</p><p><strong>Bouton: </strong>Well, yes, the neural bypass, so it’s funny because that phrase was used very briefly back in the ‘70s. And then it kind of went away and I think really because it wasn’t possible with technology at that time. But then in the early 2000s, we started to really explore this concept and use that phrase again and say, if we can put a microelectrode array in the brain, which we did back around 2005, 2006, and a number of colleagues and various team members kind of looked at that and said, yes, we can record from the brain. We can even stimulate the brain. But we said, why couldn’t we take that information, reroute it, as you say, around an injury or even a damaged part of the nervous system or the brain itself and create this neural bypass, and then reinsert the signals or link those signals directly to muscle stimulation? And that was what we called the one-way bypass, neural bypass. And why couldn’t we do that and restore movement? 
And so we attempted to do that and were thankfully successful in 2014. In fact, we had enrolled a young man named <a href="https://ianburkhart.com/" target="_blank">Ian Burkhart</a>. His name, of course, became public, and he was the first paralyzed individual to regain movement using a brain implant that formed this neural bypass, this one-way or unidirectional neural bypass. And it was very, very exciting, and he was able to do some pretty amazing things with this approach. And in fact, I still remember when he first drank from a glass on his own. He reached out, opened his fingers using the bypass, which he hadn’t been able to do for four years since his accident, and he was able to open his hand by himself without help, pick up a glass, bring it to his lips, and be able to just take a drink. It was really quite a moment, and the entire team and myself were very moved and we thought we’re really taking an important step forward here.</p><p><strong>Strickland:</strong> Ian Burkhart also played <em><a href="https://en.wikipedia.org/wiki/Guitar_Hero" target="_blank">Guitar Hero</a></em> if I remember right. Is that correct?</p><p><strong>Bouton:</strong> Yeah, so another very, very exciting moment was when we explored the idea of rhythmic movements in the hand. So I’ll do a little experiment here. We’ll do it even though this is a podcast, but we can all do this experiment. If you hold up one hand-- and you should try this, Eliza. Okay, so hold up, say, either left or right. Now take your other hand and drum your fingers against the palm of your hand and go very, very fast. Okay, now stop, and now try to reverse directions. Okay. And is it awkward and harder? Okay, so now pay attention to which way was the fastest, what we would call, quote, “natural” way for you. Was it pinky to index or index to pinky?</p><p><strong>Strickland: </strong>Pinky to index was really easy for me. 
The other way was almost impossible.</p><p><strong>Bouton:</strong> Okay, well, you’re in what we call the normal group. So 85 percent of the population goes in the faster, more natural direction, from pinky to index. Only 15 percent of the population goes from index to pinky. And the question is, why in the world is there a wiring, if you will, or a natural direction? And we looked at rhythmic movements. As we looked at the electrode array and the signals we were recording, we could see there was a group or an ensemble of neurons that were firing when he was thinking about rhythmic movements, say just wiggling a finger. And then there’s a totally different group when you actually try to do a static movement of that finger. You’re trying to press it and hold that finger in a certain position. So we thought, let’s see if we can decipher these different groups. And then we linked those signals back to neuromuscular stimulators that we had developed, and we then asked the question, could Ian or others move the fingers in a more dynamic way? And we published another paper on this, but he was able to dynamically move his fingers and then also statically move those, and he could then play <em>Guitar Hero</em> just by thinking about different static or sustained movements and holding a note, let’s say, on the guitar or dynamically doing riffs. And we <a href="https://www.youtube.com/watch?v=60fAjaRfwnU" target="_blank">have videos</a> and whatnot online. But it was really amazing to deepen our understanding but also to allow, again, a little more independence, allow someone to do something fun, a little bit more recreational too.</p><p><strong>Strickland:</strong> Sure, sure. So Ian was using implanted electrodes to get his brain signals. Can you walk us through the different approaches, implants versus wearables?</p><p><strong>Bouton:</strong> Yes, actually, there are a number of ways of tapping into the nervous system and specifically into the brain. 
And a more recent approach we’ve been taking is to use a minimally invasive procedure to place a very thin electrode. It’s called a stereo electroencephalogram-type electrode, an SEEG. And these are used routinely at our location and a number of locations around the world for mapping the brain in epilepsy patients. But now we ask the question, well, could we use these electrodes to record and stimulate in the motor and sensory area? And we just recently this past year did both, and our findings were quite striking. We were able to not only decode individual finger movements with this different type of electrode and approach, but we were also able to stimulate in primary sensory cortex actually down in the central sulcus. That’s right between your motor and sensory area. And on the wall of the sulcus on the sensory side, we were able to stimulate and elicit highly focal percepts at the fingertips. And this has been a challenge with different electrodes, like the kind of electrodes that I was previously talking about, which were placed on the surface of the brain, not down into the sulcus. So this has allowed us to answer new questions and is also opening up a door to a minimally invasive approach that could be extremely effective in trying to restore even finer movements of the human hand and also sensations. You have to know that you can’t button your shirt without tactile feedback, and getting that feedback at the tips of the fingers is so important for fine motor tasks and dexterous hand movement, which is one of the goals of our lab and center.</p><p><strong>Strickland: </strong>Yeah, I wanted to ask about this idea of the two-way bypass. So in this idea, you have sensors on your fingers or on your hand, and those are sending information to electrodes that are conveying it to the brain?</p><p><strong>Bouton:</strong> That’s absolutely right. 
With the fingertips and the thin membrane sensors that we’ve developed, we can pick up not only the pressure level at the fingertips but also even directional information. So in other words, when we pick up, say, a cup, I have one here on my desk, and I’m picking this cup up. There’s a downward, what we call shear force that’s pushing the skin down towards the floor. And this is additional information the brain receives so that we know, oh, we’re picking something up that has some weight to it. And you don’t even realize you’re doing this, but there’s a circuit, a relatively complex circuit that involves interneurons in the spinal cord that tightens that grip naturally. You don’t, again, realize you’re doing it. Just a little subtle increase in your grasp. And so when we want to create a bidirectional or a two-way neural bypass, we have to use that information from the sensors, we have to route that back into our computer, we have to decode or decipher that information. That part is straightforward from the sensors, but then how do you encode that information so the brain will interpret that as, oh, I feel not only some kind of sensation at my fingertips, but what’s the level of that sensation?</p><p>And we just, again, last year, were able to show that we can encode the different levels of pressure or force felt, and the participants have reported very accurately what those levels are. And then once the computer understands and interprets that and then starts to send signals back to another set of what we call microstimulators that stimulate the brain, again, with the right firing rate or frequency, then the challenge still remains to make that feel natural. Right now, people still report it’s a bit of a slightly artificial sensation sometimes, or they feel like, I feel this pressure in different levels, but it’s a little bit electrical or even mechanical, like a vibration. But it is still extremely useful, and we’re still refining that. 
But now what you’ve done is you’ve started to close the loop, right? Not only can signals from the brain be interpreted and sent to stimulation devices for muscle activation, we can also pick up the sensation, the tactile sensation, send it back into the brain, and now we have a fully closed loop or a bidirectional bypass.</p><p><strong>Strickland: </strong>So when you’re sending commands to muscles to have the hand do some movement, how much do we understand the neural code that makes one finger move versus another one?</p><p><strong>Bouton: </strong>Yeah, that’s a great question. So we surprisingly understand a fair amount about that after many years and many groups looking at this. We now understand that we can change the firing rate, and we can change how fast we’re stimulating or how fast we need to stimulate that muscle to get a certain contraction level. Recording this signal, understanding the signal from the motor cortex in the brain and how that translates to a different level of contraction, we also understand much better now. Even understanding if it should be a static movement or a dynamic movement, I spoke a little bit to that. I think what’s hard, that we’re still trying to understand, is synergistic movements, when you want to activate multiple fingers together and do a pinch grasp or you want to do something more intricate. There have been studies where people have tried to understand the signal when someone flips a quarter between the fingers, you’ve seen this trick, or a drumstick when you’re spinning it around and manipulating it and transferring it from one pair of fingers to another. Those super complex movements involve motor and sensory networks working together very, very, very closely. And so even if you’re, say, listening in or eavesdropping in on the motor cortex, you only really have half the picture. 
You only have half the story.</p><p>And so one of the things we’re going to be looking at, and we now have FDA clearance to do this, is to record in both motor and sensory and then to be able to stimulate in the sensory area of the brain. But by recording in both motor and sensory, we can start to look more deeply into this question of, well, how are those networks communicating with each other? How do we further decode or decipher that information? I have someone in my lab, <a href="https://www.linkedin.com/in/sadegh-ebrahimi-38207157/" target="_blank">Dr. Sadegh Ebrahimi</a>, who did his graduate work at Stanford and his postdoc work there, he looked at the question of how do different areas of the brain communicate and pass these massive amounts of information back and forth, and how are they connected, and how does this information flow? He is going to be looking at that question along with, can we use reinforcement learning techniques to further refine our decoding and more importantly our encoding and how we stimulate and how we even stimulate the muscles and get all of these networks working together?</p><p><strong>Strickland: </strong>And for the electrodes that are controlling movement, are those a wearable system that people can just have on their arm?</p><p><strong>Bouton:</strong> Yes, we’re very excited to announce that we’re now developing wearable versions of the neuromuscular stimulation technology, and our hopes are to make this available outside the lab in the next year or two. What we have done is we’ve developed very thin, flexible electrode arrays that have been ruggedized and encapsulated in a silicone material. And there are literally over 200 electrodes now that we have in these patches, and they’re able to precisely stimulate different muscles. 
But what’s so fascinating is that by using the right electrical waveforms, which we have been optimizing for a number of years, and the right electrode array design, it turns out we can isolate individual finger movements very accurately. We can even get the pinky to move in very unique ways and the thumb in multiple directions. And with this approach being wireless, lightweight, and thin, people can actually wear it under their clothes and use it out and about, outside the lab, in their homes. And so we’re really looking forward to accelerating this.</p><p>And you can link this wearable technology either to a brain-computer interface, which is what we’ve been talking a lot about, or there’s even a stand-alone mode where it uses the inertial sensing of what we call body language or basically body movements. These would be the residual movements that individuals are able to do even after their injury. It might be shoulder movement or lifting their arm. Often, in a C5-level injury, the biceps are spared, thankfully, and one can lift their arm and lift their shoulders. So folks can reach, but they can’t open and use their hand. But with this technology, we infer what they want to do. If they’re reaching for a cup of water, we can infer, ah, they’re reaching with a certain trajectory, and we use our machine learning or AI algorithms to detect, even before the hand gets to the target, that they’re trying to reach and do what we call a power grasp or a cylindrical grasp. And we start to stimulate the muscles to help them finish that movement that they can’t otherwise do on their own. And this will not allow, say, playing <em>Guitar Hero</em>, but it is allowing folks to do very basic types of actions like picking up a cup or feeding themselves. We have a video of someone picking up a granola bar and a participant who fed himself for the first time. 
And that was also really an incredible moment because really achieving that independence is what we’re trying to do at the end of the day.</p><p><strong>Strickland:</strong> Yeah, let’s talk a little bit about commercialization. I imagine it’s a very different story when you’re talking about brain implants versus noninvasive devices. So where are you in that pathway?</p><p><strong>Bouton:</strong> Yeah, so you’re absolutely right. There’s a big difference between those two pathways. I spent many years commercializing technologies. And when you take them out of the lab and try to get through what we call the valley of death, it’s a tough road. And so what we decided to do is carve out the technology from the lab that was more mature and had a more direct regulatory path. We have been working closely with the FDA on this. We formed a company called Neuvotion, and Neuvotion is solely focused on taking the noninvasive versions of the technology and making those available to users and those that really can benefit from this technology. But the brain-computer interface itself is going to take a little bit longer in terms of the regulatory pathway. Thankfully, the FDA has now, as of last year, issued a guidance document, which is always a first step and a very important step. And this is a moment in time where it is no longer a question of whether we will have brain-computer interfaces for patients, but it’s now just a question of when.</p><p><strong>Strickland: </strong>Before we wrap up, I wanted to ask you about another very different approach to helping people with tetraplegia. So some researchers are using brain-computer interface technology to read out intentions from the brain, but then sending those messages to robotic limbs instead of the person’s own limbs. Can you talk about the tradeoffs, the challenges, and the advantages of each approach?</p><p><strong>Bouton: </strong>Absolutely. 
So the idea of using a brain-computer interface to interface with a robotic arm was and is an important step forward in understanding the nervous system and movement and even sensation. But the comment I heard from a number of participants through the years is that at the end of the day, they would like to be able to move their own arm and feel, of course, with their own hands. And so we have really been focused on that problem. However, it does bring in some additional challenges. Not only is a biological arm more complex and more difficult to control and you have fatigue, muscle fatigue, and things like this to deal with, but also, there’s another complication in the brain. So when we reach out for something and pick up a cup, as I talked about earlier, the nervous system reacts to the weight of the cup and different things happen. Well, there’s another issue, too, when you stimulate in the sensory area and you cause a percept. Someone says, “Okay, I feel kind of pressure on my fingertips.” Well, the sensory cortex is right next door to the primary motor cortex, S1 and M1 as they’re called. And so you have all these interconnections, a huge number of interconnections.</p><p>And so we hypothesize, and we have some evidence on this already, that when you stimulate and you start to encode and put information or you’re writing into the brain, if you will, well, guess what? When you’re on the read side and you’re reading from the motor cortex, because of all those interconnections, you’re going to cause changes in what we call modulation. You’re going to see changes in patterns. This is going to make the decoding algorithms more difficult to architect. We predicted this would happen when Ian became the first person to move his hand and to be able to pronate his arm. We predicted that during the transfer of objects, there might be difficulties and changes in the modulation that would affect the decoding algorithms. And indeed that did happen. 
So we believe as we close the loop on this bidirectional neural bypass, we’re going to run into similar challenges and changes in modulation, and we’re going to have to adapt to that. So we’re also working on adaptive decoding. And there’s been some great work in this area, but with actually reanimating or enabling movement and sensation in the human arm itself and the human hand itself, we believe we’re in for some additional challenges. But we’re up for it, and we are very excited to move into that space this year.</p><p><strong>Strickland:</strong> Well, Chad, thank you so much for joining us on the Fixing the Future podcast. I really appreciate your time today.</p><p><strong>Bouton:</strong> Absolutely. Glad to do it, and thanks so much for talking with me.</p><p><strong>Strickland:</strong> Today on Fixing the Future, we were talking with Chad Bouton about a neural bypass to help people with paralysis move again. I’m Eliza Strickland for <em>IEEE Spectrum</em>, and I hope you’ll join us next time.</p>]]></description><pubDate>Mon, 27 Mar 2023 18:09:25 +0000</pubDate><guid>https://spectrum.ieee.org/neural-bypass-for-paralysis</guid><category>Type-podcast</category><category>Neurostimulation</category><category>Fixing-the-future</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/33293244/origin.webp"/></item><item><title>Better Carbon Sequestration With AI</title><link>https://spectrum.ieee.org/carbon-sequestration-with-ai</link><description><![CDATA[
<img src="https://spectrum.ieee.org/media-library/image.webp?id=33292594&width=980"/><br/><br/><iframe frameborder="no" height="180" scrolling="no" seamless="" src="https://share.transistor.fm/e/7ce47e23" width="100%">
</iframe><hr/><h3>Transcript</h3><p><strong>Eliza Strickland:</strong> Technology to combat climate change got a big boost this year when the US Congress passed the <a href="https://www.congress.gov/bill/117th-congress/house-bill/5376/text" target="_blank">Inflation Reduction Act</a>, which authorized more than $390 billion in spending on clean energy and climate change. One of the big winners was a technology called carbon capture and storage. I’m Eliza Strickland, a guest host for <em>IEEE Spectrum</em>’s Fixing the Future podcast. Today, I’m speaking with <a href="https://www.microsoft.com/en-us/research/people/pwitte/" target="_blank">Philip Witte of Microsoft Research</a>, who’s going to tell us about how artificial intelligence and machine learning are helping out this technology. Philip, thanks so much for joining us on the program.</p><p><strong>Philip Witte: </strong>Hi, Eliza, I’m glad to be here.</p><p><strong>Strickland:</strong> Can you just briefly tell us what you do at Microsoft Research, tell us a little bit about your position there?</p><p><strong>Witte:</strong> Sure. So I’m a researcher at Microsoft Research, and I’m working on scientific machine learning in a broader sense and high-performance computing in the cloud. And specifically, how do you apply recent advances in machine learning and HPC to carbon capture? And I’m part of a group at Microsoft that’s called Research for Industry, and we’re overall part of Microsoft Research, but we’re specifically focusing on transferring technology and computer science to solving industry problems.</p><p><strong>Strickland:</strong> And how did you start working in this area? 
Why did you think there might be real benefits of applying artificial intelligence to this tricky technology?</p><p><strong>Witte:</strong> So I’ve actually been pretty interested in this topic for a couple of years now, and then really started diving deeper into it maybe a year-and-a-half ago when Microsoft had signed a memorandum of understanding with one of the big CCS projects that is called <a href="https://norlights.com/" target="_blank">Northern Lights</a>. So Microsoft and them signed a contract to explore possibilities of how Microsoft can support the Northern Lights project as a technology partner.</p><p><strong>Strickland:</strong> So we’ll get into some of these super tech details in a little bit. But before we get to those, let’s do a little basic tutorial on the climate science here. How and where can carbon dioxide be meaningfully captured, and how can it be stored, and where?</p><p><strong>Witte:</strong> So I think it’s worth pointing out that there are kind of two main technologies around carbon capture, and one is called direct air capture, where you capture CO<sub>2</sub> directly from ambient air. And the second one is what’s usually referred to as CCS, carbon capture and storage, which is carbon capture in an industrial setting where you extract or capture CO<sub>2</sub> from industrial flue gases. And the big difference is that in direct air capture, where you’re capturing CO<sub>2</sub> directly from the air, the CO<sub>2</sub> content is very low in the ambient air. It’s about 0.04 percent overall. So the big challenge of direct air capture is that you have to process a lot of air to capture a given amount of CO<sub>2</sub>. But you are actively reducing the overall amount of CO<sub>2</sub> in the air, which is why it’s also referred to as a negative emission technology. 
And then on the other hand, if you have CCS, where you’re extracting CO<sub>2</sub> from industrial flue gases, the advantage there is that the CO<sub>2</sub> content is much higher in these flue gases. It’s about 3 to 20 percent. So by processing the same amount of air using CCS, you can extract, overall, much more CO<sub>2</sub> from the atmosphere, or more accurately, prevent CO<sub>2</sub> from entering the atmosphere in the first place. So this is basically to distinguish between direct air capture and CCS.</p><p>And then for the actual capture part of CCS, there’s a bunch of different technologies to do that. And they are typically grouped into pre-combustion, post-combustion, and oxy-combustion. But the most popular one that’s mostly used in practice right now is a post-combustion process called the amine process, where essentially you have your exhaust from factories that has very high CO<sub>2</sub> content, and you bring it in contact with a liquid that has this amine chemical that binds the CO<sub>2</sub>, so you basically suck the CO<sub>2</sub> out of the air. And now you have a liquid, this amine liquid with a high CO<sub>2</sub> concentration. And because you want to be able to reuse this chemical that binds the CO<sub>2</sub>, there has to be a second step in which you now separate the CO<sub>2</sub> from this amine. And this is actually where you have to spend most of your energy, because now you have to reheat this mixture to separate the CO<sub>2</sub> and get a very high content CO<sub>2</sub> stream out that you can then store, and then you can reuse the amine. So you have to invest a lot of energy and bring it up to temperature. I think it’s about 250 to 300 degrees Fahrenheit. 
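The concentration figures Witte quotes (about 0.04 percent CO2 in ambient air versus 3 to 20 percent in flue gas) imply very different amounts of gas to process. Here is a rough back-of-envelope sketch of my own, treating the quoted percentages as mass fractions and assuming perfect capture:

```javascript
// Back-of-envelope: tonnes of gas that must be processed to obtain
// one tonne of CO2, treating the quoted concentrations as mass
// fractions and assuming perfect capture (an illustration only).
const gasNeededTonnes = (co2Fraction, targetTonnes = 1.0) =>
  targetTonnes / co2Fraction;

console.log(gasNeededTonnes(0.0004)); // direct air capture (~0.04%): ≈ 2500
console.log(gasNeededTonnes(0.03));   // flue gas, low end (~3%): ≈ 33.3
console.log(gasNeededTonnes(0.2));    // flue gas, high end (~20%): ≈ 5
```

So, under these simplifying assumptions, direct air capture must handle on the order of 500 times more gas than capture from a concentrated flue stream, which is the trade-off Witte is describing.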
And once you have extracted the CO<sub>2</sub>, you have to compress it so that you can store it in the next step.</p><p>And then in between the capture and the storage, you have, of course, the transportation, because usually you have to transport the CO<sub>2</sub> from wherever you captured it to where you can store it. The most common ways to transport the CO<sub>2</sub> are either in pipelines or in vessels. And then in the final step, when we actually want to store the CO<sub>2</sub>, there are different possibilities for storage that have been explored in the past. People have even looked at storing CO<sub>2</sub> at the bottom of the ocean, though we’ve kind of moved away from that idea now. I don’t think anybody’s really considering that anymore. People have also looked at storing CO<sub>2</sub> in old mineshafts. And the approaches that are most seriously looked at now, or already used in practice, are storing CO<sub>2</sub> in old, depleted oil and gas reservoirs or in deep saltwater aquifers that are a couple of kilometers below the surface. The important factors when you look at storage sites and where you should store CO<sub>2</sub> are that, first of all, you have to have a large enough volume, so that it’s impactful and you can store enough CO<sub>2</sub> there. Obviously, it has to be safe. Once you store the CO<sub>2</sub> there, you want to make sure that it actually stays where you injected it. And then just as important is the cost factor: if you cannot store it cost-effectively, then it’s just not going to be used in practice. 
So like I said, these depleted oil and gas reservoirs and these deep saline aquifers are right now the storage sites that pretty much satisfy these three requirements.</p><p><strong>Strickland: </strong>And as I understand it, carbon capture and storage is looked on as a useful technology for this transition because it can help society move away from fossil fuels. Power plants that run on gas and coal and factories that use fossil fuels can keep going for a little while, but if we can capture their emissions, then they’re not adding to our climate change problem. Is that how you think about it?</p><p><strong>Witte: </strong>I think so. There are a few areas, like, for example, the power grid, where we have a good understanding of how we can actually decarbonize, because a lot of it now is still using coal and natural gas, but we have a path towards carbon-neutral energy using nuclear power plants and renewable energies, of course. But then there are other areas where the answer is maybe not that obvious. For example, you release a lot of CO<sub>2</sub> in steel production or petrochemical production or cement production and construction. So in all these areas where we don’t really have a very good alternative at the moment, you could become carbon neutral or carbon negative by using CCS technology. And then I guess another reason why CCS is considered one of the main options is just that it’s very mature in terms of technology, because the underlying technology behind carbon capture itself and CCS actually dates back to the 1930s, when they developed this process that I just described to capture the CO<sub>2</sub>. And carbon capture, as part of other industrial processes, has been used extensively since the 1970s. That’s why we have this whole network of pipelines that you could use to transport CO<sub>2</sub>. So I mean, in terms of technology, we have a really good understanding of how CCS works. 
That’s why a lot of people are looking at this as one possible technology. But of course, it’s not going to solve all the problems. There’s no silver bullet, really. So eventually, it has to just be part of a whole bigger package for climate change mitigation.</p><p><strong>Strickland:</strong> And it’s going to have to be part of the package at pretty enormous scale, right? What volume of carbon could we be potentially storing below ground in decades to come?</p><p><strong>Witte:</strong> I have some numbers that I got from listening to a talk from Philip Ringrose, who is one of the leading CCS experts. Roughly, we are releasing about 40 gigatons of CO<sub>2</sub> into the atmosphere every year worldwide. And then one of the first commercial CCS projects that is currently being deployed is the Northern Lights project. And at the Northern Lights project, they’re looking at storing about 1.5 megatons initially, and then 3.5 megatons at a later stage. So if you take these numbers and you look at the overall global release of CO<sub>2</sub>, you would have to have roughly 10,000-ish Northern Lights projects, 10,000 to 20,000 CO<sub>2</sub> injection wells. So if you hear that, you might think, “Wow, that’s really a lot. 10,000 to 20,000 projects. How would we ever be able to do that?” But I think you really need to put that into perspective as well. Just look, for example, at how many wells we have for oil and gas production in the US alone: I think in 2014, it was roughly 1 million active wells, and in that year alone, they drilled an additional 33,000 new wells. So in that perspective, 10,000 to 20,000 wells, only for CCS, doesn’t sound that bad. It’s actually quite doable. But you’re not going to be able to capture all the CO<sub>2</sub> emissions only with CCS. It’s just going to be part of it.</p><p><strong>Strickland: </strong>So how can artificial intelligence systems be helpful in this mammoth undertaking? 
Are you working on simulating how the carbon dioxide flows beneath the surface or trying to find the best spots to put it?</p><p><strong>Witte: </strong>Overall, you can apply AI to all three main components of CCS: the capture part, the transport part, and the storage part. I’m focusing mainly on the storage part and the monitoring. For that, there are essentially three main questions that you have to answer before you can do anything. Where can I store the CO<sub>2</sub>? How much CO<sub>2</sub> can I store, and how much can I inject at a time? And then, is it safe, and can I do it cost-efficiently? In order to answer these questions, what you have to do is run these so-called reservoir simulations, where you have a numerical simulator that predicts how the CO<sub>2</sub> behaves during injection and after injection. And the challenge of these reservoir simulations is that, first of all, they’re computationally very expensive. These are big simulations that run on high-performance computing clusters for many hours, or even days. And then the second really big challenge is that you have to have a model of what the earth looks like so that you can simulate it. Specifically for reservoir simulation, you have to know what the permeability is like, what the porosity is like, and what the different geological layers look like. And obviously, you can’t directly look into the subsurface. So the only information that we do have comes from drilling wells, and in CCS projects, you usually don’t have very many wells; there might only be one or two.</p><p>And then the second source of information is basically remote sensing, something like seismic imaging, where you get an image of the subsurface, but it’s not super accurate. But using this very sparse data from wells and seismic data and some additional sources, you build up this model of what the subsurface might look like, and then you can run your simulation. 
And the simulation is very accurate in the sense that if you give it a model, it’s going to give you a very accurate answer of what happens for that model. But like I said, the problem is that the model is very inaccurate. So over time, you have to adjust that model and kind of tweak the different inputs so that it actually explains what’s really happening in practice. So one of the big challenges there is that you want to be able to run a lot of these simulations, always changing the input a little bit to see if you get the answer that you would expect.</p><p>So where we see the role of AI helping out is, on the one hand, providing a way to simulate much faster than with conventional methods. Because like I said, the conventional methods are very generic, but oftentimes, I sort of have an idea of what the subsurface looks like; I only want to tweak it a little bit here and there, which is where we think that AI might be helpful. Because you have a lot of data from just running the simulations, and now you can use that simulated data to train a surrogate model for the simulator. And you might be able to evaluate that surrogate model much, much faster, and then use it in downstream applications like optimization or uncertainty quantification to eventually answer the three questions that I initially mentioned.</p><p><strong>Strickland: </strong>So you’re talking about using simulated data to train the model. How then do you check it against reality if you’re starting with simulated data?</p><p><strong>Witte:</strong> With the simulated data, you would still have to go through the same process of matching the simulated data to the data that you measure when you’re out in the field. 
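The matching loop Witte describes, running a simulator repeatedly while nudging an uncertain input until its output agrees with field measurements, is often called history matching, and its skeleton can be sketched in a few lines. Everything below is hypothetical: the one-parameter "simulator" is a stand-in for a real reservoir code, and the observed pressures are invented for illustration.

```python
# Toy history matching: tweak an uncertain model input (here, permeability) until
# the simulated response matches "observed" field data. The simulator below is a
# hypothetical stand-in for a real reservoir code; the observations are invented.

def simulate_pressure(permeability, timesteps=5):
    """Stand-in simulator: injection pressure falls as permeability rises."""
    return [100.0 / permeability + 2.0 * t for t in range(timesteps)]

def misfit(simulated, observed):
    """Sum-of-squares mismatch between simulated and observed data."""
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

# Pretend these pressures were measured downhole; the "true" permeability is 2.0.
observed = simulate_pressure(2.0)

# Grid search over candidate permeabilities: run the simulator for each guess,
# keep the one whose output best explains the measurements.
candidates = [k / 10.0 for k in range(5, 41)]  # 0.5 ... 4.0
best = min(candidates, key=lambda k: misfit(simulate_pressure(k), observed))
print(f"Best-matching permeability: {best}")  # recovers 2.0
```

The cost Witte points to is visible even here: every candidate requires a full simulator run, which is exactly what a fast learned surrogate would replace.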
For example, in a CCS project, the CO<sub>2</sub> injection wells have all kinds of instruments at the bottom that measure, for example, pressure and temperature. And then you have these seismic surveys that you run during injection and after injection, and you can get an image, for example, of where the CO<sub>2</sub> is after you inject it. So you have a rough idea of where the CO<sub>2</sub> plume is, and now you can run your simulations and, again, change the inputs so that the CO<sub>2</sub> plume that you simulate actually matches the one that you observe in the seismic data, or matches the information from your well logs. That’s something that’s often done by hand, which is very time-consuming. And the hope for machine learning is that you can not only make it faster but also maybe automate some of these things.</p><p><strong>Strickland: </strong>You’re using a type of neural network called Fourier Neural Operators in this work, which seem to be particularly useful in physics for modeling things like fluid flows. Can you tell us a little bit about what Fourier Neural Operators are, what kind of inputs they use, and what the benefit of using them is?</p><p><strong>Witte:</strong> Fourier Neural Operators are a kind of neural network that was designed for solving partial differential equations. The original work was done by Anima Anandkumar; her PhD student, Zongyi Li; and I think Andrew Stuart from Caltech was also involved. And the idea is that you simulate training data using a numerical simulator, where you have a bunch of different inputs. That could be, for example, the earth model: What does the earth look like? And then your simulator output would be how the CO<sub>2</sub> behaves over time. You have many different inputs, and then typically, you train this in a supervised fashion, where you now have thousands of training pairs. And then you would train, for example, a Fourier Neural Operator to simulate the CO<sub>2</sub> for a given input. 
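The core trick inside a Fourier Neural Operator can be sketched without a deep-learning framework: transform the input field to the frequency domain, apply learned weights to only the lowest few Fourier modes, and transform back. The sketch below shows a single 1-D spectral layer's forward pass with random, untrained weights; a real FNO stacks several such layers with pointwise nonlinearities and trains the weights on simulator input-output pairs like the ones Witte describes.

```python
import numpy as np

# One 1-D "spectral convolution" layer, the building block of a Fourier Neural
# Operator: FFT -> weight the lowest Fourier modes -> inverse FFT. The weights
# here are random stand-ins for what supervised training would learn.

rng = np.random.default_rng(0)

def spectral_layer(x, weights, n_modes):
    """Apply complex weights to the first `n_modes` Fourier modes of x."""
    x_hat = np.fft.rfft(x)                          # to frequency domain
    out_hat = np.zeros_like(x_hat)
    out_hat[:n_modes] = weights * x_hat[:n_modes]   # keep/weight low modes only
    return np.fft.irfft(out_hat, n=len(x))          # back to physical domain

n_grid, n_modes = 64, 8
weights = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)

x = np.sin(2 * np.pi * np.arange(n_grid) / n_grid)  # toy input field
y = spectral_layer(x, weights, n_modes)

print(y.shape)  # same grid size as the input: (64,)
```

Because the layer acts mode by mode in Fourier space, a trained operator can be evaluated on any grid resolution, which is part of why FNOs make attractive surrogates for expensive PDE simulators.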
And then you can use that in these downstream applications that require a lot of these simulations.</p><p><strong>Strickland: </strong>Okay. So to bring this back to the physical world, what happens if carbon dioxide that’s injected into a subsurface aquifer or something like that doesn’t stay put? Is there a safety problem? Could it potentially cause earth tremors, or would it just negate the effect of putting CO<sub>2</sub> underground?</p><p><strong>Witte: </strong>There’s definitely a risk. It’s not risk-free. But I initially overestimated the risks, because the mental picture that I had was that there’s a big, empty space in the subsurface: You inject CO<sub>2</sub> as a gas, and then you only need the tiniest leak somewhere and all the CO<sub>2</sub> is going to come back out. But when you actually inject the CO<sub>2</sub>, it’s not a gas anymore, because you have it under very high pressure and very high temperature, so it’s more like a liquid. It’s not an actual liquid; it’s called a supercritical state, but essentially, it behaves like a liquid. Philip Ringrose said, “Think of it as olive oil.” And then the second aspect is that the subsurface where you store it is not an empty space. It’s more like a sponge, a very porous medium that absorbs the CO<sub>2</sub>. So overall, you have these different mechanisms, chemical and mechanical mechanisms, that trap the CO<sub>2</sub>, and they’re all additive. So one mechanism is what’s called structural trapping: if you inject CO<sub>2</sub>, for example, into these saltwater aquifers, the CO<sub>2</sub> rises up because it has a lower density than the salt water, and so you need a good geological seal that traps the CO<sub>2</sub>. You can think of it maybe as an inverted bowl in the subsurface, where the CO<sub>2</sub> is going to go up but is going to be trapped by the seal. 
So that’s called structural trapping, and that’s very important, especially during the early project phases. But yes, you have these different trapping mechanisms that are additive, which means that generally, even if you did have a leak, the CO<sub>2</sub> would not all come out at the same time. It would be very, very slow. And in CCS projects, they have instruments that measure the CO<sub>2</sub> content, for example, so that you could very quickly detect that.</p><p><strong>Strickland:</strong> And can you talk a bit more about the Northern Lights project and tell us about its current status and what you’re working on next to help that project move forward?</p><p><strong>Witte:</strong> Yeah, so Northern Lights describes itself as the world’s first open-source CO<sub>2</sub> transport and storage project. It doesn’t mean open-source in the software sense. What it means in this case is that they essentially offer carbon capture and storage as a service. If you’re a client, for example, a steel factory, and you install CCS technology to capture the carbon, you can now sell it to Northern Lights, and they will send a vessel, pick up the CO<sub>2</sub>, and then store it permanently using geological storage. So the idea is that Northern Lights builds the transportation and storage infrastructure and then sells that as a service to companies. I think the first client that they signed a contract with is a Dutch petrochemical company called Yara Sluiskil.</p><p><strong>Strickland: </strong>And to be sure I understand, you said that the companies that are generating the CO<sub>2</sub> are selling the CO<sub>2</sub> to the Northern Lights project, or is it the other way around?</p><p><strong>Witte:</strong> The way I think about it is more that they pay for the service: Northern Lights picks up the CO<sub>2</sub> and then stores it for them.</p><p><strong>Strickland:</strong> And one last question. 
If I remember right, Microsoft was really emphasizing open source for this research. What exactly is open source here?</p><p><strong>Witte: </strong>So the training datasets that we create, we’re planning to make those open source, along with the code to generate the datasets as well as the code to train the models. I’m actually currently working on open-sourcing that, and I think by the time this interview comes out, hopefully it will already be open source, and you should be able to find it at the Microsoft Research industry website. But yeah, we really want to emphasize the open-source nature of not just CCS itself but the technology and the monitoring part, because I think in order for the public to accept CCS and have confidence that it works and that it’s safe, you have to have accountability, and you have to be able to put that data out there, for example, the monitoring data, as well as the software. Traditionally, in oil and gas exploration, the companies keep the data and also the codes to run simulations and do monitoring very close to the chest. There’s not a whole lot of open-source data or code. And luckily, with CCS, we already see that changing. Companies like Northern Lights are actually putting their data on the web as open-source material for people to use. But of course, the data is only part of the story. You also need to be able to do something with that data, to process it in the cloud using HPC and AI. And so we work really hard on making some of these components accessible, and that includes not only the AI models but also, for example, APIs to process data in the cloud using HPC. But eventually, we’re really hoping that once we have all the data and the codes available, it will really help the overall community to accelerate innovation and build on top of these tools and datasets.</p><p><strong>Strickland: </strong>And that’s a really good place to end. 
Philip, thank you so much for joining us today on <em>Fixing the Future</em>. I really appreciate it.</p><p><strong>Witte: </strong>Yeah, thanks, Eliza. I really enjoyed the conversation.</p><p><strong>Strickland: </strong>Today on <em>Fixing the Future</em>, we were talking with Philip Witte about using AI to help with carbon capture and storage. I’m Eliza Strickland for <em>IEEE Spectrum</em>, and I hope you’ll join us next time.</p><div></div><div>
</div>]]></description><pubDate>Mon, 27 Mar 2023 18:08:06 +0000</pubDate><guid>https://spectrum.ieee.org/carbon-sequestration-with-ai</guid><category>Climate-tech</category><category>Microsoft</category><category>Climate-change</category><category>Ai</category><category>Type-podcast</category><category>Fixing-the-future</category><category>Carbon-sequestration</category><dc:creator>Eliza Strickland</dc:creator><media:content medium="image" type="image/jpeg" url="https://assets.rbl.ms/33292594/origin.webp"/></item></channel></rss>