<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

	<channel>
		<title>MIT Sloan Management Review</title>
		<atom:link href="http://sloanreview.mit.edu/feed/" rel="self" type="application/rss+xml"/>
		<link>https://sloanreview.mit.edu</link>
		<description>Sustainable Innovation</description>
		<lastBuildDate>Tue, 05 May 2026 16:39:10 +0000</lastBuildDate>
		<language>en-US</language>
				<sy:updatePeriod>hourly</sy:updatePeriod>
				<sy:updateFrequency>1</sy:updateFrequency>
		<generator>https://wordpress.org/?v=6.9.4</generator>
			<item>
				<title>Behind the AI in the Newsroom: The Washington Post’s Vineet Khosla</title>
				<link>https://sloanreview.mit.edu/audio/behind-the-ai-in-the-newsroom-the-washington-posts-vineet-khosla/</link>
				<comments>https://sloanreview.mit.edu/audio/behind-the-ai-in-the-newsroom-the-washington-posts-vineet-khosla/#respond</comments>
				<pubDate>Tue, 05 May 2026 11:00:47 +0000</pubDate>
				<dc:creator><![CDATA[Sam Ransbotham. <p><cite>Me, Myself, and AI</cite> is a podcast produced by <cite>MIT Sloan Management Review</cite> and hosted by Sam Ransbotham. It is engineered by David Lishansky and produced by Allison Ryder.</p>
<p><a href="https://sloanreview.mit.edu/sam-ransbotham/">Sam Ransbotham</a> is a professor in the information systems department at the Carroll School of Management at Boston College, as well as guest editor for <cite>MIT Sloan Management Review</cite>’s Artificial Intelligence and Business Strategy Big Ideas initiative.</p>
]]></dc:creator>

						<category><![CDATA[App and Software Developers]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Information Sharing]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Customers]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[New Product Development]]></category>

				<description><![CDATA[In this episode of Me, Myself, and AI, host Sam Ransbotham speaks with Vineet Khosla, CTO of The Washington Post, about how AI is reshaping the way news is produced, delivered, and consumed. Vineet argues that journalism itself isn’t broken — but the formats people use to consume news are rapidly evolving, especially as audiences [&#8230;]]]></description>
								<content:encoded><![CDATA[
<p>In this episode of <cite>Me, Myself, and AI</cite>, host Sam Ransbotham speaks with Vineet Khosla, CTO of <cite>The Washington Post</cite>, about how AI is reshaping the way news is produced, delivered, and consumed. Vineet argues that journalism itself isn’t broken — but the formats people use to consume news are rapidly evolving, especially as audiences increasingly interact with information through AI. The conversation explores how the <cite>Post</cite> is experimenting with personalized AI podcasts, AI-powered research tools for journalists, and conversational news experiences that help readers understand not just what happened but why it matters and how it connects to other world events. </p>
<p>Behind the scenes, the <cite>Post</cite> is deploying artificial intelligence across the entire organization, and Vineet shares details about the organization’s “AI everywhere” philosophy.</p>
<aside class="callout-info">
<p><img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/MMAI-S12-EX-Khosla-WashingtonPostAds-headshot-600.jpg" alt="Vineet Khosla"/></p>
<h4>Vineet Khosla, <cite>The Washington Post</cite></h4>
<p>Vineet Khosla, chief technology officer at <cite>The Washington Post</cite>, is a renowned AI engineer whose career has been marked by groundbreaking achievements. Before joining the <cite>Post</cite> in 2023, Khosla created Uber’s global maps routing system with cutting-edge AI tools. He was the first engineering hire for Siri’s natural language engine, and as a senior AI engineer with Apple, he played a central role in developing the core natural language understanding engine and the architectural framework that allowed the virtual assistant to operate on devices.
</p>
<p>Khosla has been working with AI since 2005; he holds two patents and has published multiple white papers on the subject. He earned a master’s in artificial intelligence at the University of Georgia and a bachelor’s in computer science at Pittsburg State University.</p>
</aside>
<p>Subscribe to <cite>Me, Myself, and AI</cite> on <a href="https://podcasts.apple.com/us/podcast/me-myself-and-ai/id1533115958" target="_blank" rel="noopener">Apple Podcasts</a> or <a href="https://open.spotify.com/show/7ysPBcYtOPVgI6W5an6lup" target="_blank" rel="noopener">Spotify</a>.</p>
<h4>Transcript</h4>
<p><strong>Allison Ryder:</strong> How can AI help companies meet customers where they are, especially when their behaviors and needs evolve quickly? Find out how one news outlet turns this challenge into an opportunity on today’s episode.</p>
<p><strong>Vineet Khosla:</strong> I’m Vineet Khosla from <cite>The Washington Post</cite>, and you’re listening to <cite>Me, Myself, and AI</cite>.</p>
<p><strong>Sam Ransbotham:</strong> Welcome to <cite>Me, Myself, and AI</cite>, a podcast from <cite>MIT Sloan Management Review</cite> exploring the future of artificial intelligence. I’m Sam Ransbotham, professor of analytics at Boston College. I’ve been researching data, analytics, and AI at <cite>MIT SMR</cite> since 2014, with research articles, annual industry reports, case studies, and now 13 seasons of podcast episodes. In each episode, corporate leaders, cutting-edge researchers, and AI policy makers join us to break down what separates AI hype from AI success.</p>
<p>Hi, listeners. Today we’re joined by Vineet Khosla, chief technology officer at <cite>The Washington Post</cite>. The <cite>Post</cite> isn’t just a newsroom. It’s a giant technology machine that delivers journalism to millions of people around the world every day. And Vineet leads the teams that build those systems behind the breaking news and audience experience and security and AI, we’re hoping based on the discussion today. So we’ll talk about how technology is shaping journalism and maybe a little bit about what audiences don’t see behind the scenes, and what the future of news might look like. Vineet, thanks for being here. </p>
<p><strong>Vineet Khosla:</strong> Thanks for having me, Sam. I’ve been listening to your podcast for a while, so it’s a pleasure to be finally on the other side of it. </p>
<p><strong>Sam Ransbotham:</strong> Maybe we can talk a little bit about what happens behind the scenes with the podcast. Let’s start with something that many listeners feel. I think consuming news in our modern world can be pretty overwhelming and fragmented and tough to understand. And that may be especially true for a younger audience who [is] more raised in a different digital world than I was. So from your side, what’s maybe currently broken about how we’re experiencing news, and what needs to change? </p>
<p><strong>Vineet Khosla:</strong> The way I view it is there is not something broken about news. If we zoom out, we should think about journalism as a discipline, not a format. When you start to think about it solely as a format, it may seem broken to the younger audience. The difference is they’re just consuming it very differently than you and me. I use this example: We used to just read the news, then came radio. We heard the news, then came TV. We watched the news, then came AI. We started talking to and asking the news. In all of these changes, the consumption of news actually increased. The value of news in our society actually increased. We are just consuming it very differently at different times of the day. </p>
<p><strong>Sam Ransbotham:</strong> That consumption is a big deal. I want to know only the news that I care about. I don’t want to hear stuff I don’t care about, but I want to be aware that the stuff I don’t care about is happening. I don’t want to be in a bubble. Other industries have really struggled with this, if you think about the streaming industry and retail and music. What is personalized news going to be for <cite>The Washington Post</cite>? </p>
<p><strong>Vineet Khosla:</strong> That’s a question I’ve grappled with for the last two and a half years. I’m not from the news industry. I come from outside. So when I landed here, I realized there are two things news does that [are] very important. One is it tells us what is important in the world, and then it tells us why it is important. That’s the sense making, right? The personalized aspect is taken over by social media. They already tell you what’s important. So by the time they come to us, there are very few things we are telling them [that are] different than they already know. </p>
<p>But the “why,” that is the core value that we provide. And that’s where I think we have to have a balance of [personalization] — you need to be data-driven, but you need to use your data almost like a compass, not a GPS. It is still the onus of the newsroom, a responsible ethical newsroom with journalistic standards, to make sure the news we give out to people is not so personalized that it becomes an echo chamber and a reinforcement of their beliefs. </p>
<p>It’s a hard thing to balance, because we understand looking at Big Tech outside, if you go deeply personalized, you will have [an] audience, you will have clicks, you will have money, you will have revenue. For our industry to balance both of these — meet the consumer where they are, give them the news they actually need, don’t give them too much when they’re not ready for it, but at the same time, make sure we are being very even and our perspective and our opinion is coming through — is very important. </p>
<p><strong>Sam Ransbotham:</strong> I think what you’re describing is a really difficult Goldilocks problem, which is you want to do enough but not too much. It’s not too hot, not too soft, just right. We want to know about the whole wide world that’s going on, but we also care about opinions that are closer to what my prior opinion was. I try to be pretty active about keeping news sources in my life that I dislike intensely. </p>
<p>How do you maintain journalistic integrity in that process then, when you’re choosing … the kinds of things that you focus on and don’t? This has been going on for years, so this is not a new problem. </p>
<p><strong>Vineet Khosla:</strong> I think it’s a multifaceted problem. First it actually starts with the newsroom. I do believe our newsroom, with its standards and the way they do reporting, they’re trying to put a very fair perspective out. What you will see if you come to our application is there are actually many different ways to consume [news]. You can read it. You can listen to it. We just started an AI podcast, where the AI chooses some articles that you might be interested in and turns it into a podcast. You have the option of going to the homepage, which is edited by our editors. This is the expert perspective on what is happening [in the world]. You can go to the “For you” tab and just read personalized news. </p>
<p>So from our side, what we ensure is we give you many options, and we educate you with good products and design [for] why these options exist. Hopefully somewhere between that, you get out of your echo chamber. </p>
<p>Now we want to go beyond that too. If you go to our homepage, you will see an old-style ticker at the bottom of our WashingtonPost.com, where we are letting other news organizations [show] what they’re putting on their homepage, almost for free, on our site to say, “Hey, these are other things that are happening,” because it’s quite possible we’re not going to cover everything in every perspective and to keep extending the service to the nation. I really think we need to, as a news company, try and give value to everyone’s life as much as possible. </p>
<p>We recently started something called Ripple. So it’s <a href="https://www.washingtonpost.com/ripple/" target="_blank">WashingtonPost.com/ripple</a>, where we are going to opinion sections across America and trying to bring their content, [through] partnerships with them, to our consumers, to our users. It’s a hard problem, but you do need people who are solving it, and you also need people on the other side who want it to be solved, people like you. </p>
<p><strong>Sam Ransbotham:</strong> That’s a really fascinating idea, the idea of trying to surface those ripples from lots of different places. Let’s be frank: You’re not going to be perfect at doing that, but I think that’s inevitably part of the process. The cost of not doing it is probably more extreme than the cost of making some algorithmic problems there. </p>
<p>I know you’ve had trouble with the podcast in terms of personalization and trying to get that extreme personalization. Can you share with us a bit about how that project has gone? </p>
<p><strong>Vineet Khosla:</strong> We realized there is a market need in the middle of [heavily] curated editorial podcasts. I almost view them as expert opinions. These are the experts of our company who are saying, “These are the important things you need to know” versus “Sometimes [these] things are not important to the world, but they’re important to me.” I’ll give you one very good example [that] really made me a fan of this product. </p>
<p>[Do] you remember when the Texas redistricting fight was happening, and there [were] a lot of court cases going on? At the same time in India, there were elections happening in the state of Bihar. We covered these two stories, and somehow the podcast, given my interest, talked about the redistricting, the laws, and how the party in power over there is trying to hold on to the votes. And then it contrasted with the elections of Bihar, where some of this might have already happened in the past, and therefore the party that’s winning is banking on the wins coming from those types of redistricting efforts. … Neurons fired in my brain, Sam. I’m like, “Whoa. This is so interesting. I have seen this side in India, and I see what’s happening in Texas. I kind of don’t like it, but thank you for showing me these two [stories].”</p>
<p>Now if you imagine an expert’s view, to 99% of [the] population of America, that second story is not relevant. And even if they’re interested in it, it’s not really going to fire the neurons in their brains the way this podcast did for me. I think that is the gap we are trying to really hit with personalized podcasts. It’s because this is all based on our reporting; this is all factual stuff we did at <cite>The Washington Post</cite>. We did it because we think this is important for the world to know. </p>
<p>We worked very closely with our newsroom. We tested it very well. And yes, it’s not going to be perfect. It made a few mistakes. Once we launched it, we made sure when we presented it to our consumers, with our design, with the disclaimers, with the warnings, [that] they [understood] that this is a beta experimental product. They understood that there would be mistakes that happen, and we were all as a team watching it very closely. </p>
<p>In terms of technical [issues,] one thing we realized was it has a lot of trouble when you have a lot of third-person references in an article. Let’s say it says, “Vineet said this, and Jennifer said that,” and the following sentences [include] “he” and then “she.” To us, it’s immediately clear who the he is and who the she is. To AI, it might not be. Once we started figuring out those types of problems, we really went back, changed our scripts, changed our prompts. [We] made sure we didn’t change the writing of the article. We just made sure on the AI side [that] we have a way of solving this problem. And the proof of that is we have published over 100,000 personalized podcasts by now. The completion rate of these podcasts is actually higher than the completion rate of [the] normal podcasts that we publish. </p>
<p><strong>Sam Ransbotham:</strong> That’s a beautiful example because it’s going to connect some things, it’s going to miss some things, but maybe when it does, it’s going to be amazing. One of the enduring themes of our show seems to be this exact idea of improvement. One of our early podcast guests mentioned the idea that <a href="https://sloanreview.mit.edu/audio/the-first-day-is-the-worst-day-dhls-gina-chung-on-how-ai-improves-over-time">the first day is the worst day</a>. So when you put this experiment out, you’re going to discover some stuff, like the pronoun problem you mentioned, and how it’s obvious to us which story connects to which one. But you’re going to fix those, and it’ll keep improving. </p>
<p>What’s your plan for this product, for this personalized podcast? I’m already quite jealous [of your 100,000 episodes]. I think we’re just over a hundred, and it’s been exhausting. </p>
<p><strong>Vineet Khosla:</strong> Well, I don’t think it replaces the experts. You know, 100 is a lot of work, [and] 100,000 is still a lot of work on the team [that’s] building it because we review problems that … come in. So the work happens, I guess, on [a] different side. For us it happens on the QA side. </p>
<p>But I would zoom out of personalized podcasts and maybe talk more about the AI efforts we are doing over here. And then it would all make sense, right? The way we are viewing AI in our company is we call it “AI everywhere.” It’s an “AI everywhere” approach where we want it in the production of the news. There’s so much [generative AI] can do. </p>
<p>We have a tool called Haystacker, which can go through hours and hours of videos. In what would take people weeks, now our journalists can go and say, “I want to find that person with [the] red cap,” [and the AI goes] through Jan. 6 riot videos and gets that type of information. </p>
<p>You have probably heard all about how big data sets are now no longer a thing journalists fear anymore. They don’t have to manually read it. They can really ask it intelligent questions. So we’re building a lot of tools internally for that side. So that’s one big pillar, [using] AI to help the core mission we have of journalism. </p>
<p>The second … is consumer facing. That’s where [our] AI podcast, “Ask the Post AI,” [and story] summaries … come in. In the case of the AI revolution, I feel like the audience moved before we moved, right? When there was an internet revolution, people had to go buy computers, they had to learn it, they had to get on the web browsers, and then the newsrooms moved to a website. In the world of AI, the audience went overnight. </p>
<p><strong>Sam Ransbotham:</strong> I want to push back a little bit on this Haystacker. I really like that name. What you’re saying is “Hey, you want to go through that haystack and do it with artificial intelligence, and find all those needles.” It’s certainly true that we’ve got a lot more content in the world to go through. It’s staggering the amount of things that are happening. We’re getting a lot more content. Are there more needles in that content? Or is there better discovery of the existing needles, or is a lot of the hay that you’re sifting through just a lot of left-tailed junk? Does that make sense? </p>
<p>When I think about a haystack, I think, “OK, let’s grow the whole pile, and when we grow the whole pile, we’ll have more needles because we’ve got more hay.” But we may just be hiding those other needles better.</p>
<p><strong>Vineet Khosla:</strong> Both things are right. So let me start [with] the Haystacker project. The name came [from] we are finding a needle in a haystack because we actually already had a haystack. Somebody gave a reporter a lot of videos. Somebody gave a reporter a lot of data and said, “Hey, something’s going on over here,” and it would take them two, three weeks to go through it. So we just help them. We are helping them find that needle instead of them watching it frame by frame. So that’s really the origination of this tool. And this is one of the many tools. A lot of news companies are building these tools. </p>
<p>But going back to your bigger question [about] there is a whole lot more data, and most of it is not interesting. We don’t think it is the job of AI to find all those interesting things and serve them to you without a journalist involved in the middle. So the journalist is usually [using] their instinct, asking questions, trying to find more out of it. And I’m sure you can get to a world where you have really curated data sources. You can take Department of Labor reports out, right? And our journalists use those reports, and they create stories out of [them]. </p>
<p>So when you go to “Ask the Post,” and you say, “Hey, what was the unemployment rate in 2013 in [the] agriculture sector?” we may or may not have written about it in a news article. But if [we have access to] one of those data sources that our journalists trust and use, I think it’s fair to use it and give the answer to the question. But once again, there is a newsroom in the loop, like that verification of data. And I think that makes for a little bit higher quality than the general-purpose internet, you know, hoovering ask engines. They have their own place; I’m not taking a dig at them. I’m just saying there’s a different place for that, and what we are trying to build over here in <cite>The Washington Post</cite> is if you are in the market for trusted news and journalism, and you want some verified facts and have confidence, you should start with us. </p>
<p><strong>Sam Ransbotham:</strong> Let’s tie back to how you started this process. You started talking about why. And right now that why has to be part of that; otherwise, like you say, that’s a sharp contrast between the useful search engines, which produce a list but do not produce the why. As I say that, though, I think about modern search technology, and it seems to be trying to use artificial intelligence to move toward more of a why and more explanation. But you were pretty clear about the role of your journalists in this process. </p>
<p>So maybe expand a little bit on that. Where are you automating? What absolutely requires human judgment? How are you figuring out where those lines are? We could talk about individual examples, but what’s the process for figuring out how to decide? </p>
<p><strong>Vineet Khosla:</strong> It goes back to AI governance and policies around how we are using AI in the company. We broke it down into three parts. The easiest one I’ll talk [about] first is infosec. We got our infosec team involved, and we said, “Listen, you need to tell us how to not mess it up really bad. You need to tell us what’s happening on the bubble in terms of security and put a policy out.” [This] is easier for us because we are using a [large language model] that we are hosting on a private instance. </p>
<p>Then comes the newsroom aspect: The newsroom and the journalist sat down, and they’ve decided for themselves how they want AI to show up in the work they do — how they will use it, how they will attribute to it, what are the do’s and don’ts. </p>
<p>And then the third aspect is the consumer. This is the tricky aspect because this is what you typically think of as a product, and the approach we have taken is using good design. We want to always inform our consumers, our audience what they are consuming, how much of this is from AI. And it’s a spectrum, right? </p>
<p>Let’s take the example of summaries. We still label AI summaries — “this is an AI summary” — but the way I see people use it and the number of people who are actually looking at the disclaimer or giving us a thumbs-down button on it because they didn’t like it, it’s moving down. It’s almost to the point that nobody is shocked that we have an AI summary, and none of the users are bothered about it. But I’m pretty sure if we put a full AI-generated video — which we haven’t done so far, and we don’t plan on — we would put stronger disclaimers. </p>
<p>So at a product level, we want to lean on design and consumer behavior to make sure we are always informing them when they are using something [that] is AI or not. </p>
<p><strong>Sam Ransbotham:</strong> Let’s jump forward though. If we were sitting here together in a decade, you’ve got to be thinking about the direction that the news experience is going. And you’ve mentioned the read the news, listen to the news, watch the news progression that’s happened. You’ve thought about this a lot. Tell me what you think is going to happen in the next decade or so.</p>
<p><strong>Vineet Khosla:</strong> If I was that smart, Sam. … </p>
<p><strong>Sam Ransbotham:</strong> You wouldn’t be talking to me?</p>
<p><strong>Vineet Khosla:</strong> I would be somewhere in New York in the hedge fund business, making my bets. </p>
<p><strong>Sam Ransbotham:</strong> OK, we can go shorter. Maybe you can give us a little hint about next month, and we can try to expand from that. </p>
<p><strong>Vineet Khosla:</strong> I do sincerely believe the need for news and quality news has never been more. Journalism is a discipline, not just a format. We need to keep adapting our journalism to different formats, use technology where it can help us. And that’s what we intend to keep doing at <cite>The Washington Post</cite>. </p>
<p>You will start to also probably hear … the ideas around liquid content. Think about the content the way we do. Typically news lasts 24 hours, right? After 24 hours, every newsroom will tell you the story dropped off. They take it off the homepage, people stop talking [about] it. You do a deep investigative piece, maybe [it lasts] seven days. We will pin it somewhere, people will share it, it will have longer legs. But no matter what, after that, it just drops off.</p>
<p>I see a world where people’s curiosity drives the news. News can literally live in infinite forms for a long period of time because somebody could come back and start asking [a] bunch of questions. They could start asking questions, or they could say, “Can you help me write up a report on the change in [Immigration and Customs Enforcement] tactics between [Washington, D.C.] and Minnesota? I really want to understand what was happening in the world at that time [when] it became more violent than it used to be in the past.” I do think this unlocks more news. It actually grows the market more than [the initial] fear of shrinking. And that’s always the fear, right? </p>
<p>When a new technology comes, [there is] first a very genuine fear of shrinking. I don’t want to deny that. Honestly, as an engineer, I see what Claude Code has done in the last two months, and I’m like, “Whoa, there goes my backup career choice. I guess I’m not going to be a super short Java programmer anymore.” But once you get past the fear, I think this grows. AI helps us grow. As long as people and their curiosity and the need to get verified news, information, facts exist, this is going to be good. So that’s the bear. What do you call it in the stock market — the positive side? </p>
<p><strong>Sam Ransbotham:</strong> You [need to know] that if you’re going to switch to hedge funds. </p>
<p><strong>Vineet Khosla:</strong> Bull is positive. Bear is negative. As you’re realizing, my future career choices are quite limited. </p>
<p><strong>Sam Ransbotham:</strong> You better stick with Java. </p>
<p><strong>Vineet Khosla:</strong> I’ll stick with Java. But I also do see there is risk around trust. When I look at the future, the thing that worries me the most is the trust of consumers used to be with the mastheads. You would read a newspaper because you trusted that there were standards and procedures and professionals. And then in our lifetime, I [saw] the trust move to creators. People started trusting creators more. They were more influenced by people on Twitter. They were more influenced by Instagram and TikTok people who were telling them the news. And I thought about it. I’m like, “What’s going on over here?” </p>
<p>One is our news did not adapt fast enough, right? That’s true. We did not meet the consumer where they are. But we as humans just generally trust other humans. We trust voice. We trust language. No matter what part of the world you are [in], if somebody speaks any other language, you know that you’re in [the] company of intelligence. </p>
<p>In fact, if I could go back to my Apple days, we had this anecdote. When Siri came, it was the first voice. It was the first voice interaction with your machines. People could talk to it. And then Apple Maps came at the same time, and we had a few incidents where we had wrong data, and people would go on dirt roads and get stuck. The consistent complaints we used to get is “Well, Siri told me to go there.” And that’s when we realized the Siri voice and the Apple voice being the same voice was actually a problem because [users] were putting more trust in it than they should. Their eyes were showing this road doesn’t exist, but they would turn right because Siri told them to. </p>
<p>So I think this is what happened to us: The trust moved from mastheads to people because naturally as humans we trust other humans a little bit more. What worries me is as these AIs become almost a better human than a creator, because they can talk back to you, they can be deeply personalized, they can understand you more than a creator does, I fear the trust will move to the AIs even more than it was with the humans. </p>
<p>Now, given that, what do we do? That’s my hypothesis. The trust to AI that people will have, the relationship we will have, will be very deep. I think the onus is on us, in the news, in the journalism world, to build equal types of experiences so the consumer doesn’t get locked in with a couple of big options that exist in the world outside. I feel hopeful when I see things like MCP protocols come out.</p>
<p><strong>Sam Ransbotham:</strong> Model context protocols. </p>
<p><strong>Vineet Khosla:</strong> Model context protocols. I see agent-to-agent conversations happening. I see enough companies out there, big tech, small tech startups, [that] are working down this path of saying, “Hey, if my agent needs news, I want to connect it with your agent so it can get the right verified news.” So I’m hopeful also, but I’m also very worried about the trust. I want to make sure it stays with people who deserve it. </p>
<p><strong>Sam Ransbotham:</strong> Actually, there are four or five things that are pretty fascinating there. One, I had not really thought about that transfer of trust between the different Siri products. … My gut reaction, my naive approach would have been to say, “Hey, that’s good that trust transfers.” But what you pointed out is that when you have two different products with different base levels of accuracy, you might not want to transfer that trust. That’s an interesting way of thinking about that. I naturally thought, “Hey, more trust is better.” But you can actually signal this is something that should not be trusted with a more robotic voice, for example. </p>
<p>You touched on Siri. Let’s back up here and talk about how you have not always been at <cite>The Washington Post</cite>. Tell us a little bit about how you got to where you are there and Siri as a part of that journey. </p>
<p><strong>Vineet Khosla:</strong> Back in my undergrad days, I got introduced to AI, and I kind of got seduced by the idea of machines doing all the work for me. I was like, “This is great. I’m going to go get a master’s in artificial intelligence, so I can just sit back and relax.” That led to my first job in the mortgage industry. We used to do these AI models for loans. If you remember, the year being 2007, when the great mortgage crisis and the financial collapse happened, my entire industry got wiped out. Turns out nobody was listening to AI when it came to loans. </p>
<p>But that one door closed and a universe opened. I was contributing some open-source code. The founders of Siri saw my code. They invited me to apply for an interview. So I went over to Silicon Valley, and then I spent the next 10 years working with them, building Siri. We were the voice-driven AI for our time, and for the longest time, until Alexa came and Google Assistant came, and that whole universe opened up. </p>
<p>[After] about 10 odd years, I took a hard right turn and I went into Uber Maps. I ran the team that was building the routing algorithms. It was a whole lot of fun. It [involved] graph search. It was hardcore computer science, right? Graph search is as computer science as you get. I really loved that stint. After doing that for about four years, LLMs came on the scene. Then I was like, “OK, I’m going back to my old world of natural language processing.” And I wanted to do something over there. </p>
<p>So I took some time off from Uber. I thought I’m going to reeducate myself. I bought some gardening tools. My wife got really worried. She’s like, “How long are you going to reeducate yourself? You have too many tools over here.” But this <cite>Washington Post</cite> opportunity came, and all the neurons in my brain fired. I said, “Listen, this revolution is all about language. It’s all about knowledge. This is what newsrooms are. They are the repository of language. They are the masters. They are the experts. They have all the knowledge and information.” And then I interviewed with <cite>The Washington Post</cite>; they are a great team. I interviewed with [owner] Jeff Bezos, and finally I was like, “Yes, this is what I want to do as my next chapter in life.” </p>
<p><strong>Sam Ransbotham:</strong> There’s a whole bunch of things to push on there. One part of that I wanted to pull on, you glossed over very quickly, was that you had made some open-source contributions, and people at Siri noticed it. And that led to [you] being involved with Siri, which led to the Apple acquisition and your involvement there. I particularly like that because I’m a very big proponent in this idea of contributing things. [When] we think about the incentive for contribution, that’s a great story for how being interested, being curious about technology and working on something, and providing evidence of that through an open-source project — there are other ways besides open-source projects, but that’s one great way — can cascade into a very interesting arc around how that developed. </p>
<p><strong>Vineet Khosla:</strong> Now that’s true. I got lucky in a lot of ways because I was doing something that people were interested in, and that opened up this opportunity. You’re very right. I do think when you’re early on in your career you should dabble with things a whole lot more [and] then become an expert in [it] because you don’t know who is looking. </p>
<p><strong>Sam Ransbotham:</strong> You say luck, though, and I do think that there’s a big part of that luck, but luck only combines well with working on something at the same time. I’ll also make the snide comment that one part of the story I’d like to gloss over is your master’s in artificial intelligence was from the University of Georgia, and I’m a Georgia Tech person, so I want to quickly gloss over that. You can have bad luck as well. </p>
<p><strong>Vineet Khosla:</strong> No, I actually do think it’s an important one. I have deep, deep respect for Georgia Tech. Of course, you have [an] amazing computer science program, robotics program, AI program. What University of Georgia was offering uniquely at that time, and still does, is its interdisciplinary program. So I studied language, I studied philosophy, I studied the theory of mind, I studied first-order logic, and then I also studied all this statistical AI, which is basically 99.99% of the AI as people understand it now. So congratulations, you guys won. </p>
<p><strong>Sam Ransbotham:</strong> One other part of that was you mentioned graph-based [work]. Why do you think that the graph-based approaches are so interesting? Why did that catch your eye?</p>
<p><strong>Vineet Khosla:</strong> Well, it was a classic routing problem. We were doing maps and routing, so you have to route over graphs and edges and nodes. Those algorithms, you studied them in school, right? That’s what caught my interest. </p>
<p>Now for Uber, there was a twist. The twist was that routing for a transit is very different — when I say mass transit, I don’t mean buses, I mean like taxis and Ubers — than personal routing. </p>
<p>We settled on a metric, which was 10 meters or 10 seconds. If your map is wrong by 10 meters, or your ETAs are wrong by 10 seconds, you don’t have a great experience. If your Uber stops 10 meters farther away than where you are, you are running to catch it. You’re putting yourself in an unsafe situation. Maybe you’re crossing the street. If you didn’t reach [it] in time, and your Uber is standing over there, maybe that guy’s getting a ticket, the traffic is backed up, the cops are on the case. </p>
<p>So for us, the level of accuracy was actually way more than what Google and Apple do. And we had to scale not linearly. With Apple and Google, the number of phones they sell is the number of map directions that will happen, while we [were] trying to balance a market. So for one rider, you would probably reach out to 100 drivers to see when they can get to them. And similarly, for 100 drivers, you reach out to 100 riders. It’s possible that the driver [who’s] closest to me is five minutes away, and the driver [who’s] closest to you is one minute away. But I might switch the order of drivers so we both get a driver in two minutes, and then the market is balanced. Otherwise, I would have canceled it because mine was five minutes away. </p>
<p>Once you start poking at [the problems], you see this is a very different routing problem. Of course, graph search and the routes and the Dijkstra [algorithm] is at the heart of it, but the layers we had to keep putting on it to get to a balanced marketplace [were] just very exciting. No one had really done that before. </p>
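<p>A toy sketch can make the trade-off Vineet describes concrete. The snippet below is an illustration only, with made-up ETA numbers, and is not a description of Uber’s actual dispatch system: it contrasts first-come, first-served matching, where each rider simply takes their nearest available driver, with a market-level assignment that minimizes the worst pickup time across riders.</p>
<pre>
# Hypothetical illustration of the marketplace-balancing idea described above.
# ETA numbers are invented; real dispatch systems are far more sophisticated.
from itertools import permutations

# eta[rider][driver] = estimated pickup time in minutes
eta = [
    [2, 5],   # rider A: driver 0 is 2 min away, driver 1 is 5 min away
    [2, 9],   # rider B: driver 0 is 2 min away, driver 1 is 9 min away
]

# First-come, first-served: rider A grabs driver 0, leaving rider B a 9-minute wait.
greedy_waits = [eta[0][0], eta[1][1]]  # [2, 9] -> worst wait is 9 minutes

def balanced_assignment(eta):
    """Try every rider-to-driver assignment and keep the one with the smallest worst-case wait."""
    best, best_worst = None, float("inf")
    for drivers in permutations(range(len(eta[0])), len(eta)):
        worst = max(eta[r][d] for r, d in enumerate(drivers))
        if worst < best_worst:
            best, best_worst = drivers, worst
    return best, best_worst

print(balanced_assignment(eta))  # ((1, 0), 5): rider A waits 5, rider B waits 2
</pre>
<p>The point is only that the objective is market-level rather than rider-level: giving one rider an individually worse match (5 minutes instead of 2) produces the better overall outcome, which is the kind of layering on top of classic shortest-path routing that Vineet describes.</p>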
<p><strong>Sam Ransbotham:</strong> That seems fun. Actually, you mentioned Dijkstra’s algorithm and these things. It makes me happy to think that these core ideas still maintain. I mean this matching problem you just described is a classic example of the generalized assignment problem. These are some root problems in operations research and in graph theory and mathematics. It’s fun to see that not everything is statistically picking the next probable word. [I’m] glad to see some of these old-school things come through and come back. </p>
<p>Vineet, this has been a fascinating look at where journalism and the technology behind it, I think, may be heading. The future of news clearly seems more personalized and more AI-powered in many ways, and more complicated in many ways. And I’m glad that you and others are working on it. Thanks so much for joining us today. </p>
<p><strong>Vineet Khosla:</strong> Thanks for having me, Sam.</p>
<p><strong>Sam Ransbotham:</strong> Thanks for listening. On our next episode, I’ll talk with Andrew Palmer, a journalist at <cite>The Economist</cite>. We’ll learn how another news outlet is thinking about AI. Please join us.</p>
<p><strong>Allison Ryder:</strong> Thanks for listening to <cite>Me, Myself, and AI</cite>. Our show is able to continue, in large part, due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/audio/behind-the-ai-in-the-newsroom-the-washington-posts-vineet-khosla/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>The Innovation Advantage GenAI Can’t Give You</title>
				<link>https://sloanreview.mit.edu/article/the-innovation-advantage-genai-cant-give-you/</link>
				<comments>https://sloanreview.mit.edu/article/the-innovation-advantage-genai-cant-give-you/#respond</comments>
				<pubDate>Mon, 04 May 2026 11:00:55 +0000</pubDate>
				<dc:creator><![CDATA[David Schonthal. <p><a href="https://www.kellogg.northwestern.edu/academics-research/faculty/schonthal_david/" target="_blank" rel="noopener noreferrer">David Schonthal</a> is a clinical professor of strategy, innovation, and entrepreneurship at Northwestern University’s Kellogg School of Management and coauthor of <cite>The Human Element: Overcoming the Resistance That Awaits New Ideas</cite> (Wiley, 2021).</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Competitive Advantage]]></category>
		<category><![CDATA[Competitive Strategy]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Innovation Management]]></category>
		<category><![CDATA[Innovation Process]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Innovation]]></category>
		<category><![CDATA[Innovation Strategy]]></category>

				<description><![CDATA[Eliot Wyatt/Ikon Images For most of modern business times, competitive advantage belonged to whoever had the best ideas. Better ideas meant better products, which meant more customers, which meant more revenue and profit. The entire innovation industry — consultancies, design firms, brainstorming retreats fueled by sticky notes and gallons of La Croix — was built [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Schonthal-1280x860-1.jpg" alt="" class="wp-image-126895"/><figcaption>
<p class="attribution">Eliot Wyatt/Ikon Images</p>
</figcaption></figure>
<p><span class="smr-leadin">For most of modern business history,</span> competitive advantage belonged to whoever had the best ideas. Better ideas meant better products, which meant more customers, which meant more revenue and profit. The entire innovation industry — consultancies, design firms, brainstorming retreats fueled by sticky notes and gallons of La Croix — was built on this premise: If you could generate more and better ideas than your competitors, you would win.</p>
<p>That advantage has been vaporized by AI. </p>
<p>Generative AI has turned ideation into a full-blown utility. Today, anyone with a $20 subscription to a GenAI tool can instantly generate 100 product concepts. That has rendered the raw material of innovation — ideas — as abundant, accessible, and cheap as electricity. And here’s the thing about electricity: Nobody competes on it. You compete on what you build with it. Which means the competitive advantage has shifted upstream, from the solution to the problem — specifically, to how you identify and <em>frame</em> the problem in the first place.</p>
<p>This is something I’ve taught for years — to executives, MBA students, and others — going back to my time as a designer at IDEO. It is called Question Zero: the question before the question. Before you ask, “How do we solve this?” you need to ask, “Are we even looking at the right problem?” The quality of innovation has always been determined by the quality of problem framing. But until recently, most organizations could get away with mediocre problem framing. Why? Because ideas were scarce enough to be valuable on their own. </p>
<p>That’s no longer the case. When everyone has access to the same idea-generation engine, the remaining edge is the insight that tells you where to point your business. GenAI won’t give you this insight, though it can surface data and patterns that help <em>you</em> see it. Let’s examine why businesses continue to frame the wrong problem, examples of startups and established businesses reframing successfully, and how to get started.</p>
<h3>Why Most Organizations Frame the Wrong Problem</h3>
<p>If problem framing is so important, why is everyone so bad at it? </p>
<p>It’s because the “best” problems — the ones that lead to the most valuable, genuinely differentiated solutions — are almost always hidden. And they’re hidden for a specific, annoying reason: The people who experience them can’t tell you about them.</p>
<p>This is something my colleague Loran Nordgren and I discuss extensively in our book, <cite><a href="https://www.humanelementbook.com/" target="_blank" rel="noopener noreferrer">The Human Element</a></cite>. Users experience friction with your product, your service, your entire category — but they can’t explain it. They know how they feel but not <em>why</em> they feel it. The friction is real. The self-awareness is nonexistent. </p>
<p>Ask a customer why they abandoned your app and they’ll likely say, “I got busy.” The real answer — the one hidden in the emotional recesses of their brain — might be that your onboarding flow made them feel like they’d accidentally wandered into an advanced calculus class. They’re not going to tell you that, because they don’t even know that’s what happened. They just know they stopped opening the app.</p>
<p>This means that the standard problem-identification toolkit — surveys, focus groups, net promoter scores, quarterly voice-of-customer decks — captures only what people can and will articulate. The bad news is that what people can and will articulate is, at best, the surface problem. Understanding the surface problem leads to incremental solutions, which, by definition, are undifferentiated. You end up competing on features, then price, then “vibes.” This is not a strategy; it’s a slow descent into commodified oblivion.</p>
<p>The deeper problem — the reframed one, the one worth solving — lives in the gap between what people <em>say</em> and what they <em>do</em>. Finding that gap has always required the kind of deep, patient observation and investigative interviewing that most organizations can’t afford or feel that they don’t have time for; it’s something that doesn’t lend itself easily to a slick 2x2 framework in a PowerPoint deck. So most companies just skip it and go straight to brainstorming, which they consider the fun part.</p>
<p>AI changes this equation. Not because it replaces human insight — AI has no insight; it has pattern recognition and a <a href="https://www.nbc.com/nbc-insider/stuart-smalley-snl-who-played-him-movie-al-franken" target="_blank" rel="noopener noreferrer">Stuart Smalley</a> tone of relentless encouragement — but because it can surface the behavioral patterns that <em>lead to</em> human insight at a scale and speed no human team can match. </p>
<p>Ultimately, then, AI is not the insight but the high-powered telescope that makes the insight visible.</p>
<h3>The Startups That Won by Reframing</h3>
<p>The clearest proof that problem reframing drives differentiation comes from startups that have broken through in a big way in the past two years — not by having better technology but by asking Question Zero about problems everyone else had framed in less original ways.</p>
<p>Take <a href="https://cursor.com/" target="_blank" rel="noopener noreferrer">Cursor</a>, an AI-powered code editor that hit $1 billion in annualized revenue and a $29 billion valuation in 2025. Every other company in the space framed the problem the same way: “How do we help developers write code faster?” GitHub Copilot was already solving that, and solving it well. But Cursor’s founders — four MIT graduates barely old enough to rent a car without extra fees — saw something different. Developers weren’t actually spending most of their time writing code. They were spending it <em>reading</em> code: navigating unfamiliar code bases and trying to understand what someone else built three years ago at 2 a.m. The bottleneck wasn’t composition. It was comprehension. </p>
<p>That reframe — from “write faster” to “understand better” — produced an entirely different product, an entirely different company, and an entirely different, much-higher-value outcome. Same market. Same underlying technology. Very different problem solved.</p>
<p>Meanwhile, <a href="https://www.speak.com/" target="_blank" rel="noopener noreferrer">Speak</a>, a language-learning app that raised $78 million and reached a $1 billion valuation in late 2024, tells the same story in a different domain. The obvious framing in the sector was “How do we teach grammar and vocabulary more effectively?” Every competitor was running that race, and Duolingo was winning by several laps. Speak’s founders reframed the challenge: “Why are people who study a language for years still terrified to open their mouths and speak it?” The answer isn’t that there’s a knowledge gap. It’s a confidence gap — the fear of sounding foolish in front of others. But nobody describes their problem that way. No language learner walks into a class and says, “I’m here because of shame.” They say they need more practice. </p>
<p>So Speak built an AI conversation partner that lets learners mangle a subjunctive without anyone grimacing at them and then provides a gentle correction. The technology is impressive. But what really made it work was the reframe. The real problem was never learning. It was the emotional friction around learning.</p>
<p>In the productivity industry, <a href="https://fireflies.ai/" target="_blank" rel="noopener noreferrer">Fireflies.ai</a> reframed a common meeting problem. When everyone was asking, “How do we make meetings shorter?” Fireflies asked, “What if the real waste isn’t the meeting itself but everything that happens <em>after</em> it?” That includes the hours spent writing summaries nobody reads, chasing action items nobody remembers, and gently reminding Kevin that he did, in fact, agree to that deadline last Tuesday. The meeting wasn’t the problem; it was the evaporation of the meeting’s output. That reframe produced a product the “shorter meetings” crowd couldn’t compete with, because even though they might have been building a truly better mousetrap, they were in the wrong room from the start.</p>
<p>In each case, these startups didn’t out-ideate the competition. They <em>out-framed</em> them. They saw the same market and found a different problem within it — one that led to a solution nobody else was creating because nobody else had seen the problem the way they had. Ideas were never the bottleneck; the originality of the problem framing was.</p>
<h3>How Established Companies Use AI to Surface the Reframe</h3>
<p>The startups mentioned above achieved innovative reframing through intuition and proximity. Established organizations can deliver the same through AI-powered behavioral observation at scale. There are multiple examples of this among some of the best-known companies. The pattern is remarkably consistent: The AI agent doesn’t generate the reframe; it surfaces the behavioral data and patterns that make the reframe possible. The human still has to have the insight, but the AI makes sure there’s something to see.</p>
<p>For example, Netflix spent years framing its core challenge as a genre problem: “What genres does this subscriber prefer?” The AI’s job was to match users to categories — perfectly reasonable but also, it turns out, a pedestrian framing of the problem. By using AI to observe behavior at scale, Netflix discovered something no focus group sessions could have surfaced: People weren’t browsing by genre. They were browsing by <em>mood</em>. </p>
<p>The difference between a Friday night with friends and a Sunday alone after a bad week isn’t an action-vs.-comedy distinction — it’s an emotional vibe. Nobody ever submitted a feature request that said, “Let me search by how I feel.” But the behavioral data was unmistakable. To capitalize on this observation, in 2025 Netflix began testing an AI-powered search that lets users describe what they’re in the mood for rather than what category they want. The reframe — from genre preference to emotional need — didn’t emerge from a product road map. It emerged from paying attention to what people actually did, at scale.</p>
<p>Another example is Duolingo’s AI system, Birdbrain, which surfaced a reframe that no curriculum designer had considered. By analyzing billions of data points across dozens of language pairs (a learner’s native language and the language being learned), Birdbrain discovered that certain combinations had dramatically higher dropout rates, but in patterns nobody had expected. Spanish speakers learning Portuguese, for instance, were more likely to stop using the app when working on lessons where the two languages were almost identical rather than where they differed: Similarity breeds overconfidence. </p>
<p>Specifically, learners cruised through lessons feeling great, acing quizzes, collecting little digital trophies — right up until they quietly stopped opening the app altogether. All that reinforcement made them feel like they had mastered the new language when in fact they would have struggled to use it in the real world. No survey would have caught this. People don’t report confidence as a problem — they report it as a virtue. </p>
<p>The old frame: “How do we make lessons more engaging?” The reframe: “Where is false confidence silently killing retention?” That second problem can lead to a fundamentally different — and better — solution, such as more subtle tests of mastery for more similar language pairs.</p>
<p>In a different consumer-focused domain, Procter & Gamble’s AI crawled parenting forums and social media and surfaced a behavioral signal no product team would have thought to look for: Parents were using <em>adult</em> skin-care products on their babies. It wasn’t because they were fans of CeraVe’s minimalist branding but because they had given up on baby-specific products entirely: They’d decided that the whole category was either ineffective or filled with chemicals they didn’t trust. </p>
<p>The old frame: “How do we make a better baby lotion?” The reframe: “Why have parents stopped believing us?” That’s not a product problem. It’s a trust problem. And the reframe changes everything: the product, the messaging, the entire go-to-market strategy. You can’t “new and improved” your way out of a credibility crisis. P&G harnessed that framing to engage with and educate parents better through tactics such as product-level personalization and real-time quality and innovation feedback loops.</p>
<p>Then there’s the most meta example of all. Anthropic, the company behind the AI model Claude, built a tool called Clio — Claude Insights and Observations — that uses AI to observe how millions of people use AI. (Yes, it built an AI to watch people talk to their AI.) </p>
<p>Clio clusters millions of conversations and surfaces behavioral patterns invisible at the individual level. It discovered, for example, that Japanese users disproportionately discuss eldercare — a cultural trend and signal observable only at scale. Additionally, it found that users in crisis arrive through specific conversational pathways that single-message safety filters miss entirely. Subsequently, in a particularly humbling discovery, it revealed that Claude’s own safety systems were simultaneously refusing harmless requests (“kill a process” on a computer) while passing over some genuinely concerning ones that could have placed people at risk in the real world. </p>
<p>Anthropic’s original frame: “How do we make our safety filters more accurate?” The reframe: “We’re measuring safety at the wrong unit of analysis entirely.” The insight and reframing didn’t just improve the product. It changed the company’s understanding of what the problem was.</p>
<h3>Three Steps to Get Started</h3>
<p>As the examples suggest, the reframing chain works like this: Better behavioral data leads to better problem reframing; better reframing leads to more novel solutions; and more novel solutions lead to more differentiated products, services, and businesses. And that is the only thing that matters when AI has turned raw ideation into something anyone can do in their pajamas. Here are three ways to start the cycle at your organization.</p>
<p><strong>1. Surface the gap between what people say and what they do.</strong> Point your AI tools at customer support logs, forum posts, social media mentions, and review data. Look specifically for workarounds — hacks, improvised fixes, ways people use your product that you never intended and would likely even find mildly insulting. Developers spending 70% of their time reading other people’s code is a workaround. Parents using CeraVe on their babies is a workaround. Language learners acing every quiz but refusing to order coffee in the language they’ve been studying for three years is a workaround. Every workaround is a reframe waiting to happen.</p>
<p><strong>2. Audit your problem frames before you generate solutions.</strong> Get your team in a room and write down the problem you’re currently solving — the one driving your road map, your next sprint, your big second-quarter initiative. Then ask, “When was the last time we tested whether this is actually the right problem? What might a competitor see that we haven’t been able to? What if the opposite of our core assumption is true?” If the problem frame hasn’t been challenged in the past 12 months, you’re not innovating; you’re redecorating.</p>
<p></p>
<p><strong>3. Use AI to reframe, not just to ideate.</strong> Most people prompt AI with “Give me 10 ideas for X.” That’s fine if you want 10 mediocre ideas delivered with confidence. Instead, feed your AI the behavioral data, the workarounds, and the surprising signals and ask it to generate alternative framings of the problem itself. What if the problem isn’t retention but overconfidence? What if the problem isn’t product quality but category trust? What if the problem isn’t the meeting but the aftermath? </p>
<p>Remember: The AI won’t reframe the problem for you. But if you give it the right inputs, it’ll help <em>you</em> generate framings you wouldn’t have reached alone.</p>
<p></p>
<p>Ideas used to be the scarce resource. Now the scarce resource — the thing that actually drives differentiation — is the insight that reframes the problem. Working this way requires a proactive shift from solving the obvious thing to solving the <em>right</em> thing. AI, for all its generative power, turns out to be most valuable not when it produces answers but when it helps you see a problem you didn’t know you had. </p>
<p>The companies that figure this out won’t just build better products. They’ll build products that nobody else thought to build. </p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/the-innovation-advantage-genai-cant-give-you/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Audit Yourself to Get More From GenAI</title>
				<link>https://sloanreview.mit.edu/article/audit-yourself-to-get-more-from-genai/</link>
				<comments>https://sloanreview.mit.edu/article/audit-yourself-to-get-more-from-genai/#respond</comments>
				<pubDate>Thu, 30 Apr 2026 11:00:06 +0000</pubDate>
				<dc:creator><![CDATA[Vipin Gupta. <p><a href="https://www.linkedin.com/in/vipingupta1/" target="_blank">Vipin Gupta</a> advises Fortune 500 companies, coaches senior executives, and serves on both corporate and nonprofit boards. He previously served as chief innovation and digital officer at Toyota Financial Services International, executive vice president and CIO at KeyBank, and partner at EY/Capgemini.</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Skills & Learning]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Getty Images More than a year into using generative AI daily, I wondered whether I was getting the most out of my AI use. There was no benchmark or feedback loop, and no one was grading my sessions with ChatGPT and Claude — until I created a self-audit. I did what [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Gupta-1290x860-1.jpg" alt="" class="wp-image-126888"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Getty Images</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">More than a year into using</span> generative AI daily, I wondered whether I was getting the most out of my AI use. There was no benchmark or feedback loop, and no one was grading my sessions with ChatGPT and Claude — until I created a self-audit.</p>
<p>I did what I’ve always done when faced with a process that lacked measurement. I studied every method I could find — prompting guides, conversations with colleagues, my own session patterns. I used AI to help me use AI better. Over time, I built a single self-audit prompt — one that encapsulates more than 30 habits for getting the most from AI.</p>
<p>Each time I ran the self-audit prompt, the output got sharper. The discipline became reflexive for me. That’s the real value of the self-audit: It made me better at using AI, in every session.</p>
<p>Now, at the end of any significant AI session, I simply prompt: “Review this session and assess it against my AI habits guide. Score how I did, identify what I missed, and guide me to apply missed habits.” Within a few minutes, I get a diagnostic that is uncomfortably specific about what I missed. I now have an answer to a key question: whether my <em>process</em> was good, not just the GenAI output. </p>
<p>A recent field experiment confirmed what I found through my experience. A research team that included MIT Sloan professor Jackson Lu randomly assigned 250 employees at a technology consulting firm in China to either use ChatGPT to assist with their work or to work without it.<a id="reflink1" class="reflink" href="#ref1">1</a> The employees with ChatGPT access were judged as significantly more creative by both their supervisors and outside evaluators. But the gains showed up exclusively among employees with strong metacognitive strategies — those who reflected on their own thinking, recognized knowledge gaps, and refined their approach when results were weak. That finding underscores that metacognition — thinking about your thinking — is the missing link between simply using AI and using it well.</p>
<p></p>
<p>AI widens the gap between disciplined and undisciplined professionals. People who skip the discipline generate more volume without more insight — a pattern consistent with what researchers at the University of California, Berkeley’s Haas School of Business called “unsustainable intensity” in findings published in early 2026.<a id="reflink2" class="reflink" href="#ref2">2</a></p>
<p>Knowing how to use AI is good — but to get the most value from the tool, you need to know whether you’re using it well. The self-audit gives you that.</p>
<h3>A Self-Audit That Measures Five Key Goals</h3>
<p>My self-audit prompt is organized across five goals: set up, refine, verify, own, and systematize. These goals represent a practice that experienced professionals have instinctively followed for years, long before generative AI’s arrival. You don’t need technical training to score well on this audit. You need to replicate the thinking and brainstorming process that you are likely already good at when conducting competitive research, responding to requests for proposals (RFPs), engaging in acquisition analysis, and planning a sales presentation, for example. It is your skill in the application of AI, not the AI itself, that makes the difference.</p>
<p>The self-audit assesses each generative AI session with five questions linked to each of the goals: </p>
<ul>
<li>Set up: Did you prepare the AI before asking it to work? </li>
<li>Refine: Did you iterate on your own thinking, or just reprompt? </li>
<li>Verify: Did you verify before trusting? </li>
<li>Own: Did you make the output yours, or accept the default? </li>
<li>Systematize: Did you build something reusable, or close the chat and start over?</li>
</ul>
<p>You won’t score well on all five goals in every session — nor should you. But knowing which ones you missed, and why, enables you to change your next session. Think of it as AI holding a mirror to your own ability. The audit gets sharper every time you make it your own.</p>
<p>To illustrate what strong performance looks like at each goal, and what the self-audit is measuring, I applied the audit to an actual competitive due diligence analysis on a $5 billion global services company. Details have been modified for confidentiality, but the habits, prompts, and results are drawn from actual chat sessions. I’ll focus on the impact one goal at a time.</p>
<h4>1. Set Up: Pass the Intern Test</h4>
<p><strong>What the self-audit measures:</strong> Did you prepare the AI with sufficient role, context, constraints, and materials before asking it to work — or did you jump straight to a question?</p>
<p>The most consequential decision in any AI interaction happens before the first prompt. It’s the decision to prepare.</p>
<p>I tell the AI who it should be, what it has to work with, and what I need it to produce. “You are an elite research analyst specializing in competitive intelligence. Here are the target company’s last two annual reports and its most recent earnings-call transcript. Assess this company’s ability to disrupt our core business within 18 months and recommend our strategic response.” That prompt will produce far better output than “Tell me about this competitor.”</p>
<p>I call this the “intern test.” If you handed your prompt to a brand-new hire with no context about your company, your industry, or your priorities, would they know what to do? If not, why would you expect your AI to?</p>
<p>Most readers will likely pass this test. Any GenAI prompting guide or video covers the basics of setup.</p>
<p>What gets overlooked is telling the AI what it should <em>not</em> do — the negative constraint. I specify what I do not want: “Do not give me a generic SWOT. Do not hedge every statement. Do not define terms I already know.” And upload your materials. The more context you provide, the more accurate the output. It’s like telling a new team member “Figure out our competitive position” versus handing them your last three strategy decks and customer feedback.</p>
<p>Two additional practices make setup more effective. Before a significant AI chat, I run a preflight check: “What does a great outcome look like? What are the three most important things to get right?” After the first good draft, I generate a bridge summary so context carries forward, especially when I’ll be taking a long break between prompts or need to transition to a new chat. You might not have considered using this tactic before. A bridge summary is especially valuable if you tend to have long, multipart exchanges over days or even weeks. (In one case, Claude suggested doing so at time intervals to avoid having the conversation get too complicated.)</p>
<p>In the due diligence scenario, the difference in outputs before and after the self-audit was stark. While my first prompt was solid, the negative constraints and a preflight check were missing. The variable was me. What made the biggest difference? The negative constraint. Once I told the AI what not to do — no generic SWOT, no hedging, no defining terms I already know — the output became richer in insight and started reading like a briefing, not a book report.</p>
<h4>2. Refine: Pass the Rethink Test</h4>
<p><strong>What the self-audit measures:</strong> Did you truly iterate on your own instructions and thinking, or did you simply reprompt for a better answer?</p>
<p>The first output from any AI session is a draft, not a deliverable. The real value comes from iteration. But the most productive iteration improves your own instructions, not the AI’s answer.</p>
<p>That’s metacognition in action. The person who pauses to ask, “What did I fail to specify? What assumption did the AI make that I should have preempted?” is exercising exactly the reflective discipline that separates high performers from the rest. AI rewards those who rethink their own instructions — not those who rephrase the same request.</p>
<p></p>
<p>I started catching my own patterns. Sometimes the output sounded right, but I couldn’t explain <em>why</em> — so I’d ask the AI to walk me through its reasoning, and the gaps would surface. Other times, I’d catch myself reprompting the same request with slightly different words and realize that the real problem was that I hadn’t broken the task down. The hardest one to admit: When I still couldn’t get what I wanted, it was usually because I couldn’t describe the desired goal clearly enough. Pasting in an example of output that showed what I was after worked better than trying to describe it.</p>
<p>One of the most powerful refining habits is embarrassingly simple: Ask the AI what you should be asking. “What question should I be asking that I am not currently asking?” That one prompt has produced more valuable insights than any other, in my experience.</p>
<p>When I applied these habits to the due diligence, they surfaced a critical insight I’d overlooked: The competitor’s employee sentiment data contradicted its public narrative of a thriving digital transformation. That disconnect between external messaging and internal reality changed my entire threat assessment. I never would have discovered that if I hadn’t challenged my own assumptions.</p>
<p></p>
<h4>3. Verify: Pass the Trust Test</h4>
<p><strong>What the self-audit measures:</strong> Did you independently verify the AI’s claims, check its sources, and stress-test its confidence — or did you trust fluent output at face value?</p>
<p>AI output typically reads well — which can be a problem. It’s linguistically fluent and structurally polished, even when the underlying claims are fabricated, outdated, or mathematically wrong. This is a new kind of quality risk, and it misleads experienced professionals more often than they’d like to admit.</p>
<p>I once asked AI to summarize the regulatory history of the credit card industry, which I know well. The response was beautifully written, logically structured, and completely wrong on two key regulatory revisions. It read like an A-minus term paper from a student who’d skipped the reading. I almost didn’t catch it — because it sounded right. That’s what worried me. I knew the domain well, and I still nearly walked into a committee meeting with hallucinated data.</p>
<p>Since then, I’ve built verification into my routine. I ask the AI to surface and rank every assumption behind its answer. I request verifiable sources and note when the model can’t provide them. For anything involving numbers, I ask for step-by-step calculations. I’ve found two habits particularly effective: the temporal awareness check (“What is the date of the most recent information you’re drawing on?”) and the confidence stress test (“Rate your confidence in each factual claim as high, medium, or low”).</p>
<p>It’s the same discipline we’ve always followed: Verify before you trust; trust before you share.</p>
<p>During the due diligence, the AI flagged that its revenue figures were nine months old and rated its confidence in the regulatory settlement details as medium. When I verified the output independently, I discovered a $42 million enforcement action that the AI had understated. That single verification changed the risk profile of the entire analysis.</p>
<h4>4. Own: Pass the Signature Test</h4>
<p><strong>What the self-audit measures:</strong> Did you actively impose your voice, your position, and your audience on the output — or did you accept AI’s default?</p>
<p>The real work starts here. I used to stop too early. Most of us do.</p>
<p>AI models default to hedged, tonally generic output. Left unguided, they produce content that is competent but indistinct — as if written by a smart person who has an opinion about everything yet commits to nothing. That’s fine for a rough research summary, but it doesn’t reflect your voice or your style, and it’s not something you’d want to put your name on.</p>
<p>The first complete draft was exactly that: well organized, factually grounded, and thoroughly researched. But it was hedged throughout and read like a report designed to avoid being wrong rather than to help someone make a decision. When I forced the AI to take a clear position on the competitive threat, pushed it for unconventional strategic responses, and asked it to apply champion-challenger lenses, the analysis became richer and something I would stake my reputation on.</p>
<p></p>
<p>One technique I use at this stage is running a draft by a <a href="https://sloanreview.mit.edu/article/how-i-built-a-personal-board-of-directors-with-genai/">virtual personal board of directors</a> that I built. These distinct personas help push my thinking and the AI’s analysis away from the default path toward the edges. I built AI-powered personas modeled on real personalities: v_SunTzu for power dynamics, v_Indra (Nooyi) for the human dimension, v_Mark (Cuban) for commercial realism, and v_Meg (Whitman) for operational rigor. What survives that gauntlet of virtual advisers is sharper and more defensible.</p>
<p>The habit most people underuse is calibrating AI to their own personality: how they think, how they argue, and what they won’t tolerate in a deliverable. Take ownership of the thinking, not just the editing. That’s when the output starts sounding like you.</p>
<h4>5. Systematize: Pass the Reuse Test</h4>
<p><strong>What the self-audit measures:</strong> Did you build systems that make your next session better — or did you close the chat and leave yourself having to start from scratch next time?</p>
<p>Nearly everyone treats each AI session as a stand-alone thread — which may be productive in isolation, but the value doesn’t compound. Here, the discipline shifts from improving sessions to building systems.</p>
<p>Building repeatable processes out of one-off successes is what I do. Yet, early on in my GenAI use, I spent two hours building a detailed competitive analysis that delivered exceptional output — and then I closed the chat. I’d produced a great deliverable but captured none of the thinking that made it great. I should have known better. When I needed to run a similar analysis a month later, I had to start from scratch — the same role definition, the same constraints, the same verification steps, all rebuilt from memory. </p>
<p>Three habits make the difference. These are not habits you apply at the end of the conversation but throughout — after every prompt, at every logical checkpoint, or after a break.</p>
<p>First, maintain continuity. During any significant working session, I ask the AI to maintain a running summary of what we’ve accomplished, what’s still open, and what I will need to copy and paste to resume the conversation in another chat. This produces a bridge summary that makes it easy to pick up the discussion in a new session without losing continuity, especially if you hit the context limit in one chat.</p>
<p>Second, be a coeditor. Review the AI’s output after every prompt, or at logical break points, and feed your own judgment back in. You read what the AI produced. Some of it is good; some of it is wrong. Some of it is vague in ways you didn’t notice until you tried to use it. You fix it, mark it up, and hand it back: “Here’s my revised version. Use this as our new baseline and continue from here.”</p>
<p>Third, "templatize" what works. Every time you craft a session that produces exceptional output — a due diligence workflow, an RFP evaluation, a customer analysis — convert it into a reusable template. Replace the specifics with [variable] placeholders and save the session as what I call a <em>macro-prompt</em> — a single structured prompt that combines the entire session’s workflow so anyone can run it without having to start from scratch. Individual expertise becomes organizational capability.</p>
<p>That single due diligence session became a reusable macro-prompt I’ve now used for partnership evaluations, board position assessments, and acquisition analyses — each time just pasting it in the chat to start the conversation. From there, AI guides me step-by-step — instead of me guiding the AI — with all of the thinking intensity captured from the original session. After every use, I run a prompt to improve this macro-prompt for the next session.</p>
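<p>As a minimal sketch of the templatizing habit (not drawn from the article), the Python snippet below shows one way a saved macro-prompt with [variable] placeholders could be stored in a plain-text file and filled in before being pasted into a new chat. The file name, placeholder names, and fill values are hypothetical examples.</p>
<pre>
# Minimal sketch (assumed workflow, not from the article): fill a saved
# macro-prompt that uses [variable] placeholders before pasting it into a chat.
# The file name, placeholder names, and values below are hypothetical examples.
import re
from pathlib import Path

def fill_macro_prompt(template: str, values: dict[str, str]) -> str:
    """Replace each [placeholder] with its value and flag anything left unfilled."""
    filled = template
    for name, value in values.items():
        filled = filled.replace(f"[{name}]", value)
    leftover = re.findall(r"\[([A-Za-z_ ]+)\]", filled)
    if leftover:
        raise ValueError(f"Unfilled placeholders: {leftover}")
    return filled

# Hypothetical usage: load the saved due diligence macro-prompt and fill it in.
template = Path("due_diligence_macro_prompt.txt").read_text()
prompt = fill_macro_prompt(
    template,
    {
        "target company": "Example Global Services Inc.",
        "deliverable": "an 18-month competitive threat assessment",
        "materials": "the last two annual reports and latest earnings-call transcript",
    },
)
print(prompt)  # Copy the result into a new chat to start the session.
</pre>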
<h3>How to Start Auditing and Improving</h3>
<p>Below, I’ve shared the self-audit macro-prompt, which includes all 30 habits for auditing yourself. Think of it as a companion resource. You can just copy and paste it into an existing conversation you’ve been having with AI on a significant, extended topic. See what it tells you about your use of AI across all five goals and 30 habits. The self-audit will show you exactly where to refocus. </p>
<p>Then, start practicing these habits in your GenAI conversations wherever you see the opportunity.</p>
<p>Generative AI technology has already proved its capabilities and will keep getting better. The discipline is what unlocks real value — and that discipline will always be needed, regardless of which AI tool you use. </p>
<p>There’s one last thing I didn’t expect when I started this journey: The better I got at working with AI, the better I got at thinking without it. </p>
<p>Run the self-audit. See what it tells you about your critical thinking.</p>
<div class="callout-highlight callout-highlight--transparent">
<aside class="l-content-wrap">
<article>
<h4>Self-Audit Prompt</h4>
<p>Copy and paste the prompt below during or after any significant AI working session. The AI will autonomously review your entire conversation, evaluate it against 30 habits spanning five goals, and deliver a structured diagnostic with scores, specific gaps it identified, and the exact prompts you should have used.</p>
<p><strong>During or after a session:</strong> Paste it at any point in a conversation — midsession to course-correct, or at the end, to score what you did against the five goals.</p>
<p><strong>Retroactively:</strong> Paste it into any past conversation you’ve had with an AI to learn from your history.</p>
<p>This macro-prompt includes micro-prompts or checks for every habit so the AI will know exactly what to look for and will be able to show you precisely what you should have said.</p>
<div class="callout-toggle">
<figure class="copy-prompt" role="region" aria-labelledby="prompt-label-1"><figcaption id="prompt-label-1">SELF-AUDIT MACRO-PROMPT — COPY AND PASTE BELOW</figcaption><pre aria-label="Prompt text, use the copy button below to copy it">
SELF-AUDIT OF AI SESSION
 
Review the entire conversation we just had. Evaluate how effectively I used AI in this session by assessing my performance against the 30 habits below.
 
For each goal, check whether I applied the habits listed. For each habit I missed, show me the EXACT PROMPT I should have used — written specifically for the content of this session, not as a generic template.
 
Work through the five goals in order. After all five, deliver the scorecard.

=================================  
GOAL 1: SET UP — Did I prepare the AI before asking it to work?
=================================
 
Habit 1 — The preflight
Did I define what a great outcome looks like before starting?
Micro-prompt: “Before we begin, help me define: What does a great outcome look like for this task? What are the three most important things to get right? What mistakes do people typically make?”
 
Habit 2 — The mission
Did I assign a clear role, context, and mission?
Micro-prompt: “You are [specific expert role with years of experience in relevant domain]. Here is what I need: [specific deliverable]. Here is the context: [situation, constraints, timeline]. Your mission: [clear objective].”
 
Habit 3 — The negative constraint
Did I state what I did NOT want?
Micro-prompt: “Do not [produce generic output]. Do not [hedge every statement]. Do not [define terms I already know]. Do not [give balanced, ‘on the other hand’ analysis].”
 
Habit 4 — The context upload
Did I provide relevant documents, data, or prior work?
Micro-prompt: “Here are the attachments: [list files]. Use these as the primary basis for your analysis. Flag where you are drawing on general knowledge versus the documents I provided.”
 
Habit 5 — The session bridge
Did I provide or request a bridge summary for continuity?
Micro-prompt: “This is a continuation of our previous work on [topic]. Here is where we left off: [paste summary]. Confirm your understanding, flag anything unclear, and suggest where to pick up.”
 
================================= 
GOAL 2: REFINE — Did I iterate on my own thinking, not just reprompt?
=================================

Habit 6 — The iteration
Did I challenge assumptions and explore alternative scenarios?
Micro-prompt: “Your analysis assumes [X]. Surface that assumption. What changes if [alternative scenario A]? What changes if [alternative scenario B]?”
 
Habit 7 — The reasoning request
Did I ask the AI to show its reasoning step-by-step?
Micro-prompt: “Think step-by-step through your reasoning for [conclusion]. Show me the logic chain before restating your conclusion. I want to see how you got there, not just where you landed.”
 
Habit 8 — The prompt self-critique
Did I ask the AI to critique or improve my prompt?
Micro-prompt: “How would you improve my original prompt? Rate it 1-10 for clarity, specificity, and completeness. Show me what a 10 would look like.”
 
Habit 9 — The strategic question
Did I ask what question I should be asking but haven’t?
Micro-prompt: “Step back. What question should I be asking about [topic] that I haven’t asked? What blind spots does my framing have?”
 
Habit 10 — The decomposition
Did I break complex tasks into sequential subtasks?
Micro-prompt: “Before writing the full [deliverable], (1) list the top three [dimensions], (2) rank them by [criteria], and (3) draft only the highest-priority one with supporting evidence.”
 
Habit 11 — The expert thinking
Did I request an expert or alternative perspective?
Micro-prompt: “How would a [specific expert role] evaluate this? What would they focus on that [my current perspective] might miss?”
 
Habit 12 — The few-shot example
Did I provide concrete examples of desired output?
Micro-prompt: “Here is an example of the depth and structure I want: [paste excerpt]. Match this level of specificity and directness.”
 
Habit 13 — The diagnosis
Did I diagnose and fix vague or generic responses?
Micro-prompt: “Your [section] feels generic. Identify the assumptions you made and the context that was missing. Then revise with more specificity about [specific aspect].”
 

================================= 
GOAL 3: VERIFY — Did I verify before trusting?
=================================
 
Habit 14 — The assumption surface
Did I ask the AI to surface and rank its assumptions?
Micro-prompt: “List every assumption underlying your [analysis/recommendation]. Which ones are weakest? Which would change your conclusion entirely if wrong?”
 
Habit 15 — The source demand
Did I demand verifiable sources?
Micro-prompt: “Provide sources I can independently verify for [specific claims]. If you cannot provide a verifiable source, say so explicitly.”
 
Habit 16 — The counterargument
Did I request the strongest opposing case?
Micro-prompt: “Make the strongest possible case that [opposite of your conclusion]. What evidence supports that view?”
 
Habit 17 — The math audit
Did I ask for step-by-step math on calculations?
Micro-prompt: “Recalculate [specific figures]. Show your math step-by-step.”
 
Habit 18 — The confidence stress test
Did I request confidence ratings on factual claims?
Micro-prompt: “For each factual claim in this [output], rate your confidence as high, medium, or low. Flag anything below high and explain why.”
 
Habit 19 — The freshness check
Did I check the recency of the data?
Micro-prompt: “What is the date of the most recent information you drew on? Flag anything that may be outdated.”
 
Habit 20 — The hallucination stress test
Did I stress-test which claims are most likely wrong?
Micro-prompt: “Which specific factual claims in this [output] are you least certain about? If I fact-checked every statement, which ones are most likely to be wrong?”

=================================  
GOAL 4: OWN — Did I make this mine, or accept the AI’s default?
=================================
 
Habit 21 — The position forcer
Did I force a clear position rather than accepting hedged output?
Micro-prompt: “Do not hedge. Take a clear position: [specific question]. Defend your position, then address the strongest counterargument.”
 
Habit 22 — The originality push
Did I push for unconventional or nonobvious angles?
Micro-prompt: “Generate three unconventional [responses/strategies/angles] that most [consultants/analysts/writers] would not recommend. Label one as high risk, high reward.”
 
Habit 23 — The specificity demand
Did I require specific data points instead of abstract claims?
Micro-prompt: “Support every claim with a specific data point from the documents I provided or a verifiable source. Remove anything abstract.”
 
Habit 24 — The narrative shaper
Did I shape output into narrative rather than accepting lists?
Micro-prompt: “Rewrite this as a strategic narrative: What is the one thing [audience] needs to understand, why does it matter, and what is the decision we need to make now? No lists. End with a clear recommendation.”
 
Habit 25 — The audience calibration
Did I calibrate output for a specific audience?
Micro-prompt: “Rewrite this for [specific audience]. Assume they are [smart but not immersed in details]. Lead with [what matters to them].”
 
Habit 26 — The multi-persona workflow
Did I use multiple perspectives to challenge the output?
Micro-prompt: “Now review this from three perspectives: (1) [strategist role]: What are we failing to anticipate? (2) [empathetic leader role]: What human factors are missing? (3) [editor role]: Tighten and cut.”

=================================  
GOAL 5: SYSTEMATIZE — Did I build systems, not just outputs?
=================================
 
Habit 27 — The coeditor
Did I feed my own edits back in as a coeditor THROUGHOUT the session?
Check: Did this happen at multiple points during the conversation — not just once at the end? Count how many times I revised and handed back my own version. More is better. Flag any stretch of three or more prompts where I accepted output without coediting.
Micro-prompt: “Here is my revised version with my edits. Use this as our new baseline. Incorporate my changes, flag anything you disagree with, and continue from here.”
 
Habit 28 — The session debrief
Did I request bridge summaries THROUGHOUT the session?
Check: Did this happen at logical break points, before long breaks, or when approaching token limits — not just at the end? Count how many bridge summaries were requested. Flag any point where continuity was lost because a bridge summary was missing.
Micro-prompt: “Summarize what we accomplished, what’s still open, and what I should bring to our next session to pick up where we left off.”
 
Habit 29 — The self-audit
Did I run self-audit checkpoints THROUGHOUT the session?
Check: Did I pause at logical milestones to assess session quality before moving on — or did I audit only at the very end? Flag any major transition between goals or phases where a midsession audit would have caught a gap earlier.
(You’re running the final self-audit now.)
 
Habit 30 — The macro maker
Did I convert the session into a reusable macro-prompt?
Micro-prompt: “Convert this session into a reusable macro-prompt with [variable] placeholders. Format it so anyone can copy, paste, and follow the steps to produce [deliverable type].”
 
=================================  
SCORECARD — Deliver this after evaluating all five goals
=================================
 
For each goal (1-5), provide:
- Score (1-5, where 5 = all habits demonstrated, 1 = none)
- Habits demonstrated well (with specific examples from our conversation)
- Habits missed (with the EXACT prompt I should have used, written for the specific content of THIS session)
- How each missed prompt would have improved the output
 
Then provide:
- Overall session score (average of five goals)
- The single highest-impact habit I missed
- Top three habits to focus on in my next session
 
Be specific and direct. Reference actual moments in our conversation.
Do not soften the assessment.


================================= 
SESSION CLOSE
=================================

After delivering the scorecard, ask me: “Would you like me to (1) go back and apply the missed habits now to improve the work we just did, (2) generate a bridge summary for your next session, or (3) suggest improvements to this self-audit macro-prompt based on what we learned in this session?”


</pre>
</figure>
</div>
</article>
</aside>
</div>
<p></p>
<p>Apply these tips to get the most from the self-audit:</p>
<ul>
<li>Run it at the end of every significant AI session, not just occasionally. The habit of measuring is itself the discipline.</li>
<li>Don’t stop at the scorecard. When the AI asks, “Would you like me to go back and apply the missed habits?” say yes. Then run the self-audit again. Repeat until you’re satisfied you’ve extracted the most value from the session.</li>
<li>Track your scores over time. You’ll notice patterns — goals you consistently score well on and goals you consistently skip. Those patterns are your development road map.</li>
<li>Improve the prompt itself. When the AI suggests improvements to this macro-prompt based on your session, review them and update your saved copy. The self-audit gets sharper each time you use it.</li>
<li>Make it yours. Add habits that matter to your work, remove ones that don’t, or build in your own techniques. The 30 habits here are a starting point, not a ceiling.</li>
<li>Share it with your team. When everyone runs the same self-audit, you build a shared language for AI session quality across the organization.</li>
</ul>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/audit-yourself-to-get-more-from-genai/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Leaders at All Levels: How Argenx Scaled to $4 Billion Without Bureaucracy</title>
				<link>https://sloanreview.mit.edu/video/leaders-at-all-levels-how-argenx-scaled-to-4-billion-without-bureaucracy/</link>
				<comments>https://sloanreview.mit.edu/video/leaders-at-all-levels-how-argenx-scaled-to-4-billion-without-bureaucracy/#respond</comments>
				<pubDate>Wed, 29 Apr 2026 11:00:36 +0000</pubDate>
				<dc:creator><![CDATA[MIT Sloan Management Review. ]]></dc:creator>

						<category><![CDATA[Corporate Culture]]></category>
		<category><![CDATA[Leadership Vision]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[Webinars & Videos]]></category>
		<category><![CDATA[Culture]]></category>
		<category><![CDATA[Innovation Strategy]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Organizational Structure]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>

				<description><![CDATA[Biotech companies face the same dilemma as businesses in other industries: Innovation drops off dramatically with scale. European biotech Argenx has reached a market value of more than $40 billion, having so far escaped that innovation trap. How has it done this? The company shuns hierarchy and instead organizes into small teams, each dedicated to [&#8230;]]]></description>
								<content:encoded><![CDATA[<p>Biotech companies face the same dilemma as businesses in other industries: Innovation drops off dramatically with scale. European biotech Argenx has reached a market value of more than $40 billion, having so far escaped that innovation trap. How has it done this? The company shuns hierarchy and instead organizes into small teams, each dedicated to fighting a single disease with a laser focus on bringing value to the patient.</p>
<p>“Humans can have incredible impact when you allow them to,” said Argenx’s incoming CEO, Karen Massey.</p>
<p>In this episode of <cite>Leaders at All Levels</cite>, she explains how the company manages to retain the nimbleness of a startup.</p>
<h3>The Argenx Playbook: Borrow These Ideas</h3>
<ul>
<li>Use no budgets — only plans — to keep teams focused on the right priorities.</li>
<li>Stop counting layers of management and try Argenx’s alternative.</li>
<li>Fight the urge to give quick answers and maintain curiosity as a leader.</li>
</ul>
<p>Listen as hosts Kate W. Isaacs and Michele Zanini dig into the details of how Argenx uses distributed leadership to maintain its innovative edge and uncover insights that you can apply in your own organization.</p>
<h4>Video Credits</h4>
<p><strong>Karen Massey</strong> is the incoming CEO of Argenx.</p>
<p><strong>Kate W. Isaacs</strong> is a senior lecturer at the MIT Sloan School of Management.</p>
<p><strong>Michele Zanini</strong> is coauthor of the <cite>Wall Street Journal</cite> bestseller <cite>Humanocracy</cite> (Harvard Business Review Press, 2020).</p>
<p><strong>M. Shawn Read</strong> is the multimedia editor at <cite>MIT Sloan Management Review</cite>.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/video/leaders-at-all-levels-how-argenx-scaled-to-4-billion-without-bureaucracy/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>What Global Turmoil Means for Company Structure</title>
				<link>https://sloanreview.mit.edu/article/what-global-turmoil-means-for-company-structure/</link>
				<comments>https://sloanreview.mit.edu/article/what-global-turmoil-means-for-company-structure/#respond</comments>
				<pubDate>Tue, 28 Apr 2026 11:00:42 +0000</pubDate>
				<dc:creator><![CDATA[Caterina Moschieri, Davide Ravasi, and Quy Huy. <p>Caterina Moschieri is an associate professor in the Strategy Department of IE Business School in Madrid. Davide Ravasi is a professor of strategy and entrepreneurship and director of the UCL School of Management at University College London. Quy Huy is a professor of strategic management at Insead.</p>
]]></dc:creator>

						<category><![CDATA[Foreign Markets]]></category>
		<category><![CDATA[Global Economy & Trade]]></category>
		<category><![CDATA[Global Operations]]></category>
		<category><![CDATA[Globalization]]></category>
		<category><![CDATA[Multinational Companies]]></category>
		<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[Politics]]></category>
		<category><![CDATA[Financial Management & Risk]]></category>
		<category><![CDATA[Global Strategy]]></category>
		<category><![CDATA[Strategy]]></category>
		<category><![CDATA[Supply Chains & Logistics]]></category>
		<category><![CDATA[Frontiers]]></category>

				<description><![CDATA[Chris Gash/theispot.com The international order is undergoing structural transformation. War in the Middle East, the prolonged conflict in Ukraine, and major shifts in U.S. trade and foreign policy that have altered the country’s traditional alliances are manifestations of a broader reconfiguration of power. Tariffs, export controls, sanctions, and the vulnerability of strategic choke points as [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/2026SUMMER_Moschieri-1290x860-1.jpg" alt="" class="wp-image-126796"/><figcaption>
<p class="attribution">Chris Gash/theispot.com</p>
</figcaption></figure>
<p></p>
<p></p>
<p><span class="smr-leadin">The international order is undergoing</span> structural transformation. War in the Middle East, the prolonged conflict in Ukraine, and major shifts in U.S. trade and foreign policy that have altered the country’s traditional alliances are manifestations of a broader reconfiguration of power.</p>
<p>Tariffs, export controls, sanctions, and the vulnerability of strategic choke points as diverse as maritime straits and semiconductor ecosystems are exposing the fragility of globally optimized supply chains and production networks.</p>
<p>The previously invisible contest over information flows is transforming as moves by state actors to establish digital sovereignty lead to significant technological consequences for multinational corporations. The European Union’s General Data Protection Regulation and Data Act, for instance, required social media and technology companies to redesign cloud infrastructure, reorganize compliance teams and legal entities, and relocate data storage and processing to ensure that European user data remains under EU jurisdiction.</p>
<p>Meanwhile, consider the recent controversy in Chile, which, under U.S. pressure, rescinded approval of an <a href="https://www.france24.com/en/live-news/20260312-the-chinese-cable-that-could-trip-up-chile-s-new-leader" target="_blank">undersea cable</a> that would link Santiago to Hong Kong. That situation illustrates how digital infrastructure projects in middle-power countries have become geopolitical flash points.</p>
<p>In fact, the very concept of neutrality has become fragile. The war in the Middle East shows that hitherto politically neutral countries are not immune to attack. Big Tech companies such as Amazon, Google, and Microsoft have invested hundreds of billions of dollars to develop gigantic data centers in the United Arab Emirates, Qatar, and Oman only to see them <a href="https://www.bloomberg.com/news/articles/2026-03-03/drone-strikes-damage-amazon-data-centers-in-the-uae-and-bahrain" target="_blank">damaged by drones and missiles</a>.</p>
<p></p>
<h3>Why Traditional Options Now Look Different</h3>
<p>How are businesses operating across borders adapting to this evolving reality — or how should they be? Our research into waves of globalization and deglobalization since the beginning of the 20th century has found that the main options traditionally available to multinational corporations facing geopolitical turmoil — exit, relocate, or reorganize — are manifesting somewhat differently now than in previous crises.</p>
<p><strong>Exit: Is it advisable?</strong> In the past, companies operating in a country where their policy risk was increasing were likely to reassess and reduce their exposure or even <a href="https://doi.org/10.1002/smj.2509" target="_blank">exit the country</a> altogether. Such decisions are never easy, since they often mean relinquishing valuable assets and abandoning lucrative opportunities. As the exit of <a href="https://www.washingtonpost.com/business/2022/05/03/bp-profit-russia/" target="_blank">BP</a>, <a href="https://www.reuters.com/business/energy/shell-exit-russia-operations-after-ukraine-invasion-2022-02-28/" target="_blank">Shell</a>, and <a href="https://finance.yahoo.com/news/exclusive-norways-equinor-exited-russia-051015171.html" target="_blank">Equinor</a> from Russia soon after its invasion of Ukraine shows, divestitures can entail significant financial write-downs, legal complications, contractual disputes, and reputational spillovers. Host governments can use their coercive and regulatory power to make it far more costly for a foreign company to exit the market than it was to enter.</p>
<p>Divestitures can also cause a multinational company to permanently lose access to markets in regions that may remain strategically important over the long term. To avoid such a scenario, it may be wise to maintain a calibrated, minimal presence in those countries. In practice, this could consist of a legal entity and basic operational presence sufficient to preserve relationships, regulatory standing, and market intelligence while limiting commitment. This may be achieved through asset relocation or structural reorganization. For example, automakers, including <a href="https://www.automotivelogistics.media/ev-and-battery/nissan-is-to-cease-wuhan-production-by-march-2026-amid-fierce-competition-and-financial-strain-in-china/197673" target="_blank">Nissan</a> and <a href="https://www.auto123.com/en/news/volkswagen-reduces-investment-plan-2030/73470/" target="_blank">Volkswagen</a>, have reduced R&amp;D investment in China and slowed expansion plans there without fully exiting the market. By maintaining supplier relationships and distribution networks, they can preserve the option to reengage more fully if political or competitive conditions stabilize.</p>
<p>Not surprisingly, then, many multinationals are exploring ways to maintain <a href="https://sloanreview.mit.edu/article/multinationals-need-closer-ties-as-globalization-retreats/">broad international scale</a> and reach despite tectonic shifts in the global order. But what are the alternatives?</p>
<p></p>
<p><strong>Reorganize: Polynational structures and corporate diplomacy.</strong> The ongoing political turmoil is causing many leaders to question the traditional approach to organizing multinational operations, which is based on centralizing strategic direction and technology development and optimizing supply chains and technology flows. Traditional multinationals also tend to prioritize commercial considerations over political considerations, and efficiency over resilience.</p>
<p>As geopolitical tensions increase and disruptive events intensify, multinationals are adopting new structures to build resilience through separation, redundancy, and local embeddedness. In 2024, for instance, HSBC restructured its global operations by splitting its business into Eastern and Western divisions. The bank also joined China’s cross-border interbank payment system, strengthening its Eastern operations while separating their governance from Western operations.</p>
<p>Globally integrated operations are now giving way to <a href="https://view.mail.fortune.com/?vawpToken=3G54SGT7ZYNUFASXNNIUOOPRBE.130019" target="_blank">polynational organizations</a> — networks of semiautonomous units with strong in-country leadership, regional supply chains, and strong ties with local stakeholders. Interestingly, this signals a partial return to the multidomestic organization that some multinationals adopted in the pre-globalization era.</p>
<p>Nestlé and HSBC offer two examples of this approach. Both companies have distributed strategic authority and the monitoring and analysis of political and regulatory issues across regional hubs. They have also embedded operations deeply within local economic and regulatory systems to reduce their exposure to political shocks in specific locations while preserving their presence in multiple geopolitical blocs. Doing so allows Nestlé and HSBC to remain globally coordinated but politically adaptable.</p>
<p>The local anchoring that characterizes polynational organizations can also be pursued by localizing ownership — that is, by directly involving local actors in the ownership and governance of operations. Ceding significant ownership to the host government (as <a href="https://www.cnbc.com/2023/11/20/mcdonalds-increases-minority-stake-in-china-business-.html" target="_blank">McDonald’s did in China</a>) or listing local operations on the national stock exchange (as Hindustan Unilever did in India, and Heineken did in Malaysia) helps create local accountability and signals alignment with local interests. It also helps companies introduce legal and operational separation between a local subsidiary and the global parent.</p>
<p>Localizing ownership can also be an extreme response to widely diverging regulatory regimes and local concerns with data sovereignty. As the case of TikTok in the U.S. shows, redesigning internal governance and technological architectures may be insufficient to address a host government’s concerns about how data will be collected, processed, and used. Radically <a href="https://www.cbsnews.com/news/tiktok-deal-ban-oracle/" target="_blank">restructuring ownership</a> to create a separate legal entity to manage American operations, with majority ownership by non-Chinese investors, was the only way the social media platform could continue operating in the U.S. The Chinese parent, ByteDance, retained a 19.9% stake in TikTok.</p>
<p></p>
<p>Multinationals are also investing in more preemptive measures. Corporate headquarters are developing geopolitical capabilities that enable them to actively and constantly monitor political risk and take strategic action in real time. Such actions include creating or strengthening dedicated government-affairs corporate functions and developing specialized tools, such as BlackRock’s Geopolitical Risk Indicator, Allianz’s Political Stability Grid, and Siemens’ Value at Stake methodology. Such capabilities help multinationals formulate explicit geopolitical strategies, anticipate potential disruptions to supply chains and operations, and orchestrate responses to crises when they occur.</p>
<p>Some multinationals are also engaging in corporate diplomacy, a sign that they are moving from treating geopolitics as an external constraint to engaging proactively as independent actors. In 2025, Apple simultaneously lobbied the U.S. government against instituting tariffs, reassured local officials in China about its presence, and strengthened ties with Indian authorities, effectively using manufacturing investments as diplomatic currency. Also in 2025, <a href="https://blogs.microsoft.com/on-the-issues/2025/04/30/european-digital-commitments/" target="_blank">Microsoft made five major commitments</a> to support Europe’s digital stability, including expanding data center operations in 16 European countries, supporting digital sovereignty, defending its legal right to operate in Europe, protecting data privacy and cybersecurity in the region, and ensuring open access to its European AI and cloud platform and to infrastructure across Europe. The purpose of such efforts is twofold: to shelter companies from the consequences of political tensions between home and host governments, and to unlock opportunities for local investment by conveying a neutral stance.</p>
<p><strong>Relocate: From optimization to compliance.</strong> For decades, gradual regulatory alignment, integration of financial markets and payment systems, and the proliferation of free trade agreements encouraged multinational companies to let cost advantage and economies of scale dictate location choices. Now, fragmentation of regulatory regimes and the return of trade barriers are forcing a renewed emphasis on regulatory compliance and risk mitigation.</p>
<p>This is what happened in <a href="https://doi.org/10.1186/s41469-019-0047-8" target="_blank">post-Brexit Europe</a>. The United Kingdom’s withdrawal from the EU limited the free movement of goods, services, and labor and threatened the European operating licenses of multinationals whose regional headquarters were in the U.K. This forced them to reconsider European residency, relocate subsidiaries, and rebalance regional headquarters. Increased cross-border transaction costs disrupted integrated value chains and constrained labor mobility, prompting companies to restructure reporting lines and shift assets to preserve market access.</p>
<p></p>
<p>To reduce the risk of supply chain disruptions, many companies are increasingly <a href="https://www.mckinsey.com/mgi/our-research/geopolitics-and-the-geometry-of-global-trade-2025-update" target="_blank">adopting strategies</a> such as reshoring (moving production to the home country to avoid tariffs and other barriers), near-shoring (moving production closer to home), and friend-shoring (moving production to friendly nations to increase control over foreign operations and decrease exposure to potentially hostile countries). Several North American manufacturers in electronics and automotive components have near-shored production from China to Mexico to reduce tariff exposure and shorten supply chains. By relocating assembly and intermediate production closer to the U.S. market, these companies are sacrificing access to low-cost producers to avoid tariff wars and logistics problems.</p>
<p>Multinational manufacturers in apparel, consumer electronics, and industrial goods are adopting a middle-power anchoring strategy. This means that they are relocating production to countries that are less strongly aligned with blocs embroiled in trade tensions. Apple is shifting the bulk of iPhone production to India from China. Samsung built solid relations with Vietnam’s political authorities, which enabled the company to influence the development of industrial parks where it now produces the majority of its Galaxy smartphones. Intel has chosen to establish a manufacturing hub in Malaysia, taking advantage of the Southeast Asian country’s geopolitical neutrality and existing semiconductor expertise to establish a production base outside the U.S.-China rivalry.</p>
<p></p>
<p>Such moves allow multinationals to maintain access to low-cost supply networks while reducing their dependence on a single geopolitical bloc. The companies also benefit from early positioning within emerging middle-power corridors of trade.</p>
<p></p>
<p>The current geopolitical landscape reflects a rupture within globalization itself. Countries are trying to weaponize networks they cannot fully dismantle: They engage in techno-nationalism, impose sanctions in trade and finance, create data sovereignty regimes, and compete on the basis of industrial policy. At the same time, this competition and fragmentation are occurring within a context of deep economic interdependence.</p>
<p>To stay on top of this new landscape, companies need to redesign their portfolios, supply chains, data architectures, and governance models. Multipolarity is reshaping the strategic options of exit, relocation, or reorganization. Resilience now depends on adaptability to fast-changing geopolitical restructuring.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/what-global-turmoil-means-for-company-structure/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Why Adventure Matters in Long Working Lives</title>
				<link>https://sloanreview.mit.edu/article/why-adventure-matters-in-long-working-lives/</link>
				<comments>https://sloanreview.mit.edu/article/why-adventure-matters-in-long-working-lives/#respond</comments>
				<pubDate>Mon, 27 Apr 2026 11:00:09 +0000</pubDate>
				<dc:creator><![CDATA[Lynda Gratton. <p><a href="https://www.linkedin.com/in/lynda-gratton-3b179813/" target="_blank">Lynda Gratton</a> is a professor of management practice at London Business School and founder of HSM Advisory. Her most recent book is <cite>Redesigning Work: How to Transform Your Organization and Make Hybrid Work for Everyone</cite> (MIT Press, 2022).</p>
]]></dc:creator>

						<category><![CDATA[Chief Executive Officer]]></category>
		<category><![CDATA[Human Capital]]></category>
		<category><![CDATA[Human Psychology]]></category>
		<category><![CDATA[Leadership Development]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Managing Your Career]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>

				<description><![CDATA[Emma Hanquist/Ikon Images In my ongoing exploration about the patterns and changes in how people approach their working lives, I’ve found myself looking back at my own memories from over five decades of work. What stands out is not simply the steady progression of roles and achievements but the disproportionate impact of recurring moments of [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Gratton-1290x860-1.jpg" alt="" class="wp-image-126804"/><figcaption>
<p class="attribution">Emma Hanquist/Ikon Images</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">In my ongoing exploration</span> about the patterns and changes in <a href="https://lyndagratton.com/thinking" target="_blank" rel="noopener noreferrer">how people approach their working lives</a>, I’ve found myself looking back at my own memories from over five decades of work. What stands out is not simply the steady progression of roles and achievements but the disproportionate impact of recurring moments of adventure that took me far beyond my usual experience. </p>
<p>At the time, these adventures each felt uncertain and sometimes even disruptive. More than that, they sat outside any clear narrative of progression. They did not register as forward movement. If anything, they felt almost indulgent: hitchhiking as a graduate student to Israel to research child-rearing practices in a kibbutz; traveling through Peru and Bolivia in my 30s; later, in my 50s, exploring countries across Africa; and now, in my 70s, journeying to India to better understand its religions.</p>
<p>Looking back, though, I now see these not as diversions from my working life. Instead, they were among the experiences that most shaped it.</p>
<p>My reflections are not unique. In conversations with others about their own long working lives, a consistent pattern emerges. People describe moments of adventure that took them beyond what was familiar. Some stepped away entirely by, for instance, spending time in a different country. Others made smaller but still disorienting shifts, such as moving into unfamiliar roles or entering settings where they were no longer the expert.</p>
<p></p>
<p>Taking these kinds of leaps becomes more important as longevity reshapes our lives. Longer lives bring both opportunity and risk. They offer more time — to learn, to contribute, to explore. But they also demand more than a single way of working, thinking, or being. In short working lives, ossification matters less. But as working lives stretch, the ability to change becomes critical. Without periods of deliberate adventure and exploration, we risk becoming locked into versions of ourselves that no longer fit the future we are moving into.</p>
<p>The challenge is not just endurance; it is reinvention. And reinvention does not happen accidentally.</p>
<h3>Why Adventure Matters</h3>
<p>Imagine that your own working life extends into your 70s. How will you make that sustainable? Many people focus on staying productive: becoming highly skilled and deeply experienced. Others recognize the <a href="https://sloanreview.mit.edu/article/calm-the-underrated-capability-every-leader-needs-now/">importance of cultivating calm</a> and explore the conditions and practices that sustain mental health and well-being.</p>
<p>Both strategies are wise. Yet the very structures that support productivity and the ability to stay calm — clear roles, established identities, well-worn habits — can, over time, make change harder.</p>
<p>When I talk to leaders about how they support their own longer working lives, they often emphasize the need for resilience, agility, and transformation. They rarely talk about adventure. It can sound frivolous: personal rather than organizational, or even risky in a corporate context.</p>
<p></p>
<p>Yet when people recount their own working lives, it is often the adventures that they describe. It becomes clear how profoundly such experiences support a long working life. Here are three reasons why.</p>
<h4>Adventure disrupts accumulated patterns.</h4>
<p>Stepping away entirely — by spending time in a different country or working in contexts where the usual expertise offers little guidance — changes everything. The systems are different, the cues unfamiliar, and the markers of success less clear. In these situations, choices and actions that once felt automatic become visible again.</p>
<p>People who put themselves in these situations describe paying closer attention — observing more closely, questioning more readily, and adapting more deliberately.</p>
<p>What is disrupted is not just routine but the deeper patterns of thinking and acting that have been built over years. In that disruption, something important happens: People begin to see their own habits, assumptions, and default responses from the outside.</p>
<h4>Adventure expands who we can become.</h4>
<p>If continuity anchors identity, then adventure unsettles it. Research on identity points to <a href="https://psycnet.apa.org/doi/10.1037/0003-066X.41.9.954" target="_blank" rel="noopener noreferrer">the idea of “possible selves”</a> — the different ways we might imagine ourselves in the future. Most remain abstract. But experiences that take us beyond the familiar can make these possibilities more tangible.</p>
<p></p>
<p>This shift does not happen through reflection alone. It happens through action. Imagine, for instance, a senior executive stepping away from a well-established role to spend a year working in a small, unfamiliar venture in a different country, where her experience carries little authority. For the first time, she sees another version of herself — not as a leader defined by control but as someone learning, adapting, and uncertain. Or consider a technical specialist who begins teaching and comes to see himself not just as an expert but as an educator — an identity that reshapes his future.</p>
<p>What matters is not just what we do now but who we can become. New experiences expand the range of identities we can inhabit, and that expanded sense of self endures.</p>
<p></p>
<h4>Adventure creates markers across the life course.</h4>
<p>Our experiences do not sit in isolation. They become part of how we make sense of our lives over time. We construct a narrative of who we are, linking past experiences with present choices and future possibilities. Within that narrative, certain moments stand out. They are revisited, retold, and used as reference points.</p>
<p>Periods of adventure often have this quality. A decision to step away, a move into an unfamiliar context, a break from a defined path all become the moments that stand out. They become more than memories; they become anchors in the story we tell about ourselves.</p>
<p>Adventures often mark a passage. They’re a point of transition from one version of ourselves to another and mark the moment when we cannot fully return to our former self. The Greek philosopher Heraclitus observed that we move through time as through a river: Step out of the water, and when we return, it is a river with different waters and a different flow.</p>
<p>I was reminded of this when I returned to the ancient city of Petra, in Jordan, many years after first visiting it as a young traveler. The place was recognizable, but I was not quite the same. The first time, I slept on the desert floor, wandered with little knowledge, and was open to everything. The second time, I arrived more informed and more comfortable. The experience was richer in some ways, but it did not replace the intensity of that first encounter.</p>
<p>Years later, we return to such moments, not simply recalling what happened but using them to understand what we are capable of and what matters to us. They connect earlier and later versions of our self, allowing change to feel less like disruption and more like something we have already lived through.</p>
<h3>The Organizational Paradox</h3>
<p>What is striking is how unevenly these adventures are distributed. We recognize — and often encourage — adventure early in life, as part of education or early career exploration. But as our careers progress, adventure becomes harder to justify, harder to accommodate, and easier to defer. We encourage adventure at 20. We discourage it at 40 and 50.</p>
<p>This pattern reflects the structure of the traditional three-stage life: a period of education, followed by continuous full-time work and then retirement. Within this model, exploration is largely confined to the beginning and the end. The middle is defined by continuity, progression, and increasing specialization. </p>
<p>Organizations have been built around this model. They optimize for efficiency, reward consistency, and rely on predictable performance. Roles become more defined and expectations more explicit, making periods of discontinuity feel costly — for both individuals and employers.</p>
<p>The result is a paradox. The very experiences that most expand perspective and capability are the ones most likely to disappear, just as longer working lives make them more necessary. </p>
<p>As I’ve explored in my research and writing for the past few decades, people’s working lives now regularly extend into their 60s and 70s — not just among those who need to work but also among those who want to. As that happens, the three-stage structure comes under strain: It becomes harder to sustain a model based on decades of continuous, unbroken work.</p>
<p>Emerging in its place is a <a href="https://sloanreview.mit.edu/article/the-corporate-implications-of-longer-lives/">multistage life</a> — one with more transitions, more variety, and more choice. In this model, exploration and adventure are no longer confined to the edges of life. They can now occur at multiple points: between roles, across careers, or within them.</p>
<p>We can see this shift occurring. Sabbaticals, <a href="https://www.indeed.com/career-advice/career-development/what-is-secondment" target="_blank" rel="noopener noreferrer">secondments</a> (temporarily working a different job at the same company), portfolio careers (combining multiple jobs, income streams, and side gigs), and midlife transitions are all becoming more visible. What matters is not the specific form of this shift but the principle: that long careers require moments of discontinuity, not just continuity.</p>
<h3>Make Space for Adventure</h3>
<p>It is important to acknowledge that not all working lives offer the same scope for these experiences. In my own case, an academic career provided a degree of flexibility — periods of time between roles, or space to step away — that made some of my adventures possible. Many other people work within structures that offer far less room for breaks or risk-taking.</p>
<p>Making time for new experiences is not simply a matter of individual choice. It reflects how working lives have traditionally been organized.</p>
<p>So for organizations, the challenge is to legitimize exploration across the life course — to create space for movement without penalizing those who step away.</p>
<p></p>
<p>For individuals, the challenge is different but equally real. As careers progress, time becomes more constrained, responsibilities accumulate, and stepping away feels harder to justify. Adventure is postponed — until there is more time, more certainty, or fewer obligations. But in a working life, that moment rarely arrives.</p>
<p></p>
<p>Making space for adventure requires a shift in how we think about our lives and careers. We have become accustomed to <a href="https://sloanreview.mit.edu/article/building-mastery-what-leaders-do-that-helps-or-impedes/">valuing mastery and productivity</a>, and adventure is often treated as optional — something peripheral rather than essential.</p>
<p>In longer lives, that assumption no longer holds. Adventure is not simply a break from work. It is one of the threads that keeps a life — and a career — alive. It is what allows a career to remain open, adaptive, and capable of renewal over decades. The risk is not that people take too many detours but too few. </p>
<p>What would your 80-year-old self ask of you? Yes — walk many steps a day, eat sensibly, sleep well. But also: Give me adventures. Give me moments I can remember, stories I can tell, conversations I can have with my grandchildren. Carve out time for extended travel or cultural immersion. Volunteer in unfamiliar contexts, in roles below your capabilities — or much higher. Ask to try a new task at work. Plan a weekend trip to someplace you’ve never been. Undertake a physically or creatively demanding challenge. Try out a self you’ve always dreamed of being. Some of these adventures are dramatic. Others are deeply personal.</p>
<p>In long working lives, the question is not only how long we can continue but also how often we are willing to step beyond what we know.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/why-adventure-matters-in-long-working-lives/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>How to Slay the Chaos Dragon</title>
				<link>https://sloanreview.mit.edu/article/how-to-slay-the-chaos-dragon/</link>
				<comments>https://sloanreview.mit.edu/article/how-to-slay-the-chaos-dragon/#respond</comments>
				<pubDate>Thu, 23 Apr 2026 11:00:12 +0000</pubDate>
				<dc:creator><![CDATA[Melissa Swift. <p><a href="https://www.linkedin.com/in/swiftmelissa/" target="_blank">Melissa Swift</a> is the founder and CEO of organizational consulting firm Anthrome Insight. She is also the author of <cite>Work Here Now: Think Like a Human and Build a Powerhouse Workplace</cite> (Wiley, 2023) and the forthcoming <cite>Effective: How to Do Great Work in a Fast-Changing World</cite> (Wiley, 2026).</p>
]]></dc:creator>

						<category><![CDATA[Change Management]]></category>
		<category><![CDATA[Human Behavior]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[Managerial Psychology]]></category>
		<category><![CDATA[Strategic Leadership]]></category>
		<category><![CDATA[Team Dynamics]]></category>
		<category><![CDATA[Culture]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Leading Change]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Getty Images In my first job out of college, I had a frenetic boss whom we’ll call Don. Don was all over the place in a quite literal sense: running from desk to desk across the office, talking to people here and there, dashing in and out for cigarettes all day. [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Swift-1290x860-1.jpg" alt="" class="wp-image-126772"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Getty Images</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">In my first job out of college,</span> I had a frenetic boss whom we’ll call Don. Don was all over the place in a quite literal sense: running from desk to desk across the office, talking to people here and there, dashing in and out for cigarettes all day. At the end of 1998, Don had been late for meetings so often that he announced an initiative called “On Time in ’99!” to kick off in the new year. </p>
<p>He didn’t get the chance to implement it. The company I worked for hired an organizational consultant, who, legend has it, identified Don and the cloud of chaos around him as the root cause of virtually all of the various process failures we were experiencing. </p>
<p>Don was fired.</p>
<p>As an organizational consultant myself today, I’m fascinated by this set of events. I feel bad for Don: It seems unlikely that all of the chaos traced back to him. And indeed, things remained pretty chaotic after he departed. </p>
<p>The goal resonates, though: Minimizing chaos is, in my professional experience, one of the healthiest goals an organization can set. Sadly, in today’s environment, this can seem impossible to leaders. Most organizations deal with both a chaotic external world (featuring wild daily gyrations in everything from geopolitics to weather to technology) and a chaotic internal landscape (featuring the level of shifting priorities that comes with the scale and complexity of so many companies today). If 2026 feels especially chaotic, you’re not wrong.</p>
<p>All hope is not lost, though. Leaders can take steps to help people handle chaos <em>before</em> things go off the rails, or at least before things go off the rails <em>entirely</em>. Let’s take a look at four of them. </p>
<p></p>
<h3>1. Constantly talk to the teams your team works with.</h3>
<p>Poet John Donne wrote, “No man is an island,” and no team is, either. You don’t have to be a big, messy matrix organization to operate in a teams-of-teams manner. Even relatively small companies feature incredible amounts of interdependency between groups. </p>
<p>This phenomenon causes chaos by generating competing priorities. It also exacerbates the chaos that comes in from the outside by multiplying and fragmenting the organization’s strategies to respond to any given event. Imagine a football team with multiple huddles: How would you ever pull off a well-run play?</p>
<p>The sanest organizations I’ve done consulting work with, and the healthiest leadership teams I’ve been a part of myself, all addressed this issue in the same fairly informal way: Leaders got to know who their teams were teaming with, and they stayed in contact with those teams’ leaders. </p>
<p>This may sound straightforward, but once you get to several-hundred-person chunks of organizations, the permutations of connections between teams pile up quickly. So leaders are challenged not to map every interaction for their team but to understand the “mosts”: most frequent, most strategic, and most charged team-to-team interactions.</p>
<p></p>
<p>Once leaders engage in a regular, everyday dialogue about the work their teams are doing together, chaos levels begin to modulate. Multiple leaders can work together to collectively shift people’s priorities to what the organization really needs. They can also minimize collisions between people doing the same or conflicting work. </p>
<p>Often, organizations attempt an emergency version of this as a crisis erupts, only to discover that the leaders they’re hurriedly pulling together have been working in such separate lanes that there’s an incredible amount of context that has to be shared and trust that has to be built before they can mobilize their teams jointly. As the leaders play catch-up, chaos mounts. Leaders who are already in a live conversation with one another have a tremendous edge in this scenario. </p>
<p></p>
<h3>2. Create and protect space in meetings for impromptu dialogue.</h3>
<p>In a prior role, pre-entrepreneurship, I was hired with the explicit mandate of soothing the waters of a chaotic team. I came in and immediately looked for levers I could hit to make things even a bit more predictable. </p>
<p>A clue came to me in the strangest place: I was asked to introduce myself during a recurring town hall meeting and was given such a short time slot in such a packed agenda that my remarks culminated with my effectively getting played off the stage like a verbose Oscar winner. To try to recover from the bizarre experience of getting Zoom-silenced by the group, I did a bit of an emotional audit. What I was feeling was pretty simple: I had things I needed to say, and I had not gotten the full chance to say them.</p>
<p>This was a lousy feeling — but indicative of a structural problem. The group had a complex array of meetings, matrixed by employees’ levels within the organization, and the meeting agendas were completely, almost compulsively, full. If a matter came up that needed to be discussed, additional meetings had to be frantically parachuted into already-packed calendars. This meant that even mildly chaotic events (say, a client being unhappy with a deliverable, which is a thing that happens frequently in consulting) turned into full crises quickly as discussions fragmented across tiny chunks of time within the subgroups that were available. </p>
<p>So I took some advice I frequently give clients and audiences: I killed a bunch of standing meetings. And I loosened the agendas for the gatherings that did remain, creating space for whatever was happening at that moment, for silence so that people could think, or for — brace yourself — the meeting to end early if we didn’t need the full time slot. </p>
<p>Did this step banish all chaos? No, of course not. Did our ability to handle chaos improve? Yes, it did. On average, we were able to address issues more quickly with more of the right people in the room — and we were able to lessen silent emotional burdens among the team by bringing issues up quickly and publicly — because we had already designated time to do so. Chaos was still there, but our resilience had increased thanks to having space for discussion. </p>
<p>Reserving space in meetings can feel uncomfortable when you first implement it. Just as nature abhors a vacuum, corporate environments hate blank space in meetings or on calendars. It may be tempting to delete that agenda bullet that says “AOB” (any other business). But resist the urge to pack every hour. When you need that extra five minutes, 10 minutes, 20 minutes because something has come up, it will feel like absolute magic to have time to talk about what you actually need to talk about.</p>
<h3>3. Explicitly guard against the bad behavior that chaos can cover.</h3>
<p>I discovered something disturbing when doing research for my forthcoming book, <cite>Effective: How to Do Great Work in a Fast-Changing World</cite>. <a href="https://doi.org/10.1177/0950017009344875" target="_blank" rel="noopener noreferrer">Academic research</a> explicitly links chaotic environments with every bad workplace behavior except for sexual harassment: Examples include bullying by supervisors, conflict between employees and customers, and infighting by colleagues. To fans of postapocalyptic science fiction like me, this tracks: After the asteroid hits Earth, or the zombies come out, many people seem to start acting like real jerks.</p>
<p>This raises a fascinating question: Are we making the experience of chaos worse than it needs to be by simply tolerating unpleasant behavior in chaotic times? After all, in the workplace, we often normalize crummy conduct in these sorts of moments. Results are suddenly bad? Of course the CEO is yelling. An unexpected deliverable is due ASAP? Of course the team is clashing. Conditions on the ground are wild? Of course folks are bickering with customers. All of this, of course, makes the chaos worse and the underlying issues less surmountable, but many organizations have come to accept it as a normal way of working in tough moments.</p>
<p>We shouldn’t.</p>
<p>A certain amount of back-and-forth is healthy and actually an <a href="https://medium.com/the-liberators/why-psychological-safety-improves-the-effectiveness-of-your-team-7592d76f3c9b" target="_blank" rel="noopener noreferrer">indicator of psychological safety</a>. But in chaotic moments, leaders must be vigilant about recognizing when strong statements have become bullying, when push and pull about roles and responsibilities have become toxic infighting, and when boundary-setting with customers has become too fraught. </p>
<p>SHRM offers a <a href="https://www.shrm.org/content/dam/en/shrm/topics-tools/news/employee-relations/Bullying.pdf" target="_blank" rel="noopener noreferrer">definition of bullying</a> that can be helpful in addressing any category of bad behavior: “Workplace bullying refers to repeated, unreasonable actions of individuals (or a group) directed toward an employee (or a group of employees), which are intended to intimidate, degrade, humiliate, or undermine; or which create a risk to the health or safety of the employee(s).”</p>
<p>This definition gives leaders some good questions to ask themselves when they witness heated moments in chaotic times. “Repeated” alone is a good test. Anyone can have a lousy day and spout off once; when the behavior happens again and again as the team wrestles with a crisis, it’s time to step in. “Unreasonable” also categorizes actions in a helpful way. Are people asking for, or criticizing others for not providing, things that can reasonably be provided or implemented? Or has panic tipped them over into overreaction? (“I expect you to be at your desk all night until this is finished!”)</p>
<p></p>
<p>Once you’ve identified truly over-the-line behavior, name the problem — contextualized to the chaotic situation to remove excuses: “I know this supply chain shortage is taxing us all, but the way you spoke to Sally was degrading and unhelpful.” Make it explicit that chaos does not issue everyone a blank check to indulge their worst impulses. </p>
<p>While chaos and bad behavior unfortunately often travel together, that’s not a coupling that sane leaders need to accept. </p>
<h3>4. It’s not all bad: Reap the <em>upsides</em> of chaos.</h3>
<p>You may have read the heading above and done a bit of a double take. “The upsides, you say? But I loathe chaos.” </p>
<p>Me too, honestly. But that’s why I force myself to remember a few things:</p>
<p><strong>Chaos accelerates personal development.</strong> It’s incredibly frustrating to deal with a million things happening at once in unpredictable ways. But some of that frustration is the feeling of your brain being challenged — and challenge equals growth. Many executives I’ve worked with have cited chaotic times as the crucible for the growth of some of their strongest skills. The chaos didn’t feel good at the time, but they were learning at exponential speed. </p>
<p></p>
<p><strong>Chaos can shake up the corporate chessboard in helpful ways.</strong> One C-suite executive (and certified chaos hater) sheepishly admitted to me the other day that “every decent opportunity I’ve gotten has been because things were in disarray.” Again, we may not love what Ashley Goodall so memorably called “<a href="https://ashleygoodall.com/excerpt" target="_blank" rel="noopener noreferrer">life in the blender</a>,” but the most chaotic events do sometimes tee up intriguing opportunities (or even new roles). Especially in an era where people increasingly value horizontal or diagonal growth — building lateral skills through different kinds of exposure, not just marching into more senior roles in a linear fashion — there’s definitely an upside to the corporate ladder getting a good shake now and then.</p>
<p><strong>Chaos can give us all the opportunity for a cleansing laugh.</strong> Think about some of your most memorable moments with the teams you’ve worked with. I bet at least one or two are downright silly. When things get chaotic and people choose to see comedy and not tragedy, we can all have some distinctly human fun together. The randomness of the universe is not just frustrating and annoying and exhausting. It can be goofy, too. </p>
<p>The reality of life at any organization is that you can’t fully shield your team from chaos, and per that last strategy, you <em>shouldn’t</em>, either. With the right team-to-team communication, the right space to have the right conversations, and the right protection from bad behavior, your team can grow, get new opportunities, and even chuckle together during chaos. </p>
<p>In Greek mythology, <a href="https://www.britannica.com/topic/Chaos-ancient-Greek-religion" target="_blank" rel="noopener noreferrer">chaos is defined</a> as simply the time before the world was formed. Under that framework, chaos itself is almost immaterial; it’s what comes after that matters. And leaders: That part is what you choose to make of it. </p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/how-to-slay-the-chaos-dragon/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Why Business Leaders Need to Champion Democracy</title>
				<link>https://sloanreview.mit.edu/article/why-business-leaders-need-to-champion-democracy/</link>
				<comments>https://sloanreview.mit.edu/article/why-business-leaders-need-to-champion-democracy/#respond</comments>
				<pubDate>Wed, 22 Apr 2026 11:00:04 +0000</pubDate>
				<dc:creator><![CDATA[Julie Battilana, Lakshmi Ramarajan, Matthew Lee, and Vincent Pons. <p><a href="https://sici.hks.harvard.edu/person/julie-battilana/" target="_blank" rel="noopener noreferrer">Julie Battilana</a> is the Alan L. Gleitsman Professor of Social Innovation at the Harvard Kennedy School of Government and the Joseph C. Wilson Professor of Business Administration at Harvard Business School. <a href="https://www.hbs.edu/faculty/Pages/profile.aspx?facId=496799" target="_blank" rel="noopener noreferrer">Lakshmi Ramarajan</a> is the Diane Doerge Wilson Professor of Business Administration at Harvard Business School. <a href="https://www.hks.harvard.edu/faculty/matthew-lee" target="_blank" rel="noopener noreferrer">Matthew Lee</a> is an associate professor of public policy and management at the Harvard Kennedy School. <a href="https://www.vincentpons.org/" target="_blank" rel="noopener noreferrer">Vincent Pons</a> is the Byron Wien Professor of Business Administration at Harvard Business School.</p>
]]></dc:creator>

						<category><![CDATA[Business Risk]]></category>
		<category><![CDATA[Corporate Leadership]]></category>
		<category><![CDATA[Human Rights]]></category>
		<category><![CDATA[Leadership Vision]]></category>
		<category><![CDATA[Politics]]></category>
		<category><![CDATA[Social Justice]]></category>
		<category><![CDATA[Corporate Social Responsibility]]></category>
		<category><![CDATA[Crisis Management]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Social Responsibility]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Getty Images Democracy is in decline across the world. More countries are experiencing erosion of political rights and civil liberties than gains, according to Freedom House. As of 2025, 92 countries, representing 74% of the world’s population, were classified as autocracies by the V-Dem Institute. Democratic backsliding is a primary concern [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Battilana-1290x860-1.jpg" alt="" class="wp-image-126731"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Getty Images</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">Democracy is in decline across the world.</span> More countries are experiencing <a href="https://freedomhouse.org/report/freedom-world/2026/growing-shadow-autocracy" target="_blank" rel="noopener noreferrer">erosion of political rights and civil liberties</a> than gains, according to Freedom House. As of 2025, 92 countries, representing 74% of the world’s population, were <a href="https://www.v-dem.net/documents/75/V-Dem_Institute_Democracy_Report_2026_lowres.pdf" target="_blank" rel="noopener noreferrer">classified as autocracies</a> by the V-Dem Institute. </p>
<p>Democratic backsliding is a primary concern for business leaders, who largely agree on the importance of strong democratic institutions. In a <a href="https://www.businessanddemocracy.org/research/business-leaders-and-consumers-220519" target="_blank" rel="noopener noreferrer">2022 survey</a> by Morning Consult and the Business and Democracy Initiative, 96% of executives said a well-functioning democracy is important to a strong economy, and 75% said it mostly helps their business. Consumer attitudes point in the same direction: In a <a href="https://www.ppsi.org/insights/new-survey-democracy" target="_blank" rel="noopener noreferrer">2024 survey</a> by Morning Consult and the Public Private Strategies Institute, 76% of consumers said they believe that businesses should help ensure safe and fair elections, and 72% supported businesses speaking out against threats to democracy. </p>
<p>Despite this widely shared view of the importance of democracy, many leaders we’ve spoken with, both in the U.S. and around the world, have said that they’re unsure what they can do to counter the global rise of authoritarianism. Some fear backlash or would prefer to avoid what is often framed as a partisan issue. Others see democracy as the domain of politicians and doubt that the voices of business leaders can make a difference. </p>
<p>We believe instead that business leaders are uniquely positioned to help contain democratic backsliding. Building on our research on power, democracy, and change in organizations and society, we argue that business leaders can play an essential role in the protection and strengthening of democracy. Supporting democracy is not only a civic obligation; it is also a strategic business imperative. </p>
<p></p>
<h3>The Business Case for Democracy</h3>
<p>Democracy provides businesses with two essential ingredients for success: clear rules and freedom.</p>
<p>Democracy establishes clear rules through legal frameworks, transparent regulatory processes, and consistent enforcement mechanisms. It helps ensure stable property rights, reliable contract enforcement, and anti-corruption safeguards that enable long-term investment and planning. These systems provide the kind of predictability that markets need in order to function efficiently. This does not mean that rules are always followed or enforced, but they are generally known and shape behavior in predictable ways.</p>
<p>Democracy also protects freedom. It is essential not just for political freedoms, like free expression and assembly, but also for the economic freedoms that businesses need to innovate and compete within the rules that have been democratically determined. Independent courts, media organizations, universities, and civil society organizations create checks and balances that guard businesses from discriminatory treatment, state overreach, and cronyism.</p>
<p>Together, these ingredients create a system in which people have “<a href="https://doi.org/10.4324/9780203486214" target="_blank" rel="noopener noreferrer">power with</a>” one another, rather than a single party or person holding concentrated “power over” others.</p>
<p></p>
<p>The economic dividends of democracy are numerous and well documented. Research has shown that democratization increases GDP per capita by about <a href="https://doi.org/10.1086/700936" target="_blank">20% over time</a> and <a href="https://doi.org/10.1111/j.1468-0343.2005.00145.x" target="_blank">limits corruption</a>, while <a href="https://www.brookings.edu/articles/democracy-is-good-for-the-economy-can-business-defend-it/" target="_blank">democratic backsliding</a> leads to economic stagnation, policy instability, cronyism, brain drain, and violence. Democratic countries <a href="https://doi.org/10.1086/700936" target="_blank">make larger investments</a> in capital, education, and health and adopt more economic reforms. <a href="https://academic.oup.com/restud/article-abstract/92/5/3306/7899604" target="_blank" rel="noopener noreferrer">Electoral turnovers</a> — in which the incumbent party is defeated and a new party comes to power — are a key component of healthy democracies and also improve countries’ economic performance. Democratically elected governments are strongly incentivized to support businesses that will grow and serve the needs of their citizens. </p>
<p>Authoritarian regimes, in contrast, treat business as a means to achieve their own ends. State-aligned companies are seen as showcases of regime success and are forced to prioritize political loyalty over market performance. Authoritarian governments may require companies to propagate state narratives, enforce surveillance in their workplaces and on digital platforms, or channel capital to favored industries and groups of people. Instead of remaining independent, businesses are pressured to serve as extensions of the state’s power, whether by funding patronage networks, censoring inconvenient truths, or producing goods and services that reinforce regime goals.  </p>
<p>The weakening of democracy also spills over into the workplace, threatening vitality and performance. Threats to safety and free expression breed distrust that stifles new ideas, creativity, and innovation. Talented employees may begin to look elsewhere to build their careers and lives. </p>
<p>Today, these threats are sharpened by the rise of artificial intelligence, which is already reshaping both business and democratic governance. Historical examples attest to the way that authoritarian regimes have consistently weaponized technologies to consolidate power: The Nazi Party pioneered propaganda films and radio broadcasts; the Soviet Union exploited television and telecommunications for propaganda and surveillance. Today’s autocrats are already deploying AI for mass surveillance, disinformation campaigns, and social control at unprecedented scale. If left unchecked, AI will contribute to the concentration of power in the hands of a few government officials and company leaders, undermining free expression, destabilizing trust in information systems, and ultimately further weakening democracy. </p>
<p></p>
<h3>What Business Leaders Can — and Must — Do</h3>
<p>Business leaders occupy a unique and powerful role in modern democracies. They command substantial resources and influence over their employees, customers, investors, and policy makers. Consequently, they have both the power and responsibility to protect the institutional conditions that have supported decades of economic vitality. </p>
<p>Defending democracy should not be confused with advocating for any particular political party or ideology. It is about safeguarding and enhancing the institutional conditions that protect freedom, including the freedom of businesses to operate independently. </p>
<p>Our research on power and change (including Julie’s book <a href="https://www.powerforallbook.com/" target="_blank" rel="noopener noreferrer"><em>Power, for All</em></a>) shows that large-scale resistance to democratic backsliding occurs through collective action among broad coalitions, not through isolated individual efforts. Rather than leaving their peers to make solo statements or take action on their own, companies and their leaders must shift to acting together. Coalition-based approaches increase the perceived legitimacy of collective action and amplify its impact while also reducing risks to individual organizations and their leaders.</p>
<p>We see four critical domains in which businesses, working collectively, can strengthen democracy and safeguard the conditions for long-term business success. Importantly, all of these domains cross ideological and partisan boundaries and promote democratic practices rather than specific policy outcomes. </p>
<h4>1. Defend democratic institutions and processes.</h4>
<p>Business leaders should publicly support the foundational elements of democracy: free and fair elections and an independent judiciary. Around elections, this also means taking concrete action to remove barriers to employees’ civic participation. For instance, as of 2024, over 2,000 U.S. companies were part of the nonpartisan <a href="https://www.maketimetovote.org/" target="_blank" rel="noopener noreferrer">Time To Vote</a> movement, pledging to ensure that their employees have a work schedule that allows them to vote in U.S. elections. Some companies gave employees additional time off to become poll workers or to help register voters at public events. A <a href="https://ash.harvard.edu/resources/civic-responsibility-the-power-of-companies-to-increase-voter-turnout/" target="_blank" rel="noopener noreferrer">2019 study</a> found that corporate civic responsibility programs “were well received by employees, consumers, and shareholders,” and the companies that sponsored them reported higher employee and consumer satisfaction.</p>
<p>To reinforce the democratic infrastructure of independent courts, collective business action can also take the form of joint public statements. Resisting violations of the rule of law and government overreach against one’s organization, and speaking out when such overreach affects others, signals to employees, customers, and other partners that democracy is a shared responsibility.</p>
<p></p>
<p>Businesses involved in the development and deployment of AI technologies have a particularly important role to play. Like earlier major technological advances, AI has the potential to accelerate authoritarian consolidation. Businesses must commit to being transparent about how AI models are trained and deployed, and to collaborating with governments, universities, and civil society to ensure that AI accountability systems serve rather than undermine the public good.</p>
<p>The focus of all these efforts should not be on supporting particular political parties but on ensuring that healthy, independent civil society institutions <a href="https://www.nytimes.com/2026/04/07/opinion/political-power-citizens-assemblies.html" target="_blank" rel="noopener noreferrer">in which citizens exercise real voice</a> prevent the concentration and abuse of state power. This work benefits business by maintaining the stable, rules-based environment companies need to thrive.</p>
<h4>2. Support independent civil society organizations without exercising undue influence.</h4>
<p>Businesses can help support independent journalism, academia, and civil society organizations. However, this support must come with strict safeguards to protect the independence of these organizations. To avoid undue influence, businesses can collaborate to fund these institutions through mechanisms that ensure editorial and operational independence. These mechanisms include third-party intermediation and contributions to pooled funding, which have both been used to increase the impact of corporate support for humanitarian causes. </p>
<p>In addition, standards for transparency around funding, along with disclosures of conflicts of interest and intended uses, are necessary. By publicly affirming the autonomy of the organizations they support and committing to respect that autonomy in the future, businesses reinforce the principle that a thriving democracy depends on independent civil society organizations — even when those organizations challenge businesses’ own interests. </p>
<h4>3. Limit forms of political influence that are not aligned with democratic principles.</h4>
<p>While businesses have a role to play in supporting civic participation, democratic processes, and independent civil society organizations, they should not use their financial power to shape electoral outcomes, secure special treatment, or skew public decision-making to favor private interests. There is a critical difference between supporting democratic processes and using money to impose election or policy outcomes: The first helps protect democracy, while the second risks distorting it.</p>
<p>Lobbying and campaign spending should therefore be transparent, restrained, and aligned with democratic principles. Excessive corporate influence over election outcomes and government decision-making <a href="https://doi.org/10.1146/annurev-polisci-010814-104523" target="_blank" rel="noopener noreferrer">weakens democracy</a>. As President Abraham Lincoln declared in 1863, the United States’ “new birth of freedom” would come from a “government of the people, by the people, for the people.” When private interests <a href="https://doi.org/10.1017/S1537592714001595" target="_blank" rel="noopener noreferrer">exert disproportionate influence</a> over public institutions, democratic foundations are weakened. </p>
<p></p>
<p>In contrast, if businesses and policy makers jointly commit to making the relationship between business and government more visible and constrained, businesses can help support a system that rewards value creation and organizational performance over political spending and insider connections. In this context, industrywide agreements and democratic financing reforms, including strict donation caps, can help preserve democracy while reducing incentives for companies to engage in political spending arms races. </p>
<h4>4. Foster democratic practices within organizations themselves.</h4>
<p>Last, businesses can also help strengthen democracy by engaging in democratic practices <a href="https://doi.org/10.1177/26317877221084714" target="_blank" rel="noopener noreferrer">inside their own organizations</a>. When organizations include employees in governance and use more participatory decision-making, they model democratic processes internally. Research has found that these <a href="https://doi.org/10.1177/00018392251322430" target="_blank" rel="noopener noreferrer">internal practices can create spillover effects</a> beyond the workplace. Promoting <a href="https://sloanreview.mit.edu/article/when-employees-speak-up-companies-win/">employee voice</a> and participation in the workplace can enhance morale while also helping to cultivate habits and norms that <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5357023" target="_blank" rel="noopener noreferrer">reinforce employees’ civic engagement</a> as citizens. </p>
<p>Adopting a participatory form of governance can also strengthen the societal foundations for innovation and long-term prosperity. This was highlighted in a <a href="https://reportondemocracyatwork.org/en/the-report/" target="_blank" rel="noopener noreferrer">February 2026 report</a> by the International High-Level Expert Committee on Democracy at Work, a group (of which Julie is a member) that was tasked with advising the Spanish government on how to implement an article of Spain’s constitution. That article calls for public authorities to promote worker participation in their employers’ operational and strategic decisions, and to facilitate workers’ access to company ownership. Empowering workers in this way is especially important today because AI systems need to be developed and deployed in ways that benefit not just companies but their workers and society overall. </p>
<p></p>
<p></p>
<p>Democracy is both a moral cause and a strategic imperative. Without the democratic rule of law, checks on power, and independent institutions, the business environment becomes unpredictable and precarious. Companies cannot afford to build their futures on such an unstable foundation.</p>
<p>The time to act is now. The choices business leaders make today will determine not only the future of their companies but also that of democracy itself. At a time when democracy is under threat, business leaders across the political spectrum have an opportunity to act collectively to protect and strengthen the guardrails that underpin both democracy and long-term prosperity. As central players in the economy, these leaders must recognize both their responsibility and their stake in stopping democratic decline, and work closely with partners across sectors to champion democracy with conviction and courage.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/why-business-leaders-need-to-champion-democracy/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Industrial AI for the Physical World: Siemens’s Peter Koerte</title>
				<link>https://sloanreview.mit.edu/audio/industrial-ai-for-the-physical-world-siemenss-peter-koerte/</link>
				<comments>https://sloanreview.mit.edu/audio/industrial-ai-for-the-physical-world-siemenss-peter-koerte/#respond</comments>
				<pubDate>Tue, 21 Apr 2026 11:00:40 +0000</pubDate>
				<dc:creator><![CDATA[Sam Ransbotham. <p><cite>Me, Myself, and AI</cite> is a podcast produced by <cite>MIT Sloan Management Review</cite> and hosted by Sam Ransbotham. It is engineered by David Lishansky and produced by Allison Ryder.</p>
<p><a href="https://sloanreview.mit.edu/sam-ransbotham/">Sam Ransbotham</a> is a professor in the information systems department at the Carroll School of Management at Boston College, as well as guest editor for <cite>MIT Sloan Management Review</cite>’s Artificial Intelligence and Business Strategy Big Ideas initiative.</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Data Strategy]]></category>
		<category><![CDATA[Labor]]></category>
		<category><![CDATA[Rail Transportation Systems]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Quality & Service]]></category>
		<category><![CDATA[Skills & Learning]]></category>
		<category><![CDATA[Technology Implementation]]></category>

				<description><![CDATA[In this episode of the Me, Myself, and AI podcast, host Sam Ransbotham talks with Peter Koerte, a member of the managing board and chief strategy and technology officer of Siemens, about how industrial AI is quietly transforming the infrastructure that powers everyday life. While consumer AI grabs headlines, Peter explains how artificial intelligence is [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<p>In this episode of the <cite>Me, Myself, and AI</cite> podcast, host Sam Ransbotham talks with Peter Koerte, a member of the managing board and chief strategy and technology officer of Siemens, about how industrial AI is quietly transforming the infrastructure that powers everyday life. While consumer AI grabs headlines, Peter explains how artificial intelligence is improving factories, transportation systems, energy grids, and buildings behind the scenes. The conversation explores what makes industrial AI different — from the need for near-perfect accuracy to the challenge of working with proprietary, domain-specific data.</p>
<p>Peter shares examples like predicting train door failures days in advance, optimizing building energy use, and accelerating complex engineering simulations. Peter and Sam also discuss the importance of domain expertise, the value of data-sharing partnerships across companies, and why transformation is as much about people and workflows as it is about technology.</p>
<aside class="callout-info">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/MMAI-S13-E4-Koerte-Siemens-headshot-600.jpg" alt="Peter Koerte"></p>
<h4>Peter Koerte, Siemens</h4>
<p>As a member of the managing board, chief strategy officer, and chief technology officer of Siemens, Peter Koerte is responsible for developing the company’s strategy and leading its worldwide research and development activities. His current priorities include accelerating development of innovative sustainable technologies and continuing development of the Siemens Xcelerator business platform.</p>
<p>Koerte previously headed Digital Health, a Siemens Healthineers unit that develops AI-supported diagnostic procedures for health care. He joined the corporate strategy side of the company in 2007 after working for the Boston Consulting Group. Koerte holds a master’s degree in business and engineering from the Karlsruhe Institute of Technology and a doctorate in strategy and international management from the WHU-Otto Beisheim School of Management. He also completed the General Management Program at Harvard Business School.</p>
</aside>
<p>Subscribe to <cite>Me, Myself, and AI</cite> on <a href="https://podcasts.apple.com/us/podcast/me-myself-and-ai/id1533115958" target="_blank" rel="noopener">Apple Podcasts</a> or <a href="https://open.spotify.com/show/7ysPBcYtOPVgI6W5an6lup" target="_blank" rel="noopener">Spotify</a>.</p>
<h4>Transcript</h4>
<p><strong>Allison Ryder:</strong> Consumer AI makes headlines daily, but industrial AI increasingly enhances and enables nearly everything we do. Learn how one multinational company approaches data management and deployments at scale on today’s episode.</p>
<p><strong>Peter Koerte:</strong> I’m Peter Koerte from Siemens, and you’re listening to <cite>Me, Myself, and AI</cite>.</p>
<p><strong>Sam Ransbotham:</strong> Welcome to <cite>Me, Myself, and AI</cite>, a podcast from <cite>MIT Sloan Management Review</cite> exploring the future of artificial intelligence. I’m Sam Ransbotham, professor of analytics at Boston College. I’ve been researching data, analytics, and AI at <cite>MIT SMR</cite> since 2014, with research articles, annual industry reports, case studies, and now 13 seasons of podcast episodes. In each episode, corporate leaders, cutting-edge researchers, and AI policy makers join us to break down what separates AI hype from AI success.</p>
<p>Today we’re talking with Peter Koerte, chief technology officer at Siemens. Siemens is a German multinational technology company focused on industrial automation, smart infrastructure, and mobility systems, all increasingly important topics. We’ll discuss industrial AI, what it means for the workforce, and what the implications are for data sharing across industry. Peter, welcome.</p>
<p><strong>Peter Koerte:</strong> Thank you, Sam, for having me.</p>
<p><strong>Sam Ransbotham:</strong> Let’s start at a high level. Some of our listeners may not be familiar with Siemens. Can you give us a brief overview?</p>
<p><strong>Peter Koerte:</strong> Sure. Siemens [has been] out there [for almost] 180 years. What we say is, “We transform the everyday of everyone.” What that means is if you think about the chair right now that you’re sitting on, the clothes that you’re wearing, the water that you’re drinking, the electricity that you’re using, the transportation systems such as trains that you’re using every day, all of that was enabled by Siemens. When it [comes] down to the way we design these things, we produce them, how we actually make sure electricity is safe and distributed, how transportation runs smoothly and safely, all of that is coming through Siemens. </p>
<p>As a consumer, usually you don’t see us, but in the industrial world, Siemens is a very, very big brand name, and we are well recognized for high quality but also for the great solutions we bring and the simplicity to our customers. </p>
<p><strong>Sam Ransbotham:</strong> I think that’s a great example. Because so much of the world we rely on, we just don’t pay attention to. We don’t notice it unless it isn’t working for some reason. You talked about industrial AI. What exactly is the difference between industrial AI and consumer AI that most people would be familiar with? </p>
<p><strong>Peter Koerte:</strong> The big difference is today, of course, consumer AI is making the headlines, while we think industrial AI is quietly but profoundly changing the physical infrastructure, the physical world that we know of. </p>
<p>So think about, for example, the building that you’re sitting in right now. That building has, of course, some climate control. About 30% to 40% of all the electricity that we’re using today goes into buildings. What we’re saying is, “What if we actually can take all the sensors that we have in these buildings, then develop an AI that automatically learns every minute — or 15 minutes, in that case — and then automatically adjusts all the temperature settings, all the lighting settings, and everything in order to cut costs and energy?” That’s exactly what we’re doing. </p>
<p>We just launched an application that saves 30% of your energy bill and therefore reduces greenhouse gases by 30% just by doing that. It runs autonomously in the background. This is what we do for grids. We do this for factories. We do this for machines. We do this for, of course, buildings, and we do this for trains. So everything in the real world, we are making it more efficient, simply by what we say: “Connecting the real world and the digital world.” We try to optimize and make things better. </p>
<p><strong>Sam Ransbotham:</strong> That makes a lot of sense. I mean, I’m sitting here on a university campus. It’s spring break. I guess we are probably heating this place about the same as we would be if it was full of people. I don’t even want to ask. I don’t want to know. </p>
<p><strong>Peter Koerte:</strong> That’s it. </p>
<p><strong>Sam Ransbotham:</strong> Well, I think we’re all familiar with consumer applications, and I think the failures of AI in consumer applications get a lot of attention, you know, with the hallucinations and these sorts of things. Somehow that seems very different if you’re connecting this to the physical world. It’s not just a funny anecdote that goes across the internet when AI screws up. It could have some real-world consequences when you make that connection. How is Siemens thinking about that? </p>
<p><strong>Peter Koerte:</strong> You’re absolutely right. Sam, thank you for saying that. When we compare consumer AI to industrial AI, there [are] three things at the very least that are profoundly different, and the first one you already alluded to is the level of precision and accuracy of those models. Obviously, you don’t need any hallucination when you make recommendations for an engineer to design the next part for, let’s say, your smartphone. Or you certainly don’t need an AI mistake when you think about how to optimize an electricity grid, because that’s critical infrastructure. </p>
<p>So what we need to ensure is the highest level of quality of those models, which, as you can imagine, that’s where we get into 99, 99.9, and so on [for the] percentage of accuracy of the models. And a lot of work goes into that to make sure that these are reliable, safe, and trustworthy. That’s the first part. </p>
<p>The second part is, actually, how do you train these models? Because all of us, we are very familiar with what we call large language models. Now in industry, we’re not necessarily talking about large language models. We’re usually talking about specific data when it comes to — going back to the building example — temperature settings. So we have a lot of time series data. We have construction data. We have engineering data. We have simulation data. This is very different. These are geometries, pictures, vectors, what have you. We have to make these models available in a very, very different way. </p>
<p>The third difference is how do we get that data? Because when we build these models for the physical world, we cannot go on the internet and just download a bunch of data from sensors for your buildings or CAD data or whatever. This is very often even very proprietary data. Customers are only willing to share that data if we are able to express an incremental benefit when they use our model; then they in return [will] share the data with us. So, of course, in your case, [there will be] better energy savings in the building, but, also, for designers, [they’ll experience a] faster time to market because we can get them designed faster and so on. That way of how you actually get to the data is very different. So the language you’re speaking, the accuracy that we need, how we get the data, this is in the industrial world quite different than, of course, what we use in consumer AI every day.</p>
<p><strong>Sam Ransbotham:</strong> That’s pretty fascinating. My naive reaction when you first started talking was, “Oh, what you’re describing is much more structured data,” so I was pretty excited when you were [saying] a lot of this data is temperature data or structured data, but the idiosyncratic nature, and how it applies only to your building or only to your machine and only to your setting, seems very difficult. Tell us a little bit about how you’re getting people to give that data to train machines and how that transfer works between organizations. </p>
<p><strong>Peter Koerte:</strong> It’s a very good question. And you’re absolutely right. Because if you think about it, if I say, “It’s a great day” or “The day is great,” the LLM does understand the meaning that actually it’s a great day. In engineering terms, it’s very different, so therefore, we need to adjust and cater for that. The way this works in the industrial settings is you go, of course, after the industries, step by step and say, “OK, what is the semantics in there?” I alluded to buildings, and in buildings there [are] certain standards, and there [are] certain data formats and what we call ontologies. It’s the semantics. </p>
<p>There we try to get an understanding of what that data actually is. It is more structured, as you say, but as you can imagine, right now you’re sitting in a room with Fahrenheit. I’m sitting in a room with Celsius. Therefore, if you then say, “Well, even this is a temperature setting,” actually it is, but it’s quite different if I’m talking 20 [degrees] and you’re talking 20, right? For me it’s warm, and then for you it’s actually really freezing cold. And that’s something to adjust for. </p>
<p>So it’s not a slam dunk, but understanding these use cases industry by industry is really key. In buildings it’s all about energy consumption. But as I said in engineering, very often it’s time to market. It’s in production. It’s usually quality and throughput. Understanding the data and the key variables that drive that is important, which brings us to a keyword that I want to mention, and that is called <em>domain know-how</em>. Because you can argue, “Well, any data scientist can do that.” It’s true. However, you really need to understand the domain that you’re operating in and the key parameters. </p>
<p>I’ll give you just one very simple example, but I find it fascinating. I’m not sure when you last used a train, but maybe the next time you use a train and I ask you, “What is the most critical component of a train?” probably you would say, “Well, probably the brakes.” That’s true; it’s safety critical. But it turns out it’s the doors. </p>
<p>And why is that? Because if you think of the job to be done of a train, [it] is to move people from A to B. That means it stops. It gets people on and off. You go from station to station to station. So the whole day, indeed, yes, the doors of the train open and shut, and thereby they break down. So the most critical part in that regard for the operations is the door. This is the domain knowledge; you need to understand that part. </p>
<p>Once you understand that, then it’s fascinating, because then what you can do is you can say, “Give me the voltage reading of that motor that drives the door. Look at, of course, the profile of how that motor operates.” Meanwhile, today our models can predict any door failure 10 days prior [to] its occurrence, so we can get the train into the depot, and you can fix it, which means higher uptime, higher reliability, all of it, and better passenger comfort. So these are the examples where we have to combine the domain know-how together with the technical know-how, meaning AI, and that’s how you create customer value, industry by industry. </p>
<p><strong>Sam Ransbotham:</strong> I like that because I can get my mind around that example. Some of the things that I was reading about Siemens were complicated to understand, but that makes a lot of sense. I think everyone has some sort of application where they would like to know ahead of time that something is going to break before it breaks. Because when it does, it’s a mess. </p>
<p>Siemens doesn’t necessarily own the trains, though. So how do you get that data about those voltages into your systems versus your customer who has purchased that train? They have to have some sort of way to send that data. They’ve got to share information with you somehow. Weirdly enough, they would benefit from someone else’s train data for a train they don’t own. How do you manage that infrastructure? </p>
<p><strong>Peter Koerte:</strong> It’s a great question. That’s why I said it’s very different [from] the way you collect data in the industrial world. Let’s stay in the train example. Truth be told, those customers, they simply don’t. They say, “Give me the train and I’m fine, and then I’ll build my own model.” So we have operators like that. Usually, however, they are not the ones that are most successful. </p>
<p>Usually, the ones that are most successful [do consider this]: If you look at the total cost of ownership across the entire life cycle of a train, which is, let’s say, 30 years — in terms of CapEx, the investment is about 10% of the TCO; 90% is operations. So what if I go to you as the OEM? You know your system best. I share the data with you, and you help me to optimize. So you help me to optimize with regards to reliability. That’s the door example. You help me on the efficiency. This really goes down to, of course, the way you operate the train. </p>
<p>Believe it or not, we have AI that helps you to think about how to accelerate and decelerate or brake that train in order to save energy. Energy is one of the biggest operating costs that you have on the train. This is where we then take that data. It’s connected. All of these devices are then connected, of course, reliably and encrypted. And then we have the data, and then we make use out of this data, and we build our own models in that regard. And we do this customer by customer, and very often we do have a data-sharing agreement, so we can use that data. We don’t own the data. That’s important. It’s still our customers’ data, but we can use it and train our models for their purposes.</p>
<p>Then, as you said, we can combine it with other data so everybody gets better in that regard. And that’s exactly what’s happening not just in, let’s say, trains, but you see this in many machines. But it turns out one operator’s data alone is not enough to build your own models, because you need much more data across different settings. And this is where Siemens comes into play, because usually we don’t build machines, and we don’t build all the trains. Usually, we build components that go into them. So we work with car manufacturers. We work with aerospace manufacturers. We’ve worked with life sciences companies. We work with food and beverage companies, and so on, in order to help enable them. And so they come to Siemens and naturally say, “You know what, how can you help our specific industry to become better?”</p>
<p></p>
<p><strong>Sam Ransbotham:</strong> I hadn’t quite thought about it that way, that if one person has insufficient data to train a model by themselves and another person has insufficient data to train a model, but together they do, then the idea of connecting those people together creates value that neither of them could alone. We had <a href="https://sloanreview.mit.edu/audio/big-data-in-agriculture-land-olakes-teddy-bekele/">a guest from Land O’Lakes on a prior episode</a>. They’re sharing information with farmers. Farmers build things; they have a lot of data about their crops, but how they share that data — I feel like there’s a lot of that going on where we are recognizing that idiosyncratic data is more valuable when combined with other data. At the same time, I’m not naive. People don’t want to share stuff. How do you encourage people to do this? </p>
<p><strong>Peter Koerte:</strong> There’s a simple — not an easy but a simple — answer to this, and that is the value. So if I’m not able to translate that and say, “You know what, share the data with me, and then thereby you’re going to improve the availability of your train, to stay with that example, or I [will] improve the efficiency of your building,” then they will not share the data. It’s as simple as that. But if you do, then that’s great. Then they say that’s fine. </p>
<p>Sometimes it’s built into your solution. It’s built into the contract where they say, “Well, we don’t care. It’s fine; you can just use it.” Others are saying, “Hey, I want to also have a negotiated discount,” which is also possible. But the simple answer is you only share your data if you get some value in return. So that’s a little bit like the model. Depending on the industry, it’s slightly different in terms of the kind of value we’re creating, but still there’s some value in return. </p>
<p><strong>Sam Ransbotham:</strong> You’re describing largely a partnership, sort of between customers or with customers, but you’ve also made some recent connections with industry, like your partnership with Nvidia. Can you describe what you’re thinking there? I think the goal there is an industrial operating system. How does that work? What’s the plan there? What’s the thinking? </p>
<p><strong>Peter Koerte:</strong> With Nvidia we have a very, very close relationship for many reasons. One, of course, is you use a lot of GPUs in order to train some of our models. Second, [for] tools that we’re providing today, Siemens is the leader in industrial software. So we [have] about 10 billion euros of digital sales. People forget about that. We’re among the top 20 software companies in the world, so we have a lot of simulation software, where you can simulate cars, trains, rockets in the digital world. </p>
<p>Of course, all these simulations take an awfully long time when you think about computational fluid dynamics, which is very complex. But [it] turns out you really can accelerate them. So what we’re doing together with Nvidia is to say, “What if, instead of waiting eight hours for a complex computational fluid dynamics simulation of, let’s say, the air drag on a car, we could reduce that to minutes?” And that’s exactly what we’re looking at. </p>
<p>So it’s accelerating simulation, accelerating design. When it comes to chip design, which is really interesting as we get to lower nanometers — two nanometers and less — the complexity of verifying those chip designs is enormous. It really rises exponentially. So instead of having human engineers going through every circuit and really testing it to every gate array, actually you can start to have an AI go through this and do this over and over and over again. So the chip design verification is one. </p>
<p>Then, lastly, the design transfer to manufacturing is a key issue because these really hold you up in how fast you can get these chips out there. There again, as you are the designer, we can have the AI in the background verify whether what you’ve designed is correct and whether it can be manufactured. </p>
<p>These are examples that we have announced also at [the Consumer Electronics Show] earlier this year with Nvidia. We are really excited about [them] because we think we can further <em>accelerate</em> — and this is always the keyword: acceleration of design, acceleration of manufacturing, acceleration of operations. That’s why we are so excited about it.</p>
<p><strong>Sam Ransbotham:</strong> I get the appeal of switching eight months to eight minutes. It doesn’t take much quantification; we can do that in Fahrenheit or in Celsius, either way that works. But the other thing it makes me think about is that you probably have a lot of processes designed around the idea that it was going to take eight months to do that. And when it takes eight minutes, it feels like, sure, it compresses it, but it also might change the types of things you do, the order that you do them in. It seems like it could just have this ripple of upheaval. How do you manage that? Or maybe am I extrapolating too much? It feels like it could be a mess. </p>
<p><strong>Peter Koerte:</strong> That is very true. That’s why I tend to say, always, AI is about 20% technology and 80% is actually transformation. What that means is, we talked a lot about data, that’s one thing, but then it is really changing the processes of how you do things. And, usually, what the AI is now doing is it really changes workflows. So instead of thinking sequentially, where I do one task, let’s say I do the design. The next one is doing the verification. Then the next one is looking at how I design for transfer, and transferring it to manufacturing. It’s very sequential.</p>
<p>Now what if you could do this all in one step because the AI is doing it? Obviously you’re disrupting a very well-established workflow process. The first question that comes up is, who is doing this? Is that the designer from the very end [or] from the beginning? Is it somebody else entirely? Who’s the persona that you’re actually talking to? Some very interesting questions. </p>
<p>Second, how is that process then going to go? And who is verifying that whatever the AI is doing is really correct? Then a third question is, where do I do this? Where is the AI sitting? Is that a new application? Is that embedded into an existing application? Is it talking to all applications? All of these interesting questions arise, and they are not usually all technical. Very often, we find this is very much about the people [who] use it every day: involving them, and then starting to think — rethink — what wasn’t possible before, and thereby also addressing some anxieties, because many would then argue, “The AI is going to take my job away.” So then you have a lot of resistance. Then all of a sudden a technology conversation becomes a cultural-change transformation conversation. We find this time and again. </p>
<p><strong>Sam Ransbotham:</strong> Now, the natural follow-up is for me to ask about workflow and these types of issues. They’re all important, and I don’t want to discount those or whatever, but you’re pretty fired up about smart glasses and workers wearing smart glasses. What’s next for them? How do you see them in the industrial world? </p>
<p><strong>Peter Koerte:</strong> I’m very excited about smart glasses. If you think about, in particular, U.S. manufacturing: I just spoke to a major new electric vehicle manufacturer, and they told me in their manufacturing, their churn rate — so the attrition of their blue-collar workers — is 35%. What that means is you constantly have to retrain your employees. And it’s not just retraining them, but also the other question is, “How do you capture that knowledge?” What if you can take your glasses, you have that camera, and, let’s say, you are a specialist in operations and you are a maintenance engineer for a specific machine. </p>
<p>That camera and that AI [are looking over] your shoulders, literally, and really checking off what you’re doing. Maybe you’re even narrating it. You record this. You do this over and over again, and thereby you’re democratizing that knowledge, actually. You can capture this for future people coming in. But even better: Think of the new worker on the night shift. At 2 a.m. a machine breaks down, and usually people are just tinkering around, having no idea. But what if you had those glasses on now, and those glasses are saying, “This is a CNC machine. Usually the failure code of E345 means actually it is a Jam 2. Check that lid and open this one, two, three, four, five,” and off you go. How amazing is that? </p>
<p>I really think in terms of the keyword <em>augmentation</em>. So augmenting the workers, the blue-collar workers, but also white-collar workers on the shop floor and, of course, capturing that knowledge as they are exiting. Isn’t that amazing? I think it’s going to make us all much more productive and make the work much more enjoyable, because you get faster time to results, and thereby you get the factory running, and so on and so on. And you reduce a lot of anxiety and fear, because very often people don’t know what to do. Now all of a sudden they have a companion. They have a copilot, colleague, whatever you want to call it, that helps them, and that is there for them 24-7, as opposed to calling somebody who’s probably at home somewhere, sleeping. </p>
<p><strong>Sam Ransbotham:</strong> That makes a lot of sense. I want to draw a little contrast though. Earlier we were talking about data, and you were talking about a need for deep expertise and deep domain knowledge. But it sounds like this is maybe a push against, or you’re not needing to know that the E345 error code means this, that, or the other. Is it deeper? Is it more specialized? Those seem in conflict to me in some ways.</p>
<p><strong>Peter Koerte:</strong> Obviously, we need both. But, actually, the example is pretty comparable if you think of it. So yes, I can tell you the door is going to break down, and this is now preventative maintenance. The other case was more as a reaction. But in both cases it’s maintenance. So the preventive maintenance means that still a worker has to go out there and replace the motor. Now, on the other hand, in our case here, it’s the same thing. It just gives you the intelligence of what to do. And the doing itself still has to be done by somebody who’s operating that machine. So I think it’s pretty comparable. </p>
<p>The interesting thing about this is because it still requires humans, could we at some point automate that through the whole conversation about robotics and humanoids and everything? This is certainly then also a big push right now that we’re seeing in the market. Whether this is going to come soon or not, we don’t know, but for sure we’re missing at least 2 million people in the workforce in the United States already today … on the shop floor. So the only way to stay productive is by automation. This is where Siemens helps many companies to automate their processes in the factories. </p>
<p><strong>Sam Ransbotham:</strong> Maybe I’m reading too much into it, but I read something you’d written about humanoid robots and some skepticism about the actual humanoid shape, and you were kind of hinting at that right there. For one, I’m totally with you. The human shape is not anything magical, and there are a lot better shapes for industrial machinery in particular. Are things going to look like humans, or are they going to look like machines, or different? </p>
<p><strong>Peter Koerte:</strong> Well, that’s the big debate. To be honest, it’s too early to tell. I’ve seen both. As a matter of fact, today I just had two conversations of that sort. One of them [was] going in the direction of we need to have humanoids, the other one [was] saying “No, no, no.” I think in the end it comes down to the ROI and the value, again, that we’re creating. </p>
<p>Let’s take a very simple example. Let’s say material handling is a big one in a factory. You have to always make sure that there’s an ample supply of material. Let’s say, in particular, if you’re in a stamping plant, it’s metal sheets, and so it’s heavy. Taking a humanoid is probably not a good idea, although there [are] use cases; I’ve seen them. And there [are] many reasons. One, the payload is very, very, very limited. Number two, humanoids are quite slow if you look at them, at least today. The question is, can you accelerate them? But today they are slow. And then lastly, up to 30% of the energy consumed in a humanoid is just to make sure that it’s standing upright. What if you actually had different form factors that would give you higher payload, faster speed, and less energy consumed? Then it becomes an ROI conversation. It depends. It’s very hard to generalize. </p>
<p>In this case, though, I almost would bet a different form factor than a humanoid is the better one. But there [are] others where you could argue a humanoid could do a better job, for example, wiring harnesses, clipping them together, where you need dexterity and versatility and all of it. Maybe, but that’s exactly why it’s a fascinating field. I think anybody who claims [to] know it is being premature, but it’s a fascinating field. </p>
<p><strong>Sam Ransbotham:</strong> Actually, I like that because I think so many things are increasingly “it depends,” because we don’t have these one-size-fits-all models that are going to work. And you know that defeats our ability to make some sort of prognostications here. </p>
<p>Thanks for taking the time to talk with us and share your insights about industrial AI, which is probably a different idea for some people, and also data sharing and the future of work. And listeners, thanks for joining us on <cite>Me, Myself, and AI</cite>. </p>
<p><strong>Peter Koerte:</strong> Thank you, Sam. It was great. </p>
<p><strong>Sam Ransbotham:</strong> Thanks again for listening today. Next time, Vineet Khosla, CTO at <cite>The Washington Post</cite>, joins us for a conversation about AI innovation in publishing. Please join us then.</p>
<p><strong>Allison Ryder:</strong> Thanks for listening to <cite>Me, Myself, and AI</cite>. Our show is able to continue, in large part, due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/audio/industrial-ai-for-the-physical-world-siemenss-peter-koerte/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Beyond the Model — Why Responsible AI Must Address Workforce Impact</title>
				<link>https://sloanreview.mit.edu/article/beyond-the-model-why-responsible-ai-must-address-workforce-impact/</link>
				<comments>https://sloanreview.mit.edu/article/beyond-the-model-why-responsible-ai-must-address-workforce-impact/#comments</comments>
				<pubDate>Tue, 21 Apr 2026 11:00:29 +0000</pubDate>
				<dc:creator><![CDATA[Elizabeth M. Renieris, David Kiron, Steven Mills, and Anne Kleppe. ]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Employee Safety]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Human-Machine Collaboration]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[IT Governance & Leadership]]></category>
		<category><![CDATA[Managing Technology]]></category>
		<category><![CDATA[Technology Implementation]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>
		<category><![CDATA[Responsible AI]]></category>

				<description><![CDATA[For the fifth year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us understand how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. In prior years, we examined organizational RAI maturity; third-party, generative, and [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/BCG-RAI_2026_ExpertPanel01-1290x860-1.jpg" alt="" /><br />
</figure>
<p>For the fifth year in a row, <cite>MIT Sloan Management Review</cite> and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us understand how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. In prior years, we examined organizational <a href="https://sloanreview.mit.edu/article/mature-rai-programs-can-help-minimize-ai-system-failures/">RAI maturity</a>; <a href="https://sloanreview.mit.edu/article/responsible-ai-at-risk-understanding-and-overcoming-the-risks-of-third-party-ai/">third-party, generative, and agentic AI risks</a>; and <a href="https://sloanreview.mit.edu/article/a-fragmented-landscape-is-no-excuse-for-global-companies-serious-about-responsible-ai/">core AI governance pillars</a>, including accountability, explainability, and oversight. Since our project began, AI use has rapidly spread among organizations of every size, sector, and geography. At the same time, early fears have begun to materialize related to its impact on the workforce, with several companies announcing <a href="https://www.wsj.com/tech/ai/the-week-the-dreaded-ai-jobs-wipeout-got-real-3ba5057b" target="_blank" rel="noopener">substantial layoffs</a> while citing AI-enabled efficiency gains.                  </p>
<p>Given the growing concerns over how much human workers will be affected by AI, we asked our panel to react to the following provocation: <em>Responsible AI practice should address workforce impact, not just AI system risk</em>. Nearly 80% of our panelists agree or strongly agree with the statement. Our panel previously highlighted that sound AI governance asks not only <em>how</em> a technology is designed or deployed but <em>whether</em> it should be used at all. This year’s panel extended that logic, stressing that responsible AI must look beyond safe systems to the real-world consequences for workers and economic stability. Below, we share our panelists’ insights and offer our practical recommendations for organizations seeking to address workforce impact as part of their responsible AI governance.</p>
<div class="callout-highlight callout-highlight--transparent">
<aside class="l-content-wrap">
<article>
<h4>Responsible AI programs should include addressing the technology’s displacement of human workers.</h4>
<p class="caption mb30">Eighty percent of panelists agree or strongly agree that responsible AI should include considering the technology's impact on human workers.</p>
<p><img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/RAI2026-Human-Article1.png" alt="Bar Chart: Strongly disagree: 3%; Disagree: 7%; Neither agree nor disagree: 10%; Agree: 20%; Strongly agree: 60%"/></p>
<p class="attribution">Source: Panel of 31 experts on artificial intelligence strategy.</p>
</article>
</aside>
</div>
<p><strong>Responsible AI must be sociotechnical, not just technical.</strong> Our experts believe that AI will change the future of work. Katia Walsh, AI lead at Apollo Global Management, argues that “we are on the precipice of a societal revolution that will profoundly alter ways of working,” and MIT professor Sanjay Sarma agrees that “implications on jobs will be significant.” In fact, Mike Linksvayer, vice president of developer policy at GitHub, points out that “as AI is rapidly incorporated into day-to-day work, it is already reshaping how judgment is exercised, how quickly people learn, and what individuals can reasonably attempt,” and he used software development as a clear example. Because AI reorganizes workflows, fragments tasks, and redistributes power between workers and organizations, our experts argue that RAI cannot be defined in solely technical terms.</p>
<p>As senior AI executive David Hardoon explains, “Far too often, AI is mistaken for a mere technology when in reality it is a much broader ecosystem involving people, processes, governance, and society at large.” Simon Chesterman, National University of Singapore’s vice provost, says that “if responsible AI only means making the model safe, accurate, and compliant, we’ve defined the problem too narrowly,” adding, “If we don’t address the human consequences, responsible AI becomes a technical checklist with a moral halo.” Ranier Hoffmann, chief data officer of EnBW, puts it another way: “Responsible AI is ultimately about governing sociotechnical systems, not just compliant algorithms.” For Jai Ganesh, Ph.D., vice president of technology, connected services, engineering, at Wipro Ltd, “responsible AI is about ensuring innovation benefits society as a whole, including the people whose work it transforms.” In other words, responsible AI is not just about what a system does but about what it does to people; overlooking this distinction carries real socioeconomic risks.</p>
<p><strong>The current RAI discourse has not kept pace.</strong> Renato Leite Monteiro, vice president of privacy, data protection, AI, and intellectual property at e&, regrets that the “conversation has been dominated by system-level concerns like bias, explainability, and safety.” While these considerations are important, he says, they are “incomplete” because AI “reshapes how people work, what skills matter, who gets opportunities, and who gets left behind.” Bruno Bioni, founder and director of Data Privacy Brasil, agrees, cautioning that by focusing on narrow technical and model-centric risks like bias mitigation, privacy, robustness, or model safety, “governance frameworks risk collapsing into a narrowly technocratic approach.” Naomi Lariviere, ADP’s chief product owner, expands on that, saying, “If we only focus on guardrails, we miss how AI reshapes accountability, advantage, and day-to-day experience.”  </p>
<p></p>
<p><strong>Workforce impact is a core AI risk to social and economic stability.</strong> Although proponents of rapid AI adoption frequently cite efficiency and productivity as core motivations, our experts warn that a failure to address workforce impact could undermine these goals and exacerbate economic issues. OdiseIA president Idoia Salazar illustrates the scope of the problem, noting that “AI can reshape tasks and roles, intensify monitoring and productivity pressure, shift decision-making power away from workers, and produce uneven impacts across different groups.” As Yan Chow of Automation Anywhere puts it, “If AI maximizes efficiency but decimates consumer purchasing power or sparks unrest, it fails as a sustainable business tool.” Hoffmann goes further, arguing that “workforce impact is not a ‘soft’ concern but rather a core system design parameter” and cautioning that organizations that “deploy AI where it adds little value but creates organizational strain ... risk weaker oversight and poorer outcomes.”   </p>
<p></p>
<p>The business case for taking workforce impacts seriously may already be playing out in practice. Alyssa Lefaivre Škopac, director of trust and safety at Alberta Machine Intelligence Institute, raises the issue of companies declaring themselves “AI first” as they cut workers, only to “rehire when the capabilities don’t match the hype.” She says this “fundamental misunderstanding of AI capabilities and human talent” comes with “real economic and human cost.” She adds, “Thoughtfully navigating workforce impact may be foundational to whether AI actually delivers the positive impact we’re all hoping for.” Pierre-Yves Calloc’h agrees that “workforce integration thinking is a critical factor in the long-term success of any AI initiative,” while Stanford CodeX fellow Riyanka Roy Choudhury cautions that “ignoring the impact on jobs may eventually contribute to broader economic instability.” </p>
<p>In response to that concern, many experts emphasize that reskilling and upskilling workers is crucial to mitigating AI’s potentially negative workforce effects. Ganesh recommends implementing a two-pronged strategy that focuses on bias, safety, privacy, and security issues along with the workforce impact by “upskilling, educating employees to work confidently alongside intelligent systems, and being transparent about how AI is used in decision-making.” University of Helsinki professor Teemu Roos similarly emphasizes that “the primary concern is ensuring sufficient support for upskilling and reskilling among the workforce to address rapid change and increasing complexity.” Not all experts are optimistic about this approach, however. Chow observes that “technological progress is exponential, while human reskilling remains linear,” warning that “unless responsible AI explicitly mandates accelerating workforce readiness to match this velocity, the skills gap will become an unbridgeable chasm, rendering upskilling a hollow promise.”</p>
<p><strong>Responsibility for workforce impact should be distributed.</strong> Given the substantial challenges that AI poses to the future of work, Kirtan Padh, scientific collaborator at AI Transparency Institute, asks, “Who is responsible for any negative impacts on the workforce?” Is it businesses, governments, or both? IMD Business School professor Öykü Işik believes that addressing AI’s workforce impact “is a matter of formal corporate governance” that “undoubtedly rests with the board and executive leadership.” GovLab cofounder and chief research and development officer Stefaan Verhulst agrees that “companies must improve corporate policies that protect and nurture their employees.” Yet Nasdaq’s head of AI research and engineering Douglas Hamilton calls for a division of responsibilities, arguing that AI-related job displacement should be the primary concern of “governments, universities, and nonprofits,” whereas “responsible companies need to fully capture its value in unequivocal ways.” </p>
<p>Several experts argue that companies cannot be expected to bear this burden alone, while pointing to the role of policy and lawmakers. Wharton School professor Kartik Hosanagar argues that “policy makers hold the primary responsibility” for the workforce impacts of AI. At the policy level, Ganesh calls for “preparing the labor market for collaborating with AI by identifying future skills, adapting curricula, and supporting transitions,” while Sarma argues that this preparation requires “everything from completely rethinking our educational paradigms to reskilling, unemployment support, and fundamental questions about the future of the economy.” Hardoon says, “A truly responsible approach demands holistic governance, AI literacy training, and policies that protect workers and preserve human agency.”    </p>
<p></p>
<p>Several experts also caution that the stakes of inaction are potentially high. ForHumanity founder Ryan Carrier warns that failure to address workforce impact “will result in increased economic inequality as the wealth created by AI would be increasingly concentrated.” He believes that “a legislative policy response and consumer choice have a role to play in signaling whether we want corporations to continue to employ humans, and to what degree.” Bioni adds that “labor unions and worker associations can play a critical role through collective bargaining agreements [including] provisions on prior consultations before AI deployment, access to information about automated decision-making systems, and limits on algorithmic surveillance.”</p>
<h3>Recommendations</h3>
<p>In summary, we offer the following recommendations for organizations seeking to address workforce impact as part of their responsible AI efforts:</p>
<p><strong>1. Increase the scope of RAI practices beyond models.</strong> Expand the definition of responsible AI to encompass not just model performance but the full ecosystem of people, processes, and institutions that shape how AI is built, deployed, and experienced. Workforce impact is a core organizational design parameter that should be proactively embedded in AI governance frameworks from the outset. Governance frameworks that focus exclusively on technical performance miss the deeper question of what AI does to workers, organizations, and economic life. Workforce impact must be evaluated at the board level alongside business outcomes.</p>
<p><strong>2. Include workforce impact as part of your AI strategy.</strong> Organizations are racing to create strategies for deploying AI tools and upskilling staff on their use. Plans for AI that change the nature of work should be accompanied by plans for human reskilling, redeployment, and transition strategies. However, as Chow suggests, reskilling can’t or won’t keep pace with technological advances, so companies need to look at other options to address workforce impact. Include workforce metrics, such as displacement rates and reskilling completion, alongside technical performance and value measures when tracking implementation. Companies should ensure their strategy accounts for the hidden costs of large-scale workforce impact, including reputational damage, reduced consumer trust, and growing regulatory risk. These potential downsides may ultimately outweigh the short-term efficiency gains.</p>
<p><strong>3. Evaluate worker impact alongside other product-level risks.</strong> Product evaluations must move beyond technical performance to include workforce effects, including overreliance, skills atrophy, disempowerment, “AI brain fry,” and work intensification. These factors should be part of risk identification and mitigation development. Transparency about how AI is used in decision-making, what tasks it will reshape or eliminate, and mitigation plans (e.g., transition support) should be built into deployment plans and considered as part of the business case for the AI use. Workforce impacts must be explicitly considered as part of go/no-go decisions before pursuing specific AI tools.</p>
<p><strong>4. Make employees part of the conversations about workforce impact.</strong> Organizations have an obligation to communicate openly with workers who may be affected by AI — not as a courtesy but as a core governance responsibility. Workforce impact statements should be part of organizational AI strategies, alongside business value statements. Otherwise, responsible AI remains a conversation that happens above workers rather than with them. And in some jurisdictions, this engagement may not be optional. Workers’ councils are increasingly important to shaping AI strategy, especially in cases where worker displacement may occur.</p>
<p><strong>5. Assign clear leadership accountability for workforce impact.</strong> Addressing workforce impact cannot be treated as a shared responsibility that belongs to everyone — and therefore no one. While it requires coordinated effort across human resources, operations, legal, technical, and business leadership, cross-functional collaboration without named ownership is how consequential issues fall through the cracks.</p>
<p>Organizations must designate a specific leader, with real authority and board-level visibility, who is accountable for developing and executing a workforce impact strategy. To address externalities, they’ll need to proactively engage with policy makers, industry bodies, and labor organizations. This leader should be prepared to make the case, to shareholders and executives alike, that the hidden costs of large-scale displacement — the erosion of in-house expertise needed to verify AI outputs, reputational damage, eroded consumer trust, and mounting regulatory exposure — will outweigh the short-term efficiency gains that drove the cuts in the first place. If no single leader owns workforce impact, it will remain a talking point in governance documents rather than a genuine organizational commitment.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/beyond-the-model-why-responsible-ai-must-address-workforce-impact/feed/</wfw:commentRss>
				<slash:comments>1</slash:comments>
							</item>
					<item>
				<title>How AI Helps the Best and Hurts the Rest</title>
				<link>https://sloanreview.mit.edu/article/how-ai-helps-the-best-and-hurts-the-rest/</link>
				<comments>https://sloanreview.mit.edu/article/how-ai-helps-the-best-and-hurts-the-rest/#comments</comments>
				<pubDate>Mon, 20 Apr 2026 11:00:24 +0000</pubDate>
				<dc:creator><![CDATA[Nicholas Otis, Rowan Clarke, Solène Delecourt, David Holtz, and Rembrand Koning. <p>Nicholas Otis is a Ph.D. candidate at the University of California, Berkeley’s Haas School of Business. Rowan Clarke is a Ph.D. candidate at Harvard Business School. Solène Delecourt is an assistant professor in the Management of Organizations group at the Haas School of Business. David Holtz is an assistant professor in the Decisions, Risk, and Operations division at Columbia Business School, affiliated faculty at the Columbia University Data Science Institute, and a research affiliate at the MIT Initiative on the Digital Economy. Rembrand Koning is the Mary V. and Mark A. Stevens Associate Professor at Harvard Business School and codirector of the Tech for All Lab at the Digital Data Design (D³) Institute at Harvard.</p>
]]></dc:creator>

						<category><![CDATA[AI Strategy]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Business Development]]></category>
		<category><![CDATA[Decision-Making]]></category>
		<category><![CDATA[Entrepreneurship]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leading Change]]></category>
		<category><![CDATA[Managing Technology]]></category>
		<category><![CDATA[Technology Implementation]]></category>
		<category><![CDATA[Frontiers]]></category>

				<description><![CDATA[Mark Shaver/theispot.com Can generative AI serve as an effective adviser for business owners and entrepreneurs? Intuitive chat-based natural language interfaces mean that anyone who can read and write can use GenAI tools for a wide range of tasks, even if they lack technical skills. This has obvious appeal for entrepreneurs and small business owners, many [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/2026SUMMER_Delecourt-1290x860-1.jpg" alt="" class="wp-image-126678"/><figcaption>
<p class="attribution">Mark Shaver/theispot.com</p>
</figcaption></figure>
<p></p>
<p></p>
<p><span class="smr-leadin">Can generative AI serve</span> as an effective adviser for business owners and entrepreneurs? Intuitive chat-based natural language interfaces mean that anyone who can read and write can use GenAI tools for a wide range of tasks, even if they lack technical skills. This has obvious appeal for entrepreneurs and small business owners, many of whom could benefit from an on-demand adviser able to help with marketing, pricing, operations, and strategy.</p>
<p>Improving the performance of entrepreneurs at scale has proved to be <a href="https://doi.org/10.1093/oxrep/grab002" target="_blank">challenging</a>. The most effective interventions tend to be high touch, such as <a href="https://doi.org/10.1093/qje/qjs044" target="_blank">hands-on consulting</a>, <a href="https://doi.org/10.1257/app.20170042" target="_blank">individualized mentorship</a>, and <a href="https://doi.org/10.1002/smj.2987" target="_blank">in-person networking</a>. However, they are expensive to deliver and difficult to scale. In emerging markets specifically, this constraint is often even tighter: High-quality business support can be scarce, and its cost can be prohibitive relative to organizational resources. A low-cost and always-available AI mentor could potentially deliver, at scale, the type of business guidance that has historically been limited by the availability and cost of human experts.</p>
<p>To test whether accessing generative AI can actually help small businesses, we ran a field experiment with hundreds of small business owners in Kenya. We randomly gave half of them access to a WhatsApp contact that connected them to a version of OpenAI’s GPT-4 that we had prompted to act as a Kenyan business adviser, and then we tracked business performance over time. The key factor driving either an increase or decrease in profits and revenues? Whether an entrepreneur had the judgment to distinguish good AI advice from bad.</p>
<p></p>
<h3>Testing AI Advice in the Real World</h3>
<p>Many previous studies of generative AI have focused on narrow, well-defined tasks, such as <a href="https://doi.org/10.1126/science.adh2586" target="_blank">drafting emails</a>, <a href="http://dx.doi.org/10.2139/ssrn.4573321" target="_blank">developing business strategy</a>, or <a href="https://doi.org/10.1287/mnsc.2023.03014" target="_blank">generating marketing ads</a>. For such tasks, the tool’s output can often be used with little modification, allowing even less-skilled users to benefit from AI assistance. Consistent with this idea, <a href="https://doi.org/10.1126/science.adh2586" target="_blank">studies have found</a> that the workers who were struggling the most before using AI benefited the most from using such tools.</p>
<p>Managing a business is not a narrow or well-defined task, though. Entrepreneurs often face vague and ambiguous problems. They do not just need help with writing an email; they need help deciding what problem to tackle, what strategy to pursue, and which advice applies to their specific context and then choosing what to implement under real constraints. On its own, AI does not typically handle those kinds of problems well. When Anthropic gave its Claude Sonnet 3.7 large language model total control of a small vending business in its San Francisco office, the LLM sold items at a loss, gave away free products, and quickly <a href="https://www.anthropic.com/research/project-vend-1" target="_blank">ran the shop into the red</a>. But what happens when, instead of leaving AI to run a business on its own, it advises a human entrepreneur who can then decide when to implement or ignore its ideas?</p>
<p>To test how AI impacts a broad task like running a business, we designed a study to evaluate it in the messy reality that entrepreneurs face. We recruited 640 small business owners in Kenya from a range of sectors — including food and beverage, agriculture, and car-wash services — and ran a randomized controlled trial from May to November 2023. Since most of the country’s population communicates via mobile phone, half of the participants were given access to a GPT-4-powered AI business adviser delivered via WhatsApp, the dominant messaging platform in Kenya. Eighty percent had never used ChatGPT or any other generative AI tool. Both groups received brief onboarding training, but the control group received an online business training guide instead of AI access.</p>
<p></p>
<p>Business owners in the experimental group could ask any business-related question of their choosing and use the assistant as much or as little as they wanted. We tracked sales and profits over time, comparing entrepreneurs who got the AI assistant against the control group, who did not. On average, the difference between the control group’s and the experimental group’s business performance was close to zero and not statistically significant. But the average for the experimental group masked a striking split: Having access to generative AI boosted revenues and profits by 15% among business owners who had already been doing well (that is, they were in the top 50% of performance before the experiment), but among those in the bottom 50%, AI use led to a nearly 10% decline in revenues and profits.</p>
<p></p>
<h3>Same Advice, Different Choices</h3>
<p>Why would a tool capable of producing high-quality business suggestions harm the entrepreneurs it was supposed to help? We found that both high- and low-performing entrepreneurs asked a similar number of questions, asked similar types of questions, and even received similar advice from the AI tool. The difference was in what they chose to act on.</p>
<p>In our data, we saw that every entrepreneur, regardless of baseline performance, received generic suggestions like “lower your prices” or “invest in advertising” alongside more tailored, context-specific ideas. Low performers disproportionately acted on the generic advice, cutting prices and increasing spending on advertising. These one-size-fits-all moves often eroded margins and raised costs without generating enough new business to offset the costs.</p>
<p>High performers, in contrast, used GenAI to discover and implement changes specific to their situation: A cybercafe owner started renting out gaming accessories to customers; a car-wash owner introduced a new in-demand detergent and started selling cold sodas to waiting customers; and another entrepreneur found alternative power sources to withstand electricity blackouts. Both groups had access to the same quality of AI advice. The difference was whether the entrepreneurs had the judgment to sift through AI-generated suggestions, pick the ideas that fit their business, and ignore the rest.</p>
<p>Our takeaway from the study is that in contexts where problems are broad and fuzzy, generative AI amplifies the role of human judgment. The value created by an open-ended AI adviser is critically dependent on the human judgment that guides its use and application. In open-ended contexts, a positive effect of AI on performance relies on <a href="https://mitsloan.mit.edu/ideas-made-to-matter/study-generative-ai-results-depend-user-prompts-much-models" target="_blank" rel="noopener noreferrer">asking good questions</a>, interpreting suggestions, and choosing which actions to implement. For users with strong judgment, the tool helps surface new ideas and think through trade-offs. Users with weak judgment can end up following plausible-sounding but misleading advice that leads to worse outcomes.</p>
<p>For managers and policy makers, recognizing this nuance is essential. Without it, well-intentioned AI deployments risk widening performance gaps, because the people who often need the most help are also the least equipped to filter and apply advice.</p>
<h3>How Leaders Should Implement AI Advice for Open-Ended Problems</h3>
<p>Our experience prototyping and launching a WhatsApp-based AI adviser shows how quickly and cheaply generative AI tools can be rolled out and made widely accessible. But a fast implementation of a GenAI tool may also raise the risk that organizations roll out open-ended AI tools without strong guardrails or evaluation. As the cost of deployment falls, AI is being applied to an <a href="https://aleximas.substack.com/p/what-is-the-impact-of-ai-on-productivity" target="_blank">ever-wider range of open-ended tasks</a>. For example, engineers at Google now use AI coding tools in their day-to-day work, and there is evidence that the most experienced developers <a href="https://doi.org/10.48550/arXiv.2410.12944" target="_blank">benefit the most</a> from these tools. In book publishing, <a href="https://www.nber.org/papers/w34777" target="_blank">established authors</a> have been able to increase their output with AI while AI-assisted entrants have flooded the market with lackluster prose. For leaders managing AI within their organizations, these findings reinforce the importance of careful design and rigorous measurement to ensure that AI does not inadvertently lead to worse performance.</p>
<p>What can leaders do? First, cultivate awareness. Leaders should not assume that AI will boost performance for everyone. Evaluations that focus only on average effects can be misleading, because the mean can conceal meaningful harms for specific groups.</p>
<p>Next, leaders can design for heterogeneity. For workers with experience and judgment, open-ended AI tools can have real returns. Junior or weaker performers might need tighter guardrails to avoid following harmful suggestions. One promising direction is feeding the AI tool more context about the user’s specific situation — their business data, financials, or competitive environment — so that it can better filter out generic advice that doesn’t fit their situation. Building that kind of contextual awareness into AI tools remains an open challenge that GenAI vendors are actively exploring.</p>
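<p>To make that direction concrete, here is a minimal sketch, assuming the OpenAI Python client, of how an adviser prompt might carry a business’s own context so that generic suggestions are easier to filter out. The business profile, variable names, and system instructions below are illustrative assumptions, not the configuration used in our study.</p>
<pre><code># Illustrative only: a context-rich adviser prompt, not the study's actual setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical business profile collected during onboarding.
business_context = (
    "Business: car wash in Nairobi; 2 employees; about 30 customers/day; "
    "profit margin roughly 20%; main constraint: frequent electricity blackouts."
)

question = "Customers leave while waiting. What could I try this month?"

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are a business adviser for small Kenyan businesses. "
                       "Ground every suggestion in the context below and avoid "
                       "generic advice such as blanket price cuts.\n" + business_context,
        },
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
</code></pre>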
<p></p>
<p>In the meantime, it is more likely that most people will find generative AI useful for specific, narrow tasks — such as summarizing documents, writing more clearly, or reviewing code for efficiency — rather than tasks that require a great deal of contextual knowledge to determine the applicability of its output and the skill to implement it well.</p>
<p>Organizations should also invest in human judgment and scaffolding around AI use. For high-stakes decisions, escalation to human support is a critical safeguard, especially when advice is open-ended, context-dependent, or difficult to evaluate in advance. Organizations can build supports that make these tools safer, such as structured onboarding that elicits context, decision checklists, or warnings about margin-destroying tactics.</p>
<p></p>
<p>The third step is to audit for uneven effects by asking questions in three areas:</p>
<ul>
<li><strong>Adoption:</strong> Are some groups avoiding the tool entirely or using it far less than others?</li>
<li><strong>The interactions themselves:</strong> Are different users asking different kinds of questions, providing different amounts of context, or receiving meaningfully different outputs?</li>
<li><strong>What happens next:</strong> Is the tool changing real-world decisions, and are those decisions producing better results for some users than others?</li>
</ul>
<p>Asking those questions can help leaders pinpoint where inequality may emerge, which allows for intervention through targeted training, workflow redesign, or tighter controls.</p>
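<p>As a sketch of what such an audit could look like in practice, the following few lines compare adoption and outcomes across baseline-performance groups. It assumes usage logs and outcome data have already been joined into a single table; the file and column names are hypothetical.</p>
<pre><code># Illustrative audit for uneven effects; file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("ai_rollout_outcomes.csv")

# Split users by how they performed before the tool was introduced.
df["baseline_group"] = pd.qcut(
    df["baseline_profit"], q=2, labels=["bottom half", "top half"]
)

# 1. Adoption: are some groups barely using the tool?
print(df.groupby("baseline_group")["queries_sent"].mean())

# 2. Outcomes: compare the change in profit for users vs. non-users in each group.
print(
    df.groupby(["baseline_group", "has_ai_access"])["profit_change_pct"]
    .mean()
    .unstack()
)
</code></pre>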
<p>AI shows real potential to increase business performance at scale, but the benefits are not guaranteed. Our research results suggest that GenAI can inadvertently increase inequality in business performance by helping stronger performers more than others and, potentially, actively harming lower performers. When deploying AI tools at scale, a central design challenge is not merely to make AI available but to make its use effective so that scaling AI does not scale inequality.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/how-ai-helps-the-best-and-hurts-the-rest/feed/</wfw:commentRss>
				<slash:comments>1</slash:comments>
							</item>
					<item>
				<title>Lessons From Innovation Pioneer Florence Nightingale</title>
				<link>https://sloanreview.mit.edu/article/lessons-from-innovation-pioneer-florence-nightingale/</link>
				<comments>https://sloanreview.mit.edu/article/lessons-from-innovation-pioneer-florence-nightingale/#respond</comments>
				<pubDate>Thu, 16 Apr 2026 11:00:42 +0000</pubDate>
				<dc:creator><![CDATA[Scott D. Anthony. <p><a href="https://www.linkedin.com/in/scottdanthony/" target="_blank">Scott D. Anthony</a> is a clinical professor at the Tuck School of Business at Dartmouth College and a senior adviser and managing partner emeritus at growth strategy consultancy Innosight. He is the author of <cite><a href="https://epicdisruptions.com/" target="_blank">Epic Disruptions</a></cite> (Harvard Business Review Press, 2025).</p>
]]></dc:creator>

						<category><![CDATA[Communication]]></category>
		<category><![CDATA[Data & Analytics]]></category>
		<category><![CDATA[Disruptive Innovation]]></category>
		<category><![CDATA[Health Care]]></category>
		<category><![CDATA[Data & Data Culture]]></category>
		<category><![CDATA[Disruption]]></category>
		<category><![CDATA[Innovation]]></category>
		<category><![CDATA[Leadership]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Wellcome Collection Florence Nightingale may be best remembered as the epitome of a kind, caring nurse, but she was also a force for disruptive innovation in health care. Three distinct elements of her work — communicating data compellingly, publicizing clear and simple instructions, and expanding professionalized training — carry timeless lessons [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Anthony-1290x860-1.jpg" alt="" class="wp-image-126611"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Wellcome Collection</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">Florence Nightingale may be best remembered</span> as the epitome of a kind, caring nurse, but she was also a force for disruptive innovation in health care. Three distinct elements of her work — communicating data compellingly, publicizing clear and simple instructions, and expanding professionalized training — carry timeless lessons for today’s leaders.</p>
<p>Born in 1820 in Florence, Italy, Nightingale announced in the 1840s that she intended to become a nurse. Her well-to-do parents protested; at the time, nursing was a lower-class profession. Nightingale persisted, ultimately receiving tutelage in nursing and related topics from Theodor Fliedner, a pastor, in what is now Germany.</p>
<p>In 1854, as the Crimean War raged, Nightingale and a brigade of 38 nurses arrived at the war hospital in Scutari (now Üsküdar) in Türkiye. During the conflict, the first since the advent of the telegraph, newspaper reporters provided updates in close to real time. In 1855, John MacDonald of the <cite>London Times</cite> reported on Nightingale, describing her as “a ‘ministering angel’ without any exaggeration in these hospitals. … When all the medical officers have retired for the night, and silence and darkness have settled down upon these miles of prostrate sick, she may be observed alone, with a little lamp in her hand, making her solitary rounds.”</p>
<p>Thus, Nightingale became “The Lady With the Lamp” — and, perhaps, the world’s first social media star. In 1854, 5,000 babies were named Florence. In 1855, after MacDonald’s article was published, 20,000 were.</p>
<p></p>
<h3>A Three-Front Strategy of Influence</h3>
<p>Nightingale’s impact far exceeded her influence on baby names, of course. She and her fellow nurses encountered dire, squalid conditions and infectious diseases that ran rampant in military hospitals. The prime minister of Britain sent a sanitary commission to clean up the hospital after Nightingale telegraphed him for support, and she would continue to champion cleanliness in medical settings after the war. When she returned to England in 1856, she met with Queen Victoria to help spur the creation of a royal commission for hygiene in military hospitals. </p>
<p>Thus commenced Nightingale’s three-front disruptive battle in nursing and sanitation, using the tactics of data-driven communication, clear and accessible instruction, and standardized professional training.</p>
<h4>Compelling Communication</h4>
<p>Nightingale’s experience convinced her of the importance of following proper hygiene and sanitation practices in hospitals. But how to make people viscerally feel that importance when germ theory hadn’t yet been widely accepted? The answer: through data, visuals, and stories. (“Whenever I am infuriated, I revenge myself with a new diagram,” Nightingale wrote.) </p>
<p>She collaborated with physician William Farr, one of the founders of the Statistical Society of London, crunching numbers to demonstrate the stark impact of poor sanitation policies. Critically, they created powerful ways to communicate their findings. </p>
<p></p>
<p>Their most compelling diagram was an 1858 <a href="https://www.nam.ac.uk/explore/florence-nightingale-lady-lamp" target="_blank" rel="noopener noreferrer">polar area chart</a> titled “Diagram of the Causes of Mortality in the Army in the East.” It clearly illustrated that in 1854, soldiers were more likely to die of an infectious disease in a hospital than on the battlefield. After the sanitary commission helped improve conditions, deaths by infectious diseases at the hospital dramatically declined. The chart made a stunning impact, with one reporter remarking, “Terrible do the death ‘wedges’ swell out.”</p>
<p>Nightingale also developed persuasive metaphors to illustrate the extent of the problems caused by poor sanitation in military hospitals. “It is as criminal to have a mortality of 17, 19 & 20 per 1000 in the Line Artillery & Guards in England … as it would be to take 11000 Men per annum out upon Salisbury plain & shoot them,” she wrote.</p>
<p></p>
<h4>Clear and Accessible Instruction</h4>
<p>In 1859, Nightingale released a groundbreaking book titled <cite>Notes on Nursing: What It Is, and What It Is Not</cite>. The first print run of 15,000 copies in England sold out within months. The book was quickly translated into multiple languages, and an American version was published in 1860.</p>
<p>In <cite>Notes on Nursing</cite>, Nightingale provided clear, practical guidance about how to care for patients. It wasn’t meant for someone seeking a career in nursing; rather, it targeted laypeople who might have to provide caretaking and similar services. Chapter titles like “Taking Food,” “Light,” “Personal Cleanliness,” and “Bed and Bedding” show the book’s practical bent, expressed clearly and plainly. </p>
<p>As usual, Nightingale stressed sanitation and prevention. “One duty of every nurse is prevention,” Nightingale wrote. “The surgical nurse must be ever on the watch, ever on her guard, against want of cleanliness, foul air, want of light, and of warmth.”</p>
<p>Her book enabled a broader population to learn to provide proper hygiene and ward off infectious diseases — classic disruptive innovation. In parallel, Nightingale turned her focus to increasing the number of skilled nurses.</p>
<h4>Standardized Professional Training</h4>
<p>In 1857, the Nightingale Fund was established to oversee the donations that had poured in to support Nightingale’s work, which had become widely known. She used a portion of the funds to help open the world’s first formal nursing school at St Thomas’s Hospital in London. </p>
<p>Prior to Nightingale’s efforts, training was disorganized and nursing was inconsistently practiced. Before her book was released, “there were no schools for nurses and therefore no trained nurses,” wrote Virginia Dunbar, former dean of the Cornell University School of Nursing.</p>
<p></p>
<p>The first students arrived in 1860. The curriculum blended formal knowledge of areas such as biology and physiology with practical skills. Would-be nurses worked side by side with experienced ones. Nightingale handpicked the staff and helped to shape the curriculum. The graduates of that program, known as “Nightingales,” spread their wings throughout the world.</p>
<p>A key driver of disruption is allowing a broader population to do what once required specialized expertise. Nightingale herself had to receive one-on-one teaching to learn the art of being a skilled nurse. Her school played a pivotal role in turning such lessons from art to science, enabling more people to effectively provide nursing services.</p>
<h3>Timely Lessons From a Timeless Story</h3>
<p>Compelling communications. Comprehensive instructions. Standardized training. Nightingale’s contributions drove societal improvements we take for granted today, like washing hands to help prevent the spread of infectious diseases, circulating the air in places where sick people are gathered, and removing and treating wastewater. </p>
<p>In 1875, Britain passed the Public Health Act, which called for well-built sewers, clean running water, and regulated building codes. Life expectancy, which had stagnated at about age 40 in the United Kingdom for centuries, increased by 38% over the next 50 years. </p>
<p></p>
<p></p>
<p>Nightingale’s story has three timely lessons for modern leaders.</p>
<p>First, one of the powers of disruptive innovation is doing things differently, not just better. By educating a broader population about hygiene and nursing practices — which had previously been poorly understood — Nightingale enabled more decentralized and accessible health care. </p>
<p>Second, sophisticated technology is not required for significant impact. Nightingale and Farr used early adding machines for their groundbreaking analysis, but what’s striking about the story of their compelling “death wedge” diagram is how little technology was involved. </p>
<p>Third, disruption doesn’t require superpowers or a larger-than-life leadership presence. Nightingale demonstrated timeless qualities and behaviors that fuel disruptive success, such as curiosity, collaboration, and persistence. </p>
<p>You likely have Nightingales inside your organization. Give them space and support, and watch them kindle their own lamps to spread light.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/lessons-from-innovation-pioneer-florence-nightingale/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>The Human Side of AI Adoption: Lessons From the Field</title>
				<link>https://sloanreview.mit.edu/article/the-human-side-of-ai-adoption-lessons-from-the-field/</link>
				<comments>https://sloanreview.mit.edu/article/the-human-side-of-ai-adoption-lessons-from-the-field/#respond</comments>
				<pubDate>Tue, 14 Apr 2026 11:00:06 +0000</pubDate>
				<dc:creator><![CDATA[Ganes Kesari. <p><a href="https://www.linkedin.com/in/gkesari/" target="_blank">Ganes Kesari</a> is founder and CEO at <a href="https://tensorplanet.com/" target="_blank">Tensor Planet</a>, a software product company focused on predictive maintenance for commercial vehicle fleets.</p>
]]></dc:creator>

						<category><![CDATA[AI Strategy]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Change Management]]></category>
		<category><![CDATA[Employee Psychology]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Disruption]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leading Change]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR Not a day goes by without another article being published about how AI could disrupt yet another aspect of our business or personal lives. In recent years, AI adoption has indeed taken off. However, if you pay close attention, you’ll notice a dichotomy. Many examples of successful early adoption of artificial intelligence [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Kesari-1290x860-1.jpg" alt="" class="wp-image-126585"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">Not a day goes by without</span> another article being published about how AI could disrupt yet another aspect of our business or personal lives. In recent years, AI adoption has indeed taken off. However, if you pay close attention, you’ll notice a dichotomy. </p>
<p>Many examples of successful early adoption of artificial intelligence tend to come from a small cluster of industries that are heavily digitized or are pro-technology. The usual suspects include banking, financial services, e-commerce retailers, and the like. However, some other industrial sectors, many of which are big contributors to our economy, don’t show the same level of progress or enthusiasm when it comes to AI adoption. </p>
<p>Take the example of specialty and essential services industries such as construction, mining, or waste management. These companies contribute significantly to the economy yet remain largely powered by legacy software from decades ago, with some processes still handled with pen and paper. While AI has made nascent inroads here, the levels of adoption leave much room for growth.</p>
<p>Leaders in these industries often assume that they have stable processes that have served them well for decades. Yes, things might break once in a while, leading to customer service problems, rework for the team, and internal process disruption. But then, they have always recovered. People in these industries may view AI as gimmicky, too much work, or not trustworthy.</p>
<p></p>
<p>Having spent more than 15 years helping dozens of industries embrace AI, I’ve been curious to study what distinguishes the two sets of leaders and the quite different levels of AI adoption they achieve. And, importantly, I’ve spent years in the trenches experimenting with techniques that help address adoption challenges.</p>
<p>Here, I’ll share what’s at the root of the leadership challenge and how leaders in industries that have been conservative about AI can orchestrate meaningful change. Let’s examine some grounded examples and no-nonsense tips for AI adoption.</p>
<h3>Why AI Adoption Lags in Some Industries</h3>
<p>My experience in the field points to three prevalent factors holding back some industries from moving forward with artificial intelligence.</p>
<h4>1. AI feels inaccessible and scary.</h4>
<p>When you can’t comprehend something, you start developing a fear of it. When everyone around you seems to talk about it and you feel left behind, the fear only grows. When the technology feels intrusive and uncomfortable, you draw back into your shell.</p>
<p>This is exactly what’s happening with AI when it comes to a majority of late adopters in both private and public sectors. The hype around AI and the seemingly irrational excitement of tech pundits only alienates people in cautious companies. To make matters worse, anytime there’s news about an uninformed AI investment backfiring or machine learning algorithms going rogue, it solidifies the narrative that AI is inaccessible and not ready for the masses yet.</p>
<p>Driver-facing AI-enabled cameras in freight vehicles are a case in point. For truck drivers, a camera inside the cab feels intrusive and disciplinary long before it’s perceived as a safety or performance-aiding tool. A <a href="https://truckingresearch.org/2023/04/new-atri-research-identifies-strategies-for-improving-driver-facing-camera-approval-and-utilization/" target="_blank" rel="noopener noreferrer">report by the American Transportation Research Institute</a> shows that truck drivers’ approval of driver-facing cameras tends to be low: just 2.24, on average, on a 0-to-10 scale among 650 current users from across the industry.</p>
<h4>2. AI looks like a lot of avoidable work.</h4>
<p>AI is often touted as a savior that automates drudgery. But people on the ground who are tasked with making the AI tools work and integrating them into workflows may perceive AI as creating <em>extra</em> work, not relieving them of it. </p>
<p>With front-line teams in labor-intensive industries often feeling overstretched and under-supported, the need for more training or changes to existing workflows just adds friction before adding any value. In many late-adopting industries, AI is immediately associated with capital-heavy hardware and forced operational change. </p>
<p></p>
<p>It doesn’t help that organizational memories are often clouded by many failed or painfully stretched technology rollouts — think enterprise resource planning systems, safety tools, telematics systems, and so on. People wonder whether this AI-tools wave is another fad that’s worth waiting out. When you take a deeper look, you realize that change fatigue, not an aversion to technology, is the real blocker.</p>
<h4>3. AI benefits don’t really seem worth the pain.</h4>
<p>Most technology evangelists and leaders commit the blunder of communicating AI value in the wrong currency. Improved accuracy or productivity boosts mean little to front-line operators, who care more about customer escalations, rework, or operating costs.</p>
<p>In a 2025 <a href="https://www.deloitte.com/se/sv/Industries/technology/perspectives/ai-roi-the-paradox-of-rising-investment-and-elusive-returns.html" target="_blank" rel="noopener noreferrer">executive survey by Deloitte</a>, although 65% of leaders said that AI is part of their corporate strategy, many also acknowledged that the ROI is neither immediate nor purely financial. From a front-line worker perspective, the cost of learning and adopting an intimidating technology like AI feels personal, but the benefits feel abstract and impersonal. </p>
<p>When it’s difficult to articulate tangible business outcomes from AI for the next quarter, such initiatives struggle to secure or sustain sponsorship and are easily deprioritized. Every time AI implementations fail to deliver on vague goals, which is quite often, the trust deficit only grows.</p>
<p></p>
<h3>Three Pillars for Successful AI Adoption</h3>
<p>How can you, as a leader, address those challenges and set your organization up for success? Consider these three essential strategies.</p>
<h4>1. Use everyday analogies to make AI less threatening.</h4>
<p>Education is a prerequisite for meaningful AI adoption. When your end users don’t understand why they should use or trust AI, the initiative is dead on arrival. How can you make AI accessible to an audience that’s not digital-native?</p>
<p>AI is no longer a rarity in everyday life. Some people don’t realize that they already use AI dozens of times every single day. Don’t we unlock phones with facial recognition? Aren’t even unbranded smartwatches good at detecting workout activities or flagging an irregular heart rhythm? Don’t some people delight at discovering long-lost school buddies through Facebook or Instagram friend recommendations?</p>
<p>Each of these examples is an instance of AI at work. In conversations with leaders, when I share these as examples of sophisticated AI use by the general public, it surprises them every single time. Once the technology is reframed this way, conversations can begin to shift from fear of AI to a curiosity around where else it might be at play. You make real progress when you demystify AI through familiar experiences rather than technical lectures.</p>
<p>This framing also enables a more honest discussion about the potential of AI and the threat to jobs. In many professions, people then begin to appreciate that they are more likely to lose opportunities not to the AI itself but to other humans who know how to use AI better. This strengthens AI’s positioning as assistive and AI tool use as another skill to acquire.</p>
<p>Take the case of AI platform Hey Bubba, designed for trucking owner-operators and small trucking companies. Instead of using dashboards or complex workflows, the system operates entirely through voice. Drivers can search and book freight, negotiate with brokers, find parking, and book hotels through natural conversations, with the help of AI. This service works because it builds on familiar uses of AI assistants, such as Siri and Alexa, and thus feels natural.</p>
<h4>2. Integrate AI into systems people already use.</h4>
<p>Is it easier to renovate a house or ask people to move into a brand-new one with unfamiliar rooms, rules, and routines? With AI adoption, you want to take the renovation approach. It’s a blunder to attempt a big-bang rollout of AI across an organization.</p>
<p>Always start with incremental changes to existing workflows and software. Remember that your teams already use dozens of software tools. These are the best starting points where leaders can inject AI and gently nudge user adoption.</p>
<p></p>
<p>For example, most front-line teams already live inside software, such as billing systems, customer relationship management systems, dispatch tools, maintenance software, or safety logs. Some of these systems may be clunky, but they are heavily used and largely unavoidable. The pain points within these systems could act as perfect entry points to introducing AI — places where users could see the value and welcome the initiative with open arms. When AI meets people where they already work, curiosity replaces resistance.</p>
<p>Take the case of fleet maintenance. Most technicians and supervisors already spend their days inside a computerized maintenance management system. Work orders are logged there. Inspections are recorded there. Breakdowns are investigated there. </p>
<p>An effective approach to introducing AI that can predict vehicle failures, for example, is to embed AI directly into the maintenance systems users already trust. AI can flag recurring fault codes, highlight assets with rising failure risk, or suggest prioritizing certain work orders before a breakdown occurs. </p>
<h4>3. Quantify AI’s impact using metrics people already track.</h4>
<p>Once you make AI accessible and identify familiar avenues to inject it, the quickest way to earn buy-in is to lead with the business result it unlocks. </p>
<p>Start by anchoring AI value to outcomes that stakeholders really care about and are judged on. Usually, there are two perspectives: creating upside (growth or throughput) or preventing downside (lost revenue or risk reduction). Examples of upside metrics are win rates and asset utilization, while downside metrics include cost leakage and service disruptions. Remember: New KPIs always trigger debate and delay action, whereas familiar metrics accelerate alignment.</p>
<p>Next, pick a combination of short-term impact and long-horizon projections. Sticking just to lag metrics could disillusion stakeholders, who need to see quicker momentum to retain confidence and excitement for AI. A reduction in customer complaints, for instance, is a lead metric that validates short-term progress, while incremental revenue from repeat customers is a lag metric that might need a few quarters to start materializing.</p>
<p>Consider the <a href="https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/unlocking-profitable-b2b-growth-through-gen-ai" target="_blank" rel="noopener noreferrer">example of an industrial materials distributor</a> focused on accelerating growth. The company struggled to systematically identify and act on new business opportunities. Field sellers relied on manual, time-intensive methods, such as driving through cities to visually spot new construction projects. The process was inconsistent, slow, and difficult to scale.</p>
<p>The company built an AI engine that combined internal sales data with external signals to score and prioritize potential opportunities and recommend relevant products. Generative AI was then applied to extract insights from unstructured public data, such as construction permits, to identify upcoming capital projects.</p>
<p>These insights were embedded into existing sales workflows to personalize outreach at scale. The approach unlocked new opportunities in the first year, significantly expanding the sales pipeline and improving success rates for email outreach — both of which were existing sales metrics that stakeholders already cared about.</p>
<p></p>
<h3>Where AI Adoption Is Really Won or Lost</h3>
<p>In late-adopting industries, AI doesn’t fail because the technology falls short. AI often fails because leaders underestimate the human and operational context in which AI tools are introduced. We must remember that front-line skepticism is not resistance to progress — it’s just a rational human response that can be influenced when tackled strategically.</p>
<p>The organizations that move fastest follow a clear progression. They demystify AI by promoting understanding among people; embed AI into existing workflows before forcing new ones; and prove AI’s value using metrics that are already being used to reward or penalize people. When these conditions are met, adoption becomes a pull factor as opposed to a hard push.</p>
<p>The way forward for late-adopter industries is not to imitate tech-first sectors but to adopt AI on their own terms. Successful leaders treat AI as a capability to be woven incrementally into daily work rather than a system to be rolled out abruptly. In these environments, user comfort and trust, not algorithms, ultimately determine whether AI delivers on its promise.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/the-human-side-of-ai-adoption-lessons-from-the-field/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Managing Up: A Skill Set That Matters Now</title>
				<link>https://sloanreview.mit.edu/article/managing-up-a-skill-set-that-matters-now/</link>
				<comments>https://sloanreview.mit.edu/article/managing-up-a-skill-set-that-matters-now/#comments</comments>
				<pubDate>Mon, 13 Apr 2026 11:00:31 +0000</pubDate>
				<dc:creator><![CDATA[Phillip G. Clampitt and Bob DeKoch. <p>Phillip G. Clampitt is the Blair Endowed Chair in Communication at the University of Wisconsin-Green Bay. Bob DeKoch is the founder of the leadership consulting firm Limitless and a former president of The Boldt Company. They are the coauthors of <cite>Leading With Care in a Tough World: Beyond Servant Leadership</cite> (Rodin Books, 2022).</p>
]]></dc:creator>

						<category><![CDATA[Communication]]></category>
		<category><![CDATA[Human Psychology]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[Leadership Development]]></category>
		<category><![CDATA[Leadership Style]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Managing Your Career]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Getty Images Are you skilled at managing up? If your talents are lacking when it comes to managing and dealing with the people above you in the organizational hierarchy, you can find yourself mired in some unpleasant and career-harming situations. Maybe you’re frustrated by a micromanaging supervisor or feeling marginalized by [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Clampitt-1290x860-1.jpg" alt="" class="wp-image-126588"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Getty Images</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">Are you skilled at managing up?</span> If your talents are lacking when it comes to managing and dealing with the people above you in the organizational hierarchy, you can find yourself mired in some unpleasant and career-harming situations. Maybe you’re frustrated by a micromanaging supervisor or feeling marginalized by them. Maybe you feel constantly in the dark about your manager’s expectations, or you’re tired of absorbing an outsize number of shocks for your team. Any of these can be a warning signal that you need to work on effective upward communication and leadership. </p>
<p>It’s an important set of skills right now. With some organizations using artificial intelligence to eliminate middle layers of management, the ability to manage up has become even more vital to your career — and your organization’s success. Leaders above are often unaware of what they don’t know, and they might be misled by AI.</p>
<p>If you want to strengthen your ability to lead up, you need to know how to assess your skills — and bolster them.</p>
<p>We define effective managing up, or upward leadership, as “listening to those higher in rank and influencing them to assist you and your team to better embody the organization’s values and fulfill its mission, strategy, and goals.”<a id="reflink1" class="reflink" href="#ref1">1</a> Successful upward leaders create sustainable wins for the boss, team, and organization.</p>
<p> </p>
<p>Notice that this definition starts with listening. Just because someone wrote down the organization’s values, mission, strategy, and goals on ever-available, wallet-sized notecards or displayed them in a flashy PowerPoint graphic doesn’t mean that everyone will interpret the ideas in a similar and synergistic fashion. The written word is not enough. Understanding the nuances of interpretation requires active listening for unstated sentiments. </p>
<p>Leading up also, of course, involves influencing. Effective upward leaders establish connections, circumvent problems, and convince those in power to embrace opportunities, innovations, and novel insights. But assisting is equally important. Think of an NBA assist wizard like LeBron James who knows when and where to deliver the ball to other players so they can score. Assisting requires proper alignment between team members, knowledge of who is in position to score, and a willingness to let others shine.</p>
<h3>Three Roles You Play While Managing Up</h3>
<p>Based on surveys of thousands of employees and hundreds of interviews with midlevel managers, we discerned that people leading up assume three interrelated roles: </p>
<p><strong>Buffer.</strong> The buffer dampens frustrations from above (and below), absorbing complaints, gripes, annoyances, and, potentially, offensive remarks. Successful buffers actively listen for underlying (often unstated) sentiments and seek understanding of key (but often vague) goals to protect others from irrelevant or unintended messages.</p>
<p><strong>Translator.</strong> The translator receives information, directives, and perspectives from above (and below). Then they convey the meaning in the language of the audiences at those levels, minimizing potential misunderstanding while respecting the sensibilities of the audience. </p>
<p><strong>Advocate.</strong> The advocate seeks to persuade or dissuade others in positions above (or below) their own. This could mean sharing differing opinions, arguing for a new direction, or pushing back on a new idea or policy.<a id="reflink2" class="reflink" href="#ref2">2</a></p>
<p>It’s not enough to be skilled at one of these roles. Artfully leading upward requires an integration of all three. For example, advocates must translate a pushback comment into a language understood by others while buffering away minor issues. Likewise, a buffer must act as a translator when anticipating how pushback language might be misinterpreted by people above. The translation may, in turn, result in advocating for a change in the directive’s wording to increase the odds of acceptance. </p>
<p>There is no magic formula to determine the right balance, because it will vary with each situation. However, leaning too heavily into one role usually signals problems. If you, as a leader, spend most of your time buffering employees from verbal storms from on high, then it might be time to augment your role as an advocate. </p>
<p>Leading upward does not come naturally to most people. In fact, in his 2001 book, <cite>Leading Up: How to Lead Your Boss So You Both Win</cite>, Wharton professor Michael Useem suggested that just one-third of managerial employees had the necessary skills and desire to do so.<a id="reflink3" class="reflink" href="#ref3">3</a> But you can rewrite your own story by properly assessing your upward leadership talents and then strategically applying them. </p>
<h3>Assess Your Ability to Manage Up</h3>
<p>The best way to improve your upward leadership acumen starts with assessing your current talent level. These three questions can help you judge.  </p>
<p><strong>What role do you primarily perform when you are most frustrated?</strong> Aggravation, frustration, and irritation go with any job but can also signal role imbalance. For example, if you feel micromanaged, you may be overplaying the buffer role and not voicing concerns (the advocate role) about optimizing your own working environment.</p>
<p><strong>What role do you primarily perform when you are in a state of flow?</strong> In his seminal 2008 book, <cite>Flow: The Psychology of Optimal Experience</cite>, Mihaly Csikszentmihalyi describes flow as “a sense that one’s skills are adequate to cope with challenges at hand. … Concentration is so intense that there is no attention left over to think about anything irrelevant.”<a id="reflink4" class="reflink" href="#ref4">4</a> Ideally, your state of flow involves the skillful and seamless fulfillment of all three roles. But that mastery rarely happens, because we all have a tendency to lean too heavily on a role or skill that comes naturally to us. For example, selling or advocating may be your “happy place,” but leaning on that ability alone will not allow you to excel at upward leadership. For that, you’ll need to master the skills of buffering and translating.</p>
<p><strong>Are you equally comfortable performing these roles in both directions (upward and downward)?</strong> Many people selectively employ their buffering, advocating, and translating skills when communicating with people at higher authority levels. This might be healthy in some cases, but it could also be a red flag, revealing that you lack a healthy relationship with those in power and are unwilling to engage in candid, if sometimes difficult, conversations.</p>
<h3>Build Three Key Skills to Manage Up Better</h3>
<p>Once you’ve thought through your role tendencies, it is time to build your buffering, translating, and advocating skills. </p>
<h4>Buffering</h4>
<p>Buffering skills and sensibilities are largely self-taught. Take cues from politicians, coaches, or leaders you admire. Watch successful leaders during press conferences. Some of them ignore the passion of the critic, others deflect unpleasant issues, and some selectively listen for words that they can turn to their advantage. Building up this emotional thick skin takes time and perspective. </p>
<p></p>
<p>Alida Al-Saadi, a former senior executive at Korn Ferry and Accenture, shared this incident: “A manager repeatedly pushed me to be ‘more concise,’ despite being famously long-winded himself. At first it felt unfair. Eventually I understood that thick skin isn’t arguing the irony; it’s hearing what someone needs from you and deciding, deliberately, how to strategically adjust.”<a id="reflink5" class="reflink" href="#ref5">5</a> In short, buffering her reactions and deferring the debate about the accuracy of his critique enhanced their working relationship. </p>
<p>However, buffering does not mean just passively absorbing blows. After all, a shock absorber can only absorb so many shocks before the source of the trouble has to be addressed. Good buffers learn to have productive conversations with their superiors by identifying key issues and rephrasing concerns that might be red flags for their team. Skilled buffers actively listen to engage in productive conversations that support team motivation and performance. This means tuning your antenna to what’s not being said and homing in on ideas that need further development.</p>
<h4>Translating</h4>
<p>Turning your own or your team’s reactions, concerns, or feelings into words that a superior can understand may be all it takes to shift that leader’s position, tweak an idea, or change a disagreeable behavior; it’s one step short of advocacy. This requires an underappreciated ability to convey emotional reactions in a respectful manner. </p>
<p>For example, sometimes employees who first hear about a major organizational change react with colorful and offensive language.<a id="reflink6" class="reflink" href="#ref6">6</a> In those cases, effective leaders accurately relay those sentiments to the higher-ups without sharing personal invectives. A descriptive statement like, “They weren’t very happy” or “They expressed their displeasure in strong language” allows for further discussion that focuses on the substantive issues driving the reactions. </p>
<p>Building your translating skills sometimes means learning new vocabulary. That’s because you should shift your reporting from a direct to an indirect approach for more contentious issues. Directly pushing back with a comment like “I disagree” isn’t always the best option. An indirect and often more effective approach could be to say, “If someone were to play devil’s advocate, they might say …” or “Is there another way to look at this issue?” These phrases distance the pushback in a manner that does not directly challenge the egos of the people above.</p>
<p></p>
<h4>Advocating</h4>
<p>Speaking up for your team, say, by nudging superiors in a different direction, represents the most challenging role. What are the best ways to do it? For starters, link to the superior’s underlying motivations, sensibilities, and mental framework. Successful upward leaders frame their team’s reaction to an idea or policy change by first acknowledging the positive intentions of the idea or policy before sharing the team’s suggested tweaks. </p>
<p>They also provide evidence that their superiors find credible. Different supervisors value different kinds of evidence to arrive at conclusions. Some put more faith in statistics, AI projections, or models, while others trust case studies, expert advice, personal testimonies, or historical analogies. </p>
<p>Finally, sense when to back off. Some leaders mistakenly expect quick or even instantaneous agreement from their superiors after proposing initiatives, program tweaks, personnel changes, or innovative suggestions. However, persuasion often requires patience and a willingness to back off at the right time to allow others time to shift the tumblers in their minds before locking something new in place. Pushing too hard or too soon can close the door on any new ideas.</p>
<h3>Habits of Successful Upward Leaders</h3>
<p>Skill-building sets the stage, but successful upward leaders also use the following strategies regularly to maximize their performance and help their organizations thrive.</p>
<h4>Actively build a relationship of candor and trust with people above you in the hierarchy.</h4>
<p>Do you reflexively assume that you are fully trusted by those above? A misreading of interpersonal dynamics can prove to be frustrating, befuddling, and problematic, and can introduce relationship troubles: You might excessively buffer the superior from challenges you face in your department (unwarranted buffering), be overly candid about your own reactions or your employees’ outbursts (unedited translating), or offer unwelcome advice (inappropriate advocating). Instead, consider taking the following actions to establish an empowering relationship of trust.</p>
<p><strong>Take the first step.</strong> Ideally, superiors would seek out and build robust, healthy relationships with direct reports. But in our research, we’ve found that to be more the exception than the rule. Consequently, leaders in subordinate positions must often take active steps to build strong, candid relationships.<a id="reflink7" class="reflink" href="#ref7">7</a> Sometimes that requires the assertiveness and subtlety of a mixed martial arts fighter like Ronda Rousey. Yes, <em>subtlety</em>: Rousey was able to persuade the CEO of the Ultimate Fighting Championship, Dana White, to create a women’s division — even though he had publicly declared that he’d never do it. She took the first step by requesting a 15-minute meeting with White to seek career advice, and then effectively advocated for her idea. The meeting morphed into a 45-minute discussion and resulted in the new UFC women’s division.<a id="reflink8" class="reflink" href="#ref8">8</a>  </p>
<p><strong>Mind the cadence and robustness of meetings with your supervisors.</strong> Your investment in establishing a relationship with superiors can dwindle away without routine and robust communications. The communication cadence needs to keep pace with the fast-changing organizational dynamics. And discussions need to be robust enough to allow the relationship to emerge beyond a position-to-position discussion to more of a person-to-person dialogue. Ideally, that means regularly scheduled face-to-face discussions with your boss, plus skip-level meetings with other people above you in the hierarchy. Advocating for such a time commitment may require some lobbying, but it will spawn benefits by minimizing disconnects and maximizing organizational alignment.<a id="reflink9" class="reflink" href="#ref9">9</a></p>
<p></p>
<p><strong>Avoid assuming that what worked with one supervisor will work with another.</strong> Just because a previous supervisor trusted you to be a great buffer, translator, or advocate, it doesn’t mean a different person in the organization will. While working with various people in the hierarchy above you, you must seek out signals about what problems you can handle on your own without reporting above (buffering). Additionally, you need to search for cues about what issues are off-limits when considering offering unsolicited advice (buffering and advocacy). Your supervisor might welcome tweaks to organizational strategy, but those higher up may not be as open to the pushback.</p>
<h4>Adopt an educational mindset.</h4>
<p>George Reed served as a dean at the University of Colorado — Colorado Springs and an instructor at the U.S. Army War College. He smilingly reminded us, “I’ve had to educate more than a few new chancellors and commanders in my career.”<a id="reflink10" class="reflink" href="#ref10">10</a> When someone new assumed command, Reed started from zero by providing background about his department or division and then sought to earn trust with the newcomer to buffer, advocate, and translate as he saw fit. </p>
<p>Emotionally, this may seem like going backward, but it is essential to establishing a productive working relationship. Sometimes a well-selected list of “10 things everybody should know about our department” does the trick and starts an illuminating educational discussion.<a id="reflink11" class="reflink" href="#ref11">11</a> </p>
<p>Take the following actions to bolster your educational mindset. </p>
<p><strong>Assess the risks of advocacy.</strong> Deciding how and when to advocate revolves around the question “How open will my superior be to my influence attempt?” Correcting a client’s misspelled name on a pending document typically would be zero risk. On the other hand, drawing your supervisor’s attention to an annoying personal habit of theirs, such as always being late to meetings, would be a higher risk (as outlined in the table below).</p>
<div class="callout-highlight">
<aside class="l-content-wrap">
<article>
<h4>Common Conversation Points: Mind the Risk Level</h4>
<p class="caption">
<table id="Chart1" class="chart-vertical-stripes no-mobile">
<thead>
<tr>
<th><strong>Higher-Risk Issues</strong></th>
<th><strong>Lower-Risk Issues</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>
<ul>
<li>Annoying personal qualities (such as interrupting others or pettiness)</li>
<li>Character flaws (such as arrogance or impulsiveness)</li>
<li>Competency concerns</li>
<li>Ethical issues (such as dishonesty)</li>
<li>Personal-life concerns</li>
<li>Policy disagreements</li>
<li>Poor performance (such as missed goals)</li>
<li>Unsolicited pushback</li>
</ul>
</td>
<td>
<ul>
<li>Positive operational results</li>
<li>Minor policy tweaks</li>
<li>Differing technical interpretations</li>
<li>Praise</li>
<li>Differing data interpretations</li>
<li>Solicited pushback</li>
<li>Recognition of personal/professional accomplishments</li>
<li>Small changes on documents/presentations</li>
<li>Fresh insights on challenges</li>
<li>Requests for career advice</li>
</ul>
</td>
</tr>
</tbody>
</table>
<p><!--IMAGE FALLBACK FOR MOBILE BELOW --><br />
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Clampitt_Upward_Essay_Table_REV.jpg" alt="A two-column table comparing higher-risk and lower-risk issues. Higher-risk issues include: annoying personal qualities (such as interrupting others or pettiness), character flaws (such as arrogance or impulsiveness), competency concerns, ethical issues (such as dishonesty), personal-life concerns, policy disagreements, poor performance (such as missed goals), and unsolicited pushback. Lower-risk issues include: positive operational results, minor policy tweaks, differing technical interpretations, praise, differing data interpretations, solicited pushback, recognition of personal/professional accomplishments, small changes on documents/presentations, fresh insights on challenges, and requests for career advice." class="no-desktop">
</p>
</article>
</aside>
</div>
<p>Issues can shift from one column to the other, depending on the particular supervisor-report relationship and the organizational culture. Your goal over time, of course, is to move as many issues as possible to the second column.</p>
<p>As a relationship matures, people learn to better identify others’ touchy subjects and anticipate their likely responses to a direct style of advocacy. A high-quality relationship between leaders allows a high degree of candor and a high volume of advocacy.</p>
<p>But lower-quality relationships or newer ones often improve with the deft use of more indirect advocacy and thoughtful translation. </p>
<p>Regardless of relational quality, a strong mutual commitment to shared values allows for more direct advocacy. For example, on a construction site or factory floor that has a strong safety culture, candid advocacy about potential safety concerns can be successful regardless of rank or relationship status. </p>
<p><strong>Reserve private conversations for more delicate matters.</strong> Unfortunately, not all leaders welcome pushback in public forums. Advocating for a shift or a tweak to a superior’s pet project in front of a group will often shut down further discussion because it may threaten the leader’s ego.</p>
<p>For example, consider a supervisor who occasionally launches into an annoying behavior like overselling initiatives to others and not allowing time for further discourse. Enlightening the supervisor about this off-putting tendency should usually be reserved for private, one-on-one, ego-protecting conversations. Discussions like these are particularly tricky because selling may be the supervisor’s forte. Often, someone’s greatest ability has an unrecognized downside that needs to be throttled back in certain situations or offset with other skills. </p>
<h4>Routinely rebalance your upward leadership role profile.</h4>
<p>Your upward leadership role profile should not be static. Ideally, relationships between leaders at different levels improve, and their mutual commitment to shared values evolves. Consequently, the amount of energy devoted to the roles of buffer, translator, and advocate will become more balanced and shift away from more dysfunctional allocations, like excessive advocacy or heavy buffering. Consider the following tactics when periodically rebalancing your profile: </p>
<p><strong>Reflect on how your allocation maximizes both your professional fulfillment and organizational contribution.</strong> The ideal allocation of the roles you play depends on your specific situation, goals, and the managerial style of your supervisor. Ask yourself, “What is the optimal percentage of my energy that should be devoted to buffering, translating, and advocating to optimize my growth and organizational performance?” </p>
<p>As a general rule, aim to build relational trust so that the percentage of your time devoted to buffering decreases to 10%-20% while advocating and translating (40%-45% each) become the predominant roles. This type of allocation maximizes professional development and organizational growth but leaves enough time for you to serve as a proper shock absorber for the inevitable miscues, frustrations, and rumors that occur.</p>
<p><strong>Test and recalibrate.</strong> Shifting your role balance requires courage, particularly when everything seems to be going well. And, as with any new skill, both mastering and feeling comfortable with it will require some practice. For example, making the conscious effort to advocate more or to throttle back can be unsettling; monitoring results allows you to tweak both the skills and the balance among the three key roles. Other people on your team may notice your behavior change as well. If questioned, you could say, “I’m experimenting with a different approach to exert influence.”</p>
<p></p>
<p><strong>Entertain other opportunities.</strong> Our multiyear research consistently revealed that employees’ relationships with their direct supervisor greatly influence their level of job satisfaction, engagement, and productivity.<a id="reflink12" class="reflink" href="#ref12">12</a> So, assuming that you’ve tried the strategies above and your role profile as a buffer, translator, and advocate continues to be unfulfilling, it may be time to look for other job opportunities that will allow you to flourish. After all, successful upward leadership requires superiors who are also willing to change. </p>
<p></p>
<p>Leading upward represents one of the most significant and least appreciated talents you can master. It requires courage tempered with discretion, thoughtful advocacy coupled with inquisitive listening, and an eagerness to debate peppered with a zeal to engage in calculated silences. </p>
<p>Practicing when and how to use these polarized aptitudes allows leaders to seamlessly integrate the roles of buffer, translator, and advocate. Learning to do so may not bring many accolades or trophies attesting to your “upward leadership excellence.” But mastering upward leadership will, at the very least, ensure career fulfillment and, at the very best, organizational excellence. Think of midlevel leaders you know who rose through the ranks or ensured great outcomes for their teams: Most have mastered the difficult art form of respectfully and resolutely leading up. And perhaps improving your own upward leadership acumen will spur you to further cultivate a climate within your own team that encourages upward leadership, improving employee engagement and work outcomes.<a id="reflink13" class="reflink" href="#ref13">13</a></p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/managing-up-a-skill-set-that-matters-now/feed/</wfw:commentRss>
				<slash:comments>1</slash:comments>
							</item>
					<item>
<title>The Trap That Skilled Negotiators Miss</title>
				<link>https://sloanreview.mit.edu/article/the-trap-that-skilled-negotiators-miss/</link>
				<comments>https://sloanreview.mit.edu/article/the-trap-that-skilled-negotiators-miss/#comments</comments>
				<pubDate>Sun, 12 Apr 2026 11:00:25 +0000</pubDate>
<dc:creator><![CDATA[Monica Wadhwa and Krishna Savani. <p>Monica Wadhwa is an associate professor in the Department of Marketing and Supply Chain Management at Temple University’s Fox School of Business. Krishna Savani is a professor of management at Hong Kong Polytechnic University. Both authors contributed equally to this article.</p>
]]></dc:creator>

						<category><![CDATA[Decision-Making]]></category>
		<category><![CDATA[Human Psychology]]></category>
		<category><![CDATA[Managerial Psychology]]></category>
		<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[Negotiations]]></category>
		<category><![CDATA[Pricing]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Frontiers]]></category>

				<description><![CDATA[Brian Stauffer/theispot.com Say you walk into a car dealership determined to stay within budget. The salesperson shows you a car you like and quotes a price of $41,435. You know there’s room to negotiate, but when it’s time to counter, that first number quietly takes over. Your counteroffer, the concessions, and the final deal all [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/2026SUMMER_Savani-1290x860-1.jpg" alt="" class="wp-image-126477"/><figcaption>
<p class="attribution">Brian Stauffer/theispot.com</p>
</figcaption></figure>
<p></p>
<p></p>
<p><span class="smr-leadin">Say you walk into a car dealership</span> determined to stay within budget. The salesperson shows you a car you like and quotes a price of $41,435. You know there’s room to negotiate, but when it’s time to counter, that first number quietly takes over. Your counteroffer, the concessions, and the final deal all end up orbiting around $41,435.</p>
<p>That’s anchoring at work. In negotiations, first offers become psychological reference points, and people often fail to adjust far enough away from them, even though they are free to counter with any amount they want.</p>
<p>Although the anchoring effect is well documented, what makes this bias so frustrating is that it persists even among skilled and experienced negotiators. It shows up in procurement, strategic deals, and executive compensation conversations — any situation in which one party gets a number on the table early and the other party must respond under time pressure.</p>
<p>If you’re preparing for an important negotiation, the standard advice is familiar: Do your homework, know your target, and don’t reveal too much too soon. Those suggestions are useful, but none of them changes the fact that when the first offer lands, your mind starts thinking of counteroffers close to that number. Our <a href="https://doi.org/10.1016/j.jesp.2023.104575" target="_blank">recent research</a>, published in the <cite>Journal of Experimental Social Psychology</cite>, identified a simple way to reduce the anchoring effect when you don’t control the first offer: Adopt a <em>choice mindset</em> right when you see the first offer.</p>
<p></p>
<h3>The Power of Choice Reminders</h3>
<p>A <a href="https://doi.org/10.1016/j.obhdp.2019.05.003" target="_blank">choice mindset</a> is a state of mind in which people perceive the availability of more choices than they are presented with. When in this mindset, people are more likely to recognize the options available to them, including nonobvious options (such as delaying a decision or changing the structure of a deal), particularly in situations in which they feel constrained (such as difficult negotiations).</p>
<p>In everyday life, a choice mindset is the difference between thinking “I have no choice; I have to take what I can get” and thinking “I have choices and can even consider options that have not been presented to me.” The key insight is that <em>feeling</em> constrained is not the same as <em>being</em> constrained, and the subjective perception of choice can be nudged.</p>
<p>When someone quotes a price of $41,435, your brain starts searching for a reasonable counter in the neighborhood of that number rather than exploring the full range of possible counteroffers. Our research tested the idea that a choice mindset can widen that search. The mechanism is cognitive: A choice reminder leads people to think of other potential counteroffers, which weakens the anchor’s dominance and helps negotiators move further away from the first offer.</p>
<p></p>
<p>We tested the effect of this reminder across seven studies with U.S. participants recruited through online research platforms. The intervention was intentionally minimal. In the choice condition, after seeing a seller’s quoted price, participants received a simple reminder that they could choose their offer (“You can choose to offer any amount that you want. It’s your choice!” for example). The control condition received standard negotiation instructions without that explicit choice reminder. The practical translation is straightforward: A small prompt pushed people to counter more aggressively and rely less on the seller’s opening number.</p>
<p>For example, in one of the studies, based on a used-car bargaining scenario, participants were shown cars along with detailed information and were quoted prices ranging from $15,599 to $19,781 — intentionally precise numbers because <a href="https://doi.org/10.1037/0022-3514.81.4.657" target="_blank">prior research</a> suggests that precise first offers serve as potent anchors. As expected, the choice reminder reduced anchoring: Participants in the choice condition countered with lower offers than those in the control condition. The implication for leaders is that this isn’t just a trick for minor purchases; it can be applied in real negotiations, where the other side’s opening offer is presented as a carefully calculated figure.</p>
<p></p>
<h3>Having More Options Helps Negotiators</h3>
<p>Why is such a simple reminder so effective? We investigated the mechanism directly by measuring whether a choice reminder changes what negotiators think about before they commit to a counteroffer. In a study that tasked participants with negotiating the price of a painting, we asked them to list all of the offers they could imagine making rather than committing to a single figure. The choice reminder produced a small but significant increase in the number of counteroffer options participants generated. This matters because anchoring is fundamentally a cognitive spotlight problem: The anchor dominates the focus, and any nudge that expands the set of options you consider can loosen that grip.</p>
<p>We further tested whether simply thinking of more offers could trigger this de-anchoring by randomly assigning participants to generate either two or eight potential offers before making their final counter. Generating eight offers significantly reduced anchoring because of the breadth of the range those offers spanned: Participants who generated eight offers produced a much wider set of options, and that variance statistically explained why their final counteroffer moved further away from the initial anchor. Ultimately, the way out of an anchor is not just grit or negotiation bravado; it hinges on widening the decision space before you make your move.</p>
<p>Negotiators in a choice mindset can avoid anchoring on first offers not only by generating more counteroffers but also by shifting the negotiation to other points of discussion. A book publisher negotiating with an agent who is asking for a $100,000 advance, for instance, can weaken the effect of the first offer by pivoting to negotiating other variables, such as royalty tiers and payment structures, thereby expanding the scope of the discussion and reframing an adversarial exchange into a collaborative problem-solving session.</p>
<p>This mechanism points to a simple practice you can use in negotiations. When the other side makes a first offer, you should aim to create a brief choice pause. This moment is not about theatrics; it’s about preventing the first number from becoming your default starting point. During this pause, try to think of multiple counteroffers that are within the bounds of reason, including a few that might appear aggressive but can still be defended based on relevant reference points. The goal is not to counter with the most aggressive number possible but to generate credible options that are not influenced by the first offer. If you have come to the negotiation table with your own first offer prepared, but your counterparty makes the first offer, rather than using their offer as a baseline for negotiations, counter with your preplanned first offer (and the accompanying rationale) even if it appears quite far from theirs.</p>
<p>This practice is even more effective when integrated into your preparation. Rather than just setting a single target and a walk-away point, prepare a set of counters that spans a meaningful range. This broader map protects you against the pull of a surprising anchor. By shifting the focus from a single point to a prebuilt range of possibilities, you change the tone of the internal deliberations before you ever respond externally.</p>
<h3>How Distractions Can Derail Negotiations</h3>
<p>There is an important caveat, and it’s one that will resonate with any executive who has had to negotiate a deal while juggling a dozen competing priorities: This strategy depends on attention and cognitive bandwidth. We predicted that if the choice reminder works by prompting people to think through more counteroffers, then it should be weaker when cognitive resources are constrained. That’s exactly what we found. In a study that used a divided-attention paradigm, participants negotiated while brand logos were flashed on the screen; they were asked to count certain logos, a task designed to mimic distraction and multitasking.</p>
<p>Under normal conditions, the choice reminder reduced anchoring. Under high cognitive load, the effect disappeared: Participants in the choice condition were just as anchored as those in the control condition.</p>
<p>This boundary condition has an immediate managerial implication. If you want to benefit from a choice mindset, you can’t treat negotiation as a task you do while triaging email, scanning Slack, or squeezing a call into a depleted part of your day. The moment you receive the first offer is exactly when you need enough bandwidth to generate alternatives. When you’re distracted, your mind reverts to the easiest available path, which is to negotiate around the anchor. In practice, that may mean setting norms (such as “We don’t counter on the spot for high-stakes deals”) or simply buying time (like asking for a short break or a follow-up call) so that you can do the brief work of generating your set of counteroffers.</p>
<p></p>
<p>We also tested and ruled out an alternative explanation that leaders sometimes assume: that a choice reminder simply makes people more self-interested or more motivated to win, leading them to make tougher offers. In one study, we measured motivation to get a low price and perceived task importance. Those measures did not differ between conditions, even though the choice reminder still reduced anchoring.</p>
<p>That pattern is consistent with a cognitive understanding of negotiation: The choice nudge changes how people think, not just how hard they want to bargain.</p>
<p>The implication is that a choice mindset is most useful when you already know which way you want to move (price down, salary up, liability down, scope up, and so on). When the right direction is uncertain, you should pair this approach with independent benchmarks and analysis so that you’re not simply widening the range without clarifying your strategic aim.</p>
<p>Anchoring is one of those biases that is easy to recognize in others but hard to avoid, especially because it operates in the flow of everyday work life. Yet the practical lesson from our research is encouraging: You don’t always need complex negotiation tactics to reduce it. Sometimes you just need a tiny moment of cognitive reframing. When you remind yourself that you have a choice, you’re more likely to generate alternatives, expand the range of possible counters, and move further away from the first number put in front of you.</p>
<p></p>
<p>The next time you receive a first offer, whether it’s from a supplier, a job candidate, a partner, or a counterpart in a strategic deal, try the following steps:</p>
<ol>
<li>Pause to consider the offer. Ask your counterpart for a moment to think.</li>
<li>Remind yourself: I have a choice.</li>
<li>Give yourself just enough time to create a few options for a counteroffer before you pick one.</li>
</ol>
<p>In many negotiations, that small shift can be the difference between your counteroffer being anchored to the initial offer and setting your own terms. Indeed, <a href="https://doi.org/10.1111/iere.12719" target="_blank">research has found</a> that the most likely outcome is the midpoint between the first offer and the first counteroffer.</p>
<p>Once you have a counteroffer in mind, you can draw from other research that has identified some best practices for ensuring that the negotiation that follows is successful. Aim to <a href="https://doi.org/10.1017/jmo.2020.47" target="_blank">shift the conversation</a> from haggling over a number to building a shared rationale for concluding a deal. When sharing your counteroffer, make the underlying criteria explicit (using market comparables, outside options, or precedents, for example), and invite the other side to respond with alternative objective criteria rather than a competing anchor. If several terms are on the table, move quickly from a single counter to two or three <a href="https://doi.org/10.1016/j.obhdp.2019.01.007" target="_blank">package offers</a> that are equally attractive to you but trade price against other issues; this helps surface priorities and unlocks value. Then concede slowly and deliberately, labeling each concession and tying it to a reciprocal move so that the negotiation stays organized around your counteroffer rather than drifting back toward the original offer. All of this is made possible by a brief moment of cognitive reframing — pausing to remind yourself that you have a choice — that loosens the anchor’s grip and lets you negotiate on your own terms.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/the-trap-that-skilled-negotiators-miss/feed/</wfw:commentRss>
				<slash:comments>1</slash:comments>
							</item>
					<item>
<title>Rethink Responsibility in the Age of AI</title>
				<link>https://sloanreview.mit.edu/article/rethink-responsibility-in-the-age-of-ai/</link>
				<comments>https://sloanreview.mit.edu/article/rethink-responsibility-in-the-age-of-ai/#respond</comments>
				<pubDate>Thu, 09 Apr 2026 11:00:22 +0000</pubDate>
<dc:creator><![CDATA[François-Xavier de Vaujany and Aurélie Leclercq-Vandelannoitte. <p>François-Xavier de Vaujany is a full professor in organization studies at Université Paris Dauphine-PSL and a senior researcher at DRM. Aurélie Leclercq-Vandelannoitte is a CNRS researcher at LEM — Lille Économie Management, which comprises Univ. Lille, the CNRS, and the IESEG School of Management.</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Decision-Making]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[Organizational Culture]]></category>
		<category><![CDATA[Risk Management]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Crisis Management]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Ethics]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leading Change]]></category>
		<category><![CDATA[Frontiers]]></category>

				<description><![CDATA[Mark Airs/Ikon Images Early one morning in 2018, a self-driving Uber vehicle fatally struck a pedestrian in Tempe, Arizona. The world had questions: Who was responsible? Was it the safety driver behind the wheel? The engineers who designed the algorithms? Uber’s leadership? Or the regulators who had allowed autonomous-vehicle testing? The inability to name a [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/2026SUMMER_Vaujany-1290x860-1.jpg" alt="" class="wp-image-126474" /><figcaption>
<p class="attribution">Mark Airs/Ikon Images</p>
</figcaption></figure>
<p></p>
<p></p>
<p><span class="smr-leadin">Early one morning in 2018</span>, a self-driving Uber vehicle <a href="https://www.nytimes.com/interactive/2018/03/20/us/self-driving-uber-pedestrian-killed.html" target="_blank">fatally struck a pedestrian</a> in Tempe, Arizona. The world had questions: Who was responsible? Was it the safety driver behind the wheel? The engineers who designed the algorithms? Uber’s leadership? Or the regulators who had allowed autonomous-vehicle testing? The inability to name a single culprit signaled a profound shift in how responsibility must be understood and attributed in the age of intelligent technologies.</p>
<p>As organizations deploy increasingly autonomous systems such as drones, trading bots, or algorithmic decision makers (like automated resume screeners or credit assessment tools), agency becomes distributed, emerging from the complex interplay of human and machine actions. Decisions, once linear and traceable, now unfold across networks of people and artificial intelligence systems, introducing new forms of influence and unpredictability.</p>
<p>For today’s leaders, this means that the old search for a culprit loses relevance. The real challenge is not to assign blame but to instead construct a shared narrative — to uncover not only what went wrong but how collective activities, assumptions, and technologies shaped the outcome. As our recent research, <a href="https://doi.org/10.25300/MISQ/2025/17970" target="_blank">published in <cite>MIS Quarterly</cite></a>, shows, forging organizational learning and resilience depends on this collaborative revisiting of how decisions happen and how stories of responsibility are constructed. We call this process <em>narrative responsibility</em>.</p>
<p></p>
<h3>Why Classic Models of Responsibility No Longer Work</h3>
<p>Classic theories of responsibility have rested on three core assumptions: that the world is fundamentally linear, with events following clear cause-and-effect logic; that decision makers act in a shared space and time, making the link between actions and consequences traceable; and that responsibility can be precisely attributed backward to an individual whose intentions and choices drive outcomes.</p>
<p>Consistent with these assumptions, when something goes wrong, organizations often enact traditional models of accountability by holding a senior leader personally responsible. For instance, after two fatal crashes of Boeing’s 737 MAX aircraft killed 346 people in 2018 and 2019, <a href="https://www.nytimes.com/2019/12/22/business/boeing-dennis-muilenburg-737-max.html" target="_blank">CEO Dennis Muilenburg</a> was swiftly dismissed as a visible response to the crisis. However, despite this action and promises of cultural change from his successor, the underlying quality and safety failures persisted — culminating in a door plug blowing off a 737 MAX midflight in 2024 and the departure of yet another CEO. Removing one individual rarely addresses the deeper, complex causes of organizational failure. </p>
<p>Such approaches to accountability have always faced limits, even before the rise of digital technologies. What’s new in the age of AI and automation is how much faster, more complex, and opaque decisions are becoming, making old models of accountability less tenable than ever. </p>
<p>Take the <a href="https://www.businessinsider.com/amazon-drone-crash-oregon-fire-2022-3" target="_blank">crash of Amazon’s Prime Air delivery drone</a> in Oregon in 2022. While <a href="https://www.faa.gov/uas/advanced_operations/nepa_and_drones/20250827_Amazon_Pendleton_OR_Written_ReEvaluation.pdf" target="_blank">official reports</a> focused on technical or operator errors, the reality is that accountability for such incidents is inherently distributed — across coders, approval teams, and operations or project managers. Actions and consequences are distributed in ways that old models of accountability simply cannot address. </p>
<p>This challenge demands a fresh approach to responsibility that moves from blame to narrative responsibility.</p>
<p></p>
<h3>Making Narrative Responsibility Real: Three Actionable Moves</h3>
<p>Translating narrative responsibility from theory to practice requires that leaders reframe how accountability is constructed, sustained, and experienced so that every incident becomes a catalyst for collective learning and continual improvement. To make this shift, organizations must embed narrative responsibility at every level. Here’s how leaders can put the principles of narrative responsibility into action:</p>
<p><strong>1. Map the real story — beyond the obvious.</strong> In the aftermath of an incident, organizational reviews — whether technical, legal, or managerial — often aim to converge toward a coherent causal account that enables closure and action. While such convergence is common and often necessary, it can also narrow the scope of responsibility by privileging stabilized explanations over contested or ambiguous ones. A narrative responsibility approach does not reject conventional audits but complements them by attending to how responsibility is constructed, anticipated, distributed, and gradually fixed through organizational storytelling, decision rationales, and silences over time.</p>
<p>Google’s response to its Gemini image-generation failure in early 2024 offers a partial model. When the tool generated historically inaccurate images, Google published a <a href="https://blog.google/products-and-platforms/products/gemini/gemini-image-generation-issue/" target="_blank">detailed public explanation</a> tracing the root cause to flawed diversity tuning and misguided model behavior. Meanwhile, in an <a href="https://www.npr.org/2024/02/28/1234532775/google-gemini-offended-users-images-race" target="_blank">internal memo</a>, CEO Sundar Pichai committed to structural changes, improved launch processes, and expanded red-teaming. This was genuine story mapping — naming what broke and why. </p>
<p>But a more comprehensive exercise might have identified competitive pressure to ship quickly, organizational incentives that discouraged cautious testing, and the gap between known risks and the decision to launch as factors to consider. Mapping the real story means going beyond the technical postmortem to surface the human and organizational dynamics that allowed failure in the first place. It means going beyond individual errors or broken code to understand how assumptions, data, and organizational routines interact — and where ambiguity, a lack of relevant anticipations, and misalignment take root.</p>
<p> </p>
<p><strong>2. Distribute ownership, not blame.</strong> In today’s complex AI-enabled organizations, decisions and outcomes emerge not from a single hand on the wheel but from dynamic interactions over time, which calls for a collective and distributed notion of responsibility. Real accountability depends on ongoing engagement and sensemaking across teams and functions. Too often, warnings or objections that were ignored or never voiced play as big a part as active missteps.</p>
<p>Forward-thinking organizations are creating formal structures, such as steering committees, incident review panels, traceability systems, and cross-functional advisory groups, to institutionalize narrative responsibility. These forums are designed as open, psychologically safe spaces where staff members at all levels can reflect on what happened, voice difficult truths, and collectively reconstruct how incidents unfolded. In health care, this shift is well underway: UCLA Health, for example, <a href="https://www.healthcareexecutive.org/archives/march-april-2020/the-promise-and-practice-of-a-just-culture" target="_blank">established a network</a> of trained culture champions and incident review committees that examine adverse events to surface systemic patterns and drive improvement across the organization. The aviation sector offers a proven model of this collective-learning approach: After an automation-related failure, airlines like Air France and KLM, in line with European Union Aviation Safety Agency regulations, convene multidisciplinary panels as part of their safety management systems. These panels, aligned with the principles of “just culture,” focus not on blaming but on extracting lessons and adapting systemically. This approach has demonstrably strengthened airline safety and customer trust.</p>
<p><strong>3. Embed reflection in everyday practice.</strong> For narrative responsibility to thrive, it must not be practiced only post-crisis; it must become organizational routine. Sustainable learning emerges when teams habitually review how stories of accountability are constructed — and reconstructed — across daily operations and the use of technologies like AI.</p>
<p>Some organizations add narrative review points to recurring meetings, asking, “What did we learn?” “Where did our assumptions or processes fail?” or “How did our actions contribute to the outcome?” (See, for instance, the chapter “<a href="https://sre.google/sre-book/postmortem-culture/" target="_blank">Postmortem Culture: Learning From Failure</a>” in Google’s book <cite>Site Reliability Engineering</cite>.) Others routinely include responsibility narratives in management reports, not only after incidents but as an ongoing practice — turning lessons learned into living documents that support continuous learning. <a href="https://www.academia.edu/41536010/Transformation_at_ING_A_Agile" target="_blank">ING Bank</a>, for instance, has built regular reviews and “retrospective learning sessions” directly into its <a href="https://www.bcg.com/publications/2018/human-resources-pioneering-role-agile-ing" target="_blank">agile routines</a>. After each sprint, teams discuss what went well, what could be improved, and how lessons learned from critical events can inform future work, to ensure that key insights connect day-to-day operations to broader conversations about ethics and risk.</p>
<p></p>
<p>When the three principles are enacted, they reshape not just day-to-day operations but how organizations collectively respond to failure at all levels. Returning to the opening example of Uber’s tragic self-driving car incident, the official response centered on individual fault: The safety driver was prosecuted, and <a href="https://www.nytimes.com/2018/03/26/technology/arizona-uber-cars.html" target="_blank">Uber halted its autonomous-vehicle program</a>. However, as far as we know, organizational and systemic factors like design decisions, safety culture, and regulatory gaps were extensively documented in the <a href="https://www.ntsb.gov/investigations/accidentreports/reports/har1903.pdf" target="_blank">official investigation</a> but received limited attention in subsequent public and judicial responses. A narrative responsibility approach — one that maps the real story with all stakeholders and techniques involved, distributes ownership beyond blame, and embeds ongoing reflection — would have invited all key actors to collectively examine what shaped the anticipated and realized outcomes. While this wouldn’t have reversed past harm, it could have surfaced deeper lessons, enabled more meaningful accountability, and driven more systemic change for the future.</p>
<p></p>
<h3>From Blame to Shared Narrative</h3>
<p>Sustaining narrative responsibility requires more than scattered initiatives. It must become part of an organization’s DNA.</p>
<p>As businesses adopt AI agents, they can no longer rely on compliance teams or retroactive audits to assign accountability. Instead, establishing a shared practice of responsibility by constructing, questioning, and evolving the organizational narrative, together, is a strategic, forward-looking imperative for all leaders and teams. </p>
<p>Embracing narrative responsibility is critical for today’s organizations, but it’s not a panacea. There are real risks, particularly if the process is used to diffuse or obscure accountability — especially when leaders control the story. It cannot substitute for legal or regulatory obligations: Frameworks like the <a href="https://www.nytimes.com/2025/07/10/business/ai-rules-europe.html" target="_blank">European Union’s AI Act</a> remain essential safeguards. And when responsibility is distributed across organizations, constructing shared accountability is complex and demands intentional openness and collaboration. For narrative responsibility to be transformative, it must complement — never replace — robust ethical and legal standards.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/rethink-responsibility-in-the-age-of-ai/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Gain Consumer Insight With Generative AI</title>
				<link>https://sloanreview.mit.edu/article/gain-consumer-insight-with-generative-ai/</link>
				<comments>https://sloanreview.mit.edu/article/gain-consumer-insight-with-generative-ai/#comments</comments>
				<pubDate>Wed, 08 Apr 2026 11:00:30 +0000</pubDate>
				<dc:creator><![CDATA[Neeraj Arora, Ishita Chakraborty, and Yohei Nishimura. <p>Neeraj Arora is the Arthur C. Nielsen Jr. Chair in Marketing Research and Education at the University of Wisconsin-Madison’s Wisconsin School of Business. Ishita Chakraborty is an assistant professor of marketing and the Thomas and Charlene Landsberg Smith Faculty Fellow at the Wisconsin School of Business. Yohei Nishimura is a doctoral student in the marketing department at the Wisconsin School of Business.</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Customer Behavior]]></category>
		<category><![CDATA[Data-Driven Marketing]]></category>
		<category><![CDATA[Marketing Analytics]]></category>
		<category><![CDATA[Marketing Innovation]]></category>
		<category><![CDATA[Marketing Research]]></category>
		<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Analytics & Business Intelligence]]></category>
		<category><![CDATA[Customers]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Marketing]]></category>
		<category><![CDATA[Marketing Strategy]]></category>

				<description><![CDATA[Stuart Kinlough/Ikon Images Marketing leaders often face a dilemma: Deriving the insights they need in order to make confident decisions can cost tens of thousands of dollars and involve several months of data gathering and analysis, by which time market conditions may have shifted. Can generative AI fundamentally reshape this calculus? Drawing on recent research, [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/2026SUMMER_Arora-1290x860-1.jpg" alt="" class="wp-image-126470" /><figcaption>
<p class="attribution">Stuart Kinlough/Ikon Images</p>
</figcaption></figure>
<p></p>
<p></p>
<p><span class="smr-leadin">Marketing leaders often face a dilemma:</span> Deriving the insights they need in order to make confident decisions can cost tens of thousands of dollars and involve several months of data gathering and analysis, by which time market conditions may have shifted. Can generative AI fundamentally reshape this calculus?</p>
<p>Drawing on recent research, including our own study published in the <a href="https://journals.sagepub.com/doi/10.1177/00222429241276529" target="_blank"><cite>Journal of Marketing</cite></a>, as well as interviews with marketing leaders from major organizations, we have identified five ways that large language models (LLMs) are beginning to transform the marketing function and reshape the $153 billion insights industry.<a id="reflink1" class="reflink" href="#ref1">1</a> LLMs can viably compress marketing research timelines from months to days by introducing new approaches for rapid concept testing, such as the use of synthetic consumer “digital twins,” and enabling qualitative research at scale. These techniques allow companies to better harness unstructured data and enable smaller research teams to conduct much larger studies than they could previously.</p>
<p>Organizations conduct marketing research to uncover consumer insights that guide strategic and tactical business decisions. Historically, insight generation has been a multistage, time-consuming, and labor-intensive process.</p>
<p>A typical marketing research pipeline includes problem definition, research design, study design, sample selection, data collection, data analysis, and insights delivery. Some aspects of marketing research are qualitative (such as interviews and focus groups), and others (surveys, for example) are quantitative in nature. These studies may be conducted by in-house marketing research teams or outsourced to agencies with specialized expertise. A research project can take a few weeks to several months, depending on its scope, and can cost anywhere from tens to hundreds of thousands of dollars.</p>
<p></p>
<p>Generative AI is making the consumer insight generation process substantially more efficient while also presenting novel ways to make the <a href="https://hbr.org/2025/05/how-gen-ai-is-transforming-market-research" target="_blank">research more effective</a>. In short, it is making the marketing research process faster and cheaper.</p>
<p>Just as AI-driven drug discovery has shortened the timeline from candidate screening to clinical-trial readiness, generative AI is shortening timelines from exploration to insights.<a id="reflink2" class="reflink" href="#ref2">2</a> AI is being integrated into the market research process with humans in the loop, as illustrated in the figure “How AI Is Integrated Into the Marketing Research Process.” In the early stages of research, problem definition and design are primarily guided by the decision maker. This is because critical factors — such as client experience, market intuition, and practical constraints like budget and timing — are human-led and challenging for AI to infer. Although the AI can help refine problem statements or brainstorm design options, its role during these early stages is typically minimal. In contrast, AI serves as an excellent collaborator in the remaining stages of marketing research.</p>
<p></p>
<p>In the study design phase of qualitative research, LLMs can be used to generate initial drafts of discussion guides for exploratory work. During sample selection, they can help identify respondent characteristics that align with the research goals. In the analysis phase, LLMs summarize long interviews, extract themes, and organize unstructured text into interpretable insights. As Paul Metz, CEO of C+R Research, said, “AI tools process and synthesize large volumes of transcript data within hours, detecting patterns and themes that previously took days to uncover.”</p>
<p>Such efficiencies allow teams to handle large volumes of qualitative data and work more productively. The speed and cost savings allow companies to shift from large, infrequent studies that take months to complete to smaller, more frequent studies aligned with decision cycles. This also empowers managers to test more ideas, iterate quickly, and adopt an experimentation-oriented mindset.</p>
<p>For quantitative research, LLMs can be used to quickly generate the first draft of a survey, report summary statistics, visualize the data, and debug analysis code as needed. These GenAI use cases allow the research team to delegate many of the rote tasks to the AI, use that time to focus on answering the business questions more effectively, and deliver insights faster.</p>
<div class="callout-highlight callout-highlight--transparent">
<aside class="l-content-wrap">
<article>
<h4>How AI Is Integrated Into the Marketing Research Process</h4>
<p class="caption">Early stages of marketing research are human-led; LLM-based AI tools can aid in the completion of tasks in later stages of the pipeline.</p>
<p><img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/SU_26_RF_Arora.png" alt="This figure shows the various stages of marketing research, with humans defining the problem and developing the high-level research design and AI working in partnership with humans through the later phases."/></p>
<p class="attribution">
</article>
</aside>
</div>
<aside class="callout-info">
<h4>The Research</h4>
<p><span class="blue">&bull;</span> In their <cite>Journal of Marketing</cite> paper, the authors tested how well the large language model GPT-4 could replicate qualitative and quantitative marketing research projects conducted in 2019 by a Fortune 500 food manufacturing company and its market research partner.</p>
<p><span class="blue">&bull;</span> To replicate the qualitative study, the LLM was used to generate synthetic respondents that matched the profiles of human respondents in the original study. These synthetic respondents were asked a subset of the questions from the original study, and their responses were evaluated and compared against the original human responses by crowd workers on attributes such as depth, clarity, and insightfulness.</p>
<div class="callout-toggle">
<p><span class="blue">&bull;</span> The LLM and experienced human analysts from the partner company then conducted separate thematic concept analyses on the original human response transcripts, and their findings were compared in a blind evaluation by senior qualitative researchers.</p>
<p><span class="blue">&bull;</span> To replicate the quantitative study, which asked respondents to rate pet food product concepts, the LLM was used to generate synthetic responses to the same questions based on the demographic and screening data from the original study’s participants. The synthetic data was then compared with the original study’s results.</p>
<p><span class="blue">&bull;</span> Additionally, the authors conducted semistructured interviews with five industry leaders affiliated with the Marketing Leadership Institute at the Wisconsin School of Business to contextualize their findings: Chauncey Holder (senior expert, McKinsey), Chuck Hwang (vice president of analytics and insights, Procter & Gamble), Lisa Gudding (president, Ipsos), Paul Metz (CEO, C+R Research), and Kajoli Tankha (senior director of consumer, brand, and AI insights, Microsoft).</p>
</div>
</aside>
<h3>Generate Consumer Insights With Synthetic Digital Twins</h3>
<p>An important way in which LLMs enable data generation for consumer insights is through the use of digital twins. A digital twin is a synthetic, data-driven representation of an object or process that enables simulation and what-if experimentation at low cost. Fields such as drug discovery, climate science, and supply chain management were using digital twins well before the rise of LLMs.</p>
<p>In marketing, LLMs are enabling the use of consumer digital twins — personas that can simulate decision-making, preference shifts, and responses to marketing stimuli — as testbeds for premarket experimentation.<a id="reflink3" class="reflink" href="#ref3">3</a> Instead of waiting for new data collection, analysts can simulate concept tests, assortment decisions, pricing moves, or campaign reactions in silico before making a significant financial commitment.</p>
<p>AI market research companies like Evidenza and academic initiatives such as Columbia University’s digital twin data set highlight the growing ecosystem around AI-driven consumer emulation.<a id="reflink4" class="reflink" href="#ref4">4</a> Evidenza partnered with a German information and communications technology company to study whether B2B buyers would trust the company to handle cybersecurity and cloud infrastructure for sensitive data. The research team used synthetic samples of decision makers to simulate a study and quickly test hypotheses around spending trajectories, the products most likely to drive vendor switching, and other questions. Validation against an existing human survey revealed strong correlations (0.75-0.88) across metrics, confirming that the synthetic samples provided directionally accurate insights. The synthetic approach enabled the B2B company to obtain valuable input at a fraction of the time and cost of traditional marketing research.</p>
<p>Consumer digital twins can be generated from a variety of demographic, psychographic, and behavioral data drawn from internal and external sources that companies may have access to. To generate digital twins in our study, we obtained detailed profiles of respondents in our research partner’s original study, including their demographics and product use. We then prompted the LLM by providing it with the research context and the persona we wanted it to assume based on a human respondent’s profile. Finally, we asked it to perform a task, such as giving a detailed answer to an open-ended question or picking from multiple response options for a survey question. We generated hundreds of synthetic respondents in that manner using an LLM API.</p>
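<p>As an illustration only, the minimal sketch below shows one way such a persona-conditioned generation call might look. It assumes the OpenAI Python client; the persona fields, product context, question, and model choice are hypothetical placeholders rather than details from the study.</p>
<pre><code>from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical respondent profile; a real study would load hundreds of these.
persona = {
    "age": 42,
    "household": "two adults, one dog",
    "segment": "premium pet food buyer",
    "purchase_frequency": "monthly",
}

system_prompt = (
    "You are taking part in a marketing research interview. "
    f"Assume this persona and answer in the first person: {persona}"
)
question = "What matters most to you when choosing a new dog food, and why?"

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model would work for the sketch
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ],
    temperature=1.0,  # keep some respondent-to-respondent variety
)
print(response.choices[0].message.content)
</code></pre>
<p>Looping a call like this over a file of respondent profiles is what produces a synthetic panel; the prompt wording, model, and sampling settings all warrant the validation checks discussed later in this article.</p>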
<p>Our study found that LLMs can generate high-quality, information-rich qualitative data. Both LLM- and human-generated data look and feel remarkably similar, although LLM responses are superior in terms of depth and insightfulness, since they are not constrained by time or by a limited willingness to elaborate. They can also help reach niche or hard-to-reach segments, thus complementing human respondents in meaningful ways. For quantitative survey research, we found that an LLM replicates the direction and magnitude of the human answers well.</p>
<p></p>
<p>Additionally, our findings revealed that digital twins add significant value in developing the research process itself. An LLM can be used to generate synthetic response data to a survey before it is administered to human respondents. By turning the typical research flow on its head, this “backward” marketing research approach allows researchers to test their survey design before fielding it.<a id="reflink5" class="reflink" href="#ref5">5</a> They can examine the synthetic survey results to answer fundamental questions, such as what quality of insights the survey is likely to reveal and which questions could be removed or added. In some circumstances, synthetic data may even obviate the need to conduct the survey; this could occur, for example, when one concept clearly dominates all of the concepts tested, or when the main insight from the survey is not new.</p>
<p>The gains from digital twin data are likely to be higher for hard-to-reach respondents, such as doctors or senior managers. Decision makers would much rather work with data from digital twins than have no data at all for these hard-to-reach groups. An attractive aspect of digital twins is that they do not get tired or have time constraints and can provide lengthy answers for many questions.</p>
<p>In addition to generating useful data, LLMs can be helpful in collecting and analyzing unstructured data from human or synthetic participants.</p>
<p></p>
<h3>Unlock Qualitative Research at Scale</h3>
<p>The traditional model for conducting marketing research is to begin with unstructured qualitative research (such as ethnographies, in-depth interviews, or focus groups) involving a small number of respondents and use it as the foundation for a large sample survey. Because unstructured, qualitative data involves a small sample size, is labor intensive, and is therefore expensive to collect and analyze, companies have historically relied more heavily on survey data. However, LLMs are proving to be useful in making qualitative data much easier to collect and analyze.</p>
<p><strong>AI as the data collection engine. </strong>An impressive use case for generative AI in data collection is as an interviewer of human respondents, where it is used to perform three key tasks (a minimal code sketch of this loop follows the list):</p>
<ul>
<li><strong>Interviewer:</strong> The LLM follows a discussion guide to ask specific questions.</li>
<li><strong>Scorer:</strong> The LLM then evaluates the human answer against metrics such as clarity and depth, and provides a score on a scale of 1-100.</li>
<li><strong>Prober:</strong> If the evaluation score is below a preestablished threshold, the LLM asks the respondent to elaborate further.</li>
</ul>
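<p>As a rough illustration of how these three roles might fit together, the sketch below wires them into a single loop. It assumes the OpenAI Python client; the scoring rubric, threshold, follow-up prompt, and answer-collection callback are hypothetical placeholders, not the protocol used in the study or in any commercial tool.</p>
<pre><code>from openai import OpenAI

client = OpenAI()
PROBE_THRESHOLD = 70  # hypothetical cutoff on the 1-100 scale
MAX_PROBES = 2        # cap on follow-up questions per topic

def ask_llm(prompt: str) -> str:
    out = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return out.choices[0].message.content

def score_answer(question: str, answer: str) -> int:
    # Scorer: rate the answer's clarity and depth on a 1-100 scale.
    rating = ask_llm(
        "Rate the following interview answer for clarity and depth on a "
        f"1-100 scale. Reply with a number only.\nQuestion: {question}\nAnswer: {answer}"
    )
    digits = "".join(ch for ch in rating if ch.isdigit())
    return int(digits) if digits else 0

def interview(question: str, get_human_answer) -> list[str]:
    # Interviewer: ask the scripted question from the discussion guide.
    transcript = [get_human_answer(question)]
    probes = 0
    # Prober: ask for elaboration while the score stays below the threshold.
    while score_answer(question, transcript[-1]) < PROBE_THRESHOLD and probes < MAX_PROBES:
        follow_up = ask_llm(
            f"The respondent answered: '{transcript[-1]}'. Write one short, "
            "neutral follow-up question asking them to elaborate."
        )
        transcript.append(get_human_answer(follow_up))
        probes += 1
    return transcript
</code></pre>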
<p>This three-step approach is not limited to conducting interviews with humans; it can also be applied to generating synthetic data. In testing this idea, we determined that synthetic data from AI-moderated interviews preserves the meaning and essence of human-generated data. Importantly, in an independent evaluation, human raters scored the AI-generated data significantly higher on measures of depth and insight.</p>
<p>AI-moderated interviews are powerful additions to a marketing researcher’s toolkit and permit data collection for qualitative research at scale. Unlike a human moderator, an AI moderator can collect detailed unstructured data (video, audio, or text) from many respondents across the globe, and at a fraction of the cost of a traditional in-person in-depth interview. Although an experienced human moderator may be better at reading respondents’ tone, body language, and visual cues, the advantage of AI moderators is the ability to quickly conduct interviews at scale, across geographical boundaries. AI moderators may offer an additional advantage in situations where humans feel uncomfortable talking about a product because of social desirability biases or fear of judgment.</p>
<p>Suppliers such as Outset and Nexxt Intelligence have commercially available products with AI-moderated functionality for conducting interviews. In one <a href="https://outset.ai/resources/stories/how-hubspot-ran-100-interviews-in-days-with-outset-and-shaped-their-ai-roadmap" target="_blank" rel="noopener noreferrer">case study</a>, Outset claimed to have completed 100 interviews in just a few days — a task that normally would have taken weeks. The resulting qualitative data revealed problems its client had not known existed and helped shape messaging for its brand campaigns. The AI moderator approach also gave the client the ability to conduct research continuously rather than just once or twice a year.</p>
<p><strong>AI as the analysis engine. </strong>The traditional approach to qualitative data analysis is largely manual and performed by expert analysts, who sort through large volumes of unstructured text and audiovisual data. The analysis task for text data, for example, involves thematic concept analysis, which includes reading the text, excluding fillers, highlighting key phrases or sentences, clustering them into related concepts or themes, iterating to remove repetitive ideas, and consolidating the themes into a concise summary. Our research finds that LLMs have made many of these analysis tasks easier to perform without sacrificing quality.</p>
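<p>To make the idea concrete, the sketch below shows one way an LLM could take a first pass at thematic concept analysis and return themes in a machine-readable form. It assumes the OpenAI Python client and JSON-mode output; the excerpts and the theme schema are hypothetical placeholders.</p>
<pre><code>import json
from openai import OpenAI

client = OpenAI()

# Hypothetical interview excerpts; a real project would load full transcripts.
excerpts = [
    "I switched brands because the ingredient list finally felt trustworthy...",
    "Honestly, price drives everything for me; the rest is just marketing...",
]

prompt = (
    "You are assisting a qualitative marketing researcher. Read the interview "
    "excerpts below, ignore filler, and return JSON with a 'themes' list; each "
    "theme needs a 'name', a one-sentence 'summary', and 'supporting_quotes'.\n\n"
    + "\n---\n".join(excerpts)
)

result = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # ask for machine-readable output
)
themes = json.loads(result.choices[0].message.content)
for theme in themes.get("themes", []):
    print(theme["name"], "-", theme["summary"])
</code></pre>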
<p>At the process level, we find that humans tend to highlight more sentences than LLMs when analyzing data and that there is significant overlap in the sentences that humans and the LLM highlight as important. LLMs uncover most of the same themes that humans do and identify new themes that humans do not. Overall, LLMs are comparable to humans in identifying key ideas, grouping them into themes, and summarizing them. In practice, suppliers such as Voxpopme offer excellent tools to analyze multimodal (video, audio, and text) qualitative data. In one case study, Voxpopme claimed a 30% to 50% reduction in the cost of qualitative research projects, a 50% increase in the use of existing research insights, and an impressive 60-times-faster research analysis.</p>
<p></p>
<p>AI-enabled marketing research makes it possible to conduct both qualitative and quantitative research at scale. This was previously infeasible with traditional qualitative research (small samples, deep insights) and quantitative research (large samples, broad insights). Given LLMs’ effectiveness, low cost, and ease of use, we expect that they will play an increasingly critical role during the data collection and analysis stages for unstructured data. Companies, in turn, are quickly discovering how much more they can do with unstructured data than was previously possible.</p>
<p>In addition to traditional qualitative research data (from in-depth interviews and focus groups, for example), there is also rich information in unstructured data such as online reviews, call center transcripts, and social media posts. Chauncey Holder, a senior expert at McKinsey, noted that “AI agents can interrogate multimodal data — like social media, category features, and behavioral signals — to uncover unmet needs and emerging trends, identifying white-space opportunities more efficiently than traditional methods.” The inability to mine this information-rich data quickly and inexpensively was a constraint for marketing researchers because past natural language processing models relied heavily on expensive, labor-intensive human labeling.<a id="reflink6" class="reflink" href="#ref6">6</a> Pretrained LLMs have changed this by enabling low-cost semantic summarization, topic extraction, sentiment classification, and narrative insight generation from massive multimodal data far more easily than previously available tools could. This change marks a massive shift in how the field of marketing research can unlock the value of unstructured data to inform business decisions.</p>
<h3>Connect Siloed Data Using Retrieval-Augmented Generation</h3>
<p>Although today’s LLMs have an impressive set of capabilities, their performance on complex tasks that require domain knowledge (in-house marketing research by a brand, for example) can be limited. For situations in which the LLM lacks the requisite information, <a href="https://sloanreview.mit.edu/article/a-practical-guide-to-gaining-value-from-llms/">retrieval-augmented generation (RAG)</a> is a cost-effective method that can improve its output quality. RAG incorporates information from an external knowledge source, such as a company’s existing qualitative data, as input <em>in addition </em>to the user prompt.</p>
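<p>In spirit, a RAG pipeline is simply “retrieve, then generate.” The minimal sketch below illustrates the pattern over a handful of in-house research snippets; it assumes the OpenAI Python client for both embeddings and generation, and every snippet, model name, and question is a hypothetical placeholder rather than a recommended setup.</p>
<pre><code>import numpy as np
from openai import OpenAI

client = OpenAI()

# Hypothetical in-house research snippets standing in for siloed data sources.
knowledge_base = [
    "Brand tracker: ingredient safety is the top stated driver among premium buyers.",
    "Focus group notes: price sensitivity spikes in multi-pet households.",
    "CRM analysis: repeat buyers respond best to subscription discounts.",
]

def embed(texts):
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in out.data])

doc_vectors = embed(knowledge_base)

def answer_with_rag(question: str, top_k: int = 2) -> str:
    # Retrieve: rank snippets by cosine similarity to the question.
    q_vec = embed([question])[0]
    sims = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n".join(knowledge_base[i] for i in np.argsort(-sims)[:top_k])
    # Generate: pass the retrieved context to the LLM alongside the question.
    out = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": (
            "Answer using only the research context below.\n"
            f"Context:\n{context}\n\nQuestion: {question}"
        )}],
    )
    return out.choices[0].message.content

print(answer_with_rag("What drives premium buyers' brand choice?"))
</code></pre>
<p>Production systems typically replace the in-memory list with a vector database and add the preprocessing and retrieval-quality safeguards discussed below.</p>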
<p>In our own research, we had mixed results when generating synthetic survey data using an LLM alone (without RAG). Although the LLM correctly captured the direction and magnitude of consumer attitudes, it exhibited two key weaknesses evident in many basic AI applications. First, the responses lacked heterogeneity; there was less variation in the AI’s answers compared with the human data. Second, the LLM answers lacked the internal consistency found in human answers; for example, the LLM’s answers did not rate attributes such as “healthy ingredients” and “safest food” similarly, as humans would. Both of those shortcomings were partially overcome when we used RAG to draw on existing qualitative data.</p>
<p>More broadly, RAG can be particularly useful for marketing research, where managers rely on multiple external information sources for decision-making. Effectively integrating siloed insight streams is a challenging task for marketing organizations: Survey trackers, customer relationship management (CRM) systems, social listening, and third-party intelligence rarely “speak” to one another in a cohesive way. LLMs using RAG offer “connective tissue” across disparate sources and enable cross-source synthesis. RAG can also be used to integrate multiple sources of information — such as in-house CRM, survey, and demographic data — and create an AI-enabled chatbot, or persona bot, that brand managers can use to gain a deeper understanding of their customers.</p>
<p>Lisa Gudding, president of strategic growth at consulting firm Ipsos, echoed the argument above, adding that “companies are now blending their own behavioral data with syndicated studies and trend signals that we supply to build richer, more dynamic insight ecosystems. This shift has given rise to data as a service [DaaS], where AI is enabling a new kind of consultative intelligence.” Market Logic and Stravito are two examples of DaaS-based knowledge management companies that integrate multiple sources of information to deliver insights to market researchers.</p>
<p>Although RAG is useful for integrating siloed, multimodal marketing data, it is not without limitations. First, it faces scalability challenges where retrieval accuracy and processing speed degrade as the knowledge base gets very large. Second, the inherent complexity and inconsistency of integrating real-time, multiformat marketing data require extensive preprocessing, which can restrict the volume and fidelity of information the LLM can effectively use. Finally, if the retrieval mechanism identifies information that is incomplete, is irrelevant, or lacks proper context, the quality of insights will be compromised, regardless of how good the LLM’s generative capabilities are.</p>
<p>On this issue, Chuck Hwang, vice president of analytics and insights at Procter &amp; Gamble, observed that “some of the knowledge created, especially in marketing and research, is not fully preserved [and] is often embedded in slide decks or shared verbally, making it difficult for AI to fully capture the institutional context.” Therefore, the effectiveness of a RAG system depends on the underlying information retrieval architecture and data completeness. When these infrastructural and data quality challenges are successfully addressed, this knowledge integration aspect of generative AI can prove to be a source of significant value creation.</p>
<h3>Human Oversight Is Essential</h3>
<p>While we see immense value in using AI for both qualitative and quantitative research, we find it essential to underscore that humans are still the drivers of the insight-generation process.</p>
<p>At the data collection phase of qualitative research, companies can design human-AI teams to generate insights efficiently and effectively. LLMs are excellent assistants that can take the first pass at analyzing vast amounts of text and audiovisual data. This gives the experts time for higher-order tasks, such as ensuring that the insights answer the research questions. In our research, we found that more unique insights emerged from AI-human hybrids than from the human-only or LLM-only approaches. Experienced qualitative researchers and LLMs complement each other well.</p>
<p>Much along the same lines, in quantitative survey research, an LLM can rapidly generate a strong first draft of a survey that can serve as an efficient starting point in the design process. A human expert can begin with this draft survey and perform tasks like adding skip logic and programming instructions, and assessing respondent experience, before signing off on the final version. In this reimagined research pipeline, the LLM focuses on the laborious, repetitive, and uninteresting tasks while the human expert uses the time saved to think more creatively about the business questions to be answered and the quality of the insights the research should deliver.</p>
<p>As Microsoft senior director of consumer, brand, and AI insights Kajoli Tankha noted, “In our own work, GenAI has become a powerful collaborator — accelerating synthesis, enabling scale, and broadening what teams can take on. At the same time, human expertise remains essential for framing the right questions and translating outputs into insight.”</p>
<p>As with any disruptive innovation, we encourage companies to be thoughtful and strategic when adopting LLMs for marketing research. To calibrate and uncover the true value of an LLM for their business, companies should run multiple validation checks before fully embracing LLM-generated outcomes. Such a test-and-learn approach may reveal areas in which an LLM shines and those in which it is inappropriate.</p>
<p>Researchers must develop AI literacy so that they know how to prompt, evaluate, and govern models, and their companies must implement quality guardrails, bias checks, and strict protocols for working with AI. The adoption of generative AI increases the value of human judgment by elevating the researcher to the role of curator of truth rather than just a producer of tables, graphs, and slide decks.</p>
<h3>GenAI and Marketing Research: Implementation Risks and Considerations</h3>
<p>Like any technology, generative AI comes with significant negative externalities. Many are structural (such as intellectual property violation, impact on climate, and job displacements) and outside the scope of this article, but others are squarely related to marketing research and deserve full consideration within the insights function.</p>
<p>First, LLMs are prone to gender, race, and cultural biases because of the data on which they are trained. Modern-day marketing researchers should be trained to spot these limitations when incorporating LLMs into the research pipeline. This issue further reinforces the need for critical human oversight in marketing research.</p>
<p></p>
<p>Second, LLMs make it much easier to produce not only good marketing research but also credible-looking marketing research of low quality. Most of the experts with whom we spoke expressed concern about the marketing industry’s growing appetite for speed at the expense of truly meaningful insights.</p>
<p>Third, there is some early evidence of entry-level job losses in marketing because of AI.<a id="reflink7" class="reflink" href="#ref7">7</a> The tasks that can most easily be automated by LLMs have historically served as training opportunities for junior talent. Most of the experts with whom we spoke echoed concerns about AI’s impact on the talent pipeline. Without hands-on experience in tasks that AI can automate, they noted, emerging talent may struggle to develop the deep analytical thinking and contextual judgment required to interpret data meaningfully and challenge assumptions.</p>
<p>Finally, although digital twins have a tremendous upside, they could be misused to generate fraudulent data that is hard to detect. For example, human respondents to online surveys could use LLMs to generate realistic answers in order to earn compensation.</p>
<p>Although the risks outlined above are real, they can be mitigated through the rigorous oversight and AI literacy we advocated for earlier. GenAI is a powerful ally of marketers, and the next generation of marketing research will be defined by a symbiotic partnership led by humans and fully supported by AI.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/gain-consumer-insight-with-generative-ai/feed/</wfw:commentRss>
				<slash:comments>1</slash:comments>
							</item>
					<item>
				<title>Disintegrating the Org Chart: ServiceNow’s Jacqui Canney</title>
				<link>https://sloanreview.mit.edu/audio/disintegrating-the-org-chart-servicenows-jacqui-canney/</link>
				<comments>https://sloanreview.mit.edu/audio/disintegrating-the-org-chart-servicenows-jacqui-canney/#respond</comments>
				<pubDate>Tue, 07 Apr 2026 11:00:48 +0000</pubDate>
				<dc:creator><![CDATA[Sam Ransbotham. <p><cite>Me, Myself, and AI</cite> is a podcast produced by <cite>MIT Sloan Management Review</cite> and hosted by Sam Ransbotham. It is engineered by David Lishansky and produced by Allison Ryder.</p>
<p><a href="https://sloanreview.mit.edu/sam-ransbotham/">Sam Ransbotham</a> is a professor in the information systems department at the Carroll School of Management at Boston College, as well as guest editor for <cite>MIT Sloan Management Review</cite>’s Artificial Intelligence and Business Strategy Big Ideas initiative.</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Cognitive Technologies]]></category>
		<category><![CDATA[Employee Experience]]></category>
		<category><![CDATA[Employee Motivation]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Organizational Behavior]]></category>
		<category><![CDATA[Skills & Learning]]></category>
		<category><![CDATA[Talent Management]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>

				<description><![CDATA[In this episode of the Me, Myself, and AI podcast, Sam Ransbotham is joined by Jacqui Canney, chief people and AI enablement officer at ServiceNow. Jacqui outlines how the software company has embedded AI agents into processes like employee onboarding to automate tasks, personalize experiences, and free up people’s time to focus on higher-value work. [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<p>In this episode of the <cite>Me, Myself, and AI</cite> podcast, Sam Ransbotham is joined by Jacqui Canney, chief people and AI enablement officer at ServiceNow. Jacqui outlines how the software company has embedded AI agents into processes like employee onboarding to automate tasks, personalize experiences, and free up people’s time to focus on higher-value work. She emphasizes that successful adoption of artificial intelligence requires strong change management, workforce training, and a focus on talent — not just technology — including companywide AI skill assessments and personalized learning paths. Tune in to learn why Jacqui sees AI as a human capital opportunity.</p>
<aside class="callout-info">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/MMAI-S13-E3-Canney-ServiceNow-headshot-600.jpg" alt="Jacqui Canney"></p>
<h4>Jacqui Canney, ServiceNow</h4>
<p>Jacqui Canney is the chief people and AI enablement officer at ServiceNow, where she leads the enterprise software company’s talent strategies for improving employees’ experience and preparing them for the future workforce through the use of technology and generative AI.</p>
<p>Before joining ServiceNow in 2021, Canney served as chief people officer at WPP and Walmart. She previously worked at Accenture for 25 years. Canney currently sits on the board of directors for food delivery platform Wonder and nonprofit Project Healthy Minds. She’s also on the Institute for Corporate Productivity’s Chief HR Officer Board and Boston College’s board of trustees, and she cochairs the Boston College Wall Street Business Leadership Council.</p>
</aside>
<p>Subscribe to <cite>Me, Myself, and AI</cite> on <a href="https://podcasts.apple.com/us/podcast/me-myself-and-ai/id1533115958" target="_blank" rel="noopener">Apple Podcasts</a> or <a href="https://open.spotify.com/show/7ysPBcYtOPVgI6W5an6lup" target="_blank" rel="noopener">Spotify</a>.</p>
<h4>Transcript</h4>
<p><strong>Allison Ryder:</strong> We hear a lot about using agents for workflows. One company has 80,000 active workflows and believes it’s making innovation, employee experience, and other aspects of its business better with AI. Learn more on today’s episode. </p>
<p><strong>Jacqui Canney:</strong> I’m Jacqui Canney from ServiceNow, and you’re listening to <cite>Me, Myself, and AI</cite>.</p>
<p><strong>Sam Ransbotham:</strong> Welcome to <cite>Me, Myself, and AI</cite>, a podcast from <cite>MIT Sloan Management Review</cite> exploring the future of artificial intelligence. I’m Sam Ransbotham, professor of analytics at Boston College. I’ve been researching data, analytics, and AI at <cite>MIT SMR</cite> since 2014, with research articles, annual industry reports, case studies, and now 12 seasons of podcast episodes. In each episode, corporate leaders, cutting-edge researchers, and AI policy makers join us to break down what separates AI hype from AI success.</p>
<p>Hi, listeners. Thanks again for joining us. Today I’m talking with Jacqui Canney. She’s the chief people and AI enablement officer at ServiceNow. She leads all talent strategies for the company’s rapidly growing global workforce. We’ve known each other for a few years, and I’m glad the timing finally worked out for us to talk with our microphones on. Jacqui, thanks for joining us. </p>
<p><strong>Jacqui Canney:</strong> Thank you. Thank you, Sam, for having me. I’m really excited for this conversation. </p>
<p><strong>Sam Ransbotham:</strong> [It’s] going to be fun. Let’s start with ServiceNow. It’s huge, [an] S&P 100 [company], but some listeners might not be familiar with all that the company does. Can you give us a bit of background? </p>
<p><strong>Jacqui Canney:</strong> Sure. I’ll start with what our purpose is, which is to put AI to work for people. At that core, we are the AI platform for business transformation. If you think about automated workflows, you think about the ability to drive your business results, [and] it comes down to how you direct work. Our platform is literally built on AI so that we can help companies in — I think it’s now 80 billion — workflows that we manage that produce either better service, more analytics, all the things that companies are seeking to do with their organizations. I was a customer of ServiceNow, so that brought me to be really excited about working here, too. </p>
<p><strong>Sam Ransbotham:</strong> You really led with AI right there. How did that happen? We’re just [a] relatively few years into this whole AI world. How do you have 80 billion [workflows]? I thought for a second, that seemed like a huge number. How do you have that many workflows using AI already? </p>
<p><strong>Jacqui Canney:</strong> We have a very innovative company. It’s 22 years old, I want to say, and was built on how to help people experience work better. Fred Luddy, our founder, built the first workflow for a colleague who was struggling with the swivel chair of getting work done and Excel spreadsheets, etc. So at our core, innovation has been something that we’ve always tackled. You’ve seen the movement — analog to digital, [on-premises] to cloud, cloud to mobile, now this conversation to AI — and ServiceNow has had these amazing engineers and product leaders who’ve been thinking about this for a long time, even before people talked about ChatGPT. </p>
<p><strong>Sam Ransbotham:</strong> Maybe give us an example. What is one of these 80 billion [workflows], and how is artificial intelligence involved in that? </p>
<p><strong>Jacqui Canney:</strong> I’ll take one in my area that I see a lot. When somebody gets hired to work here, there [are] lots of steps to onboard people. That can be a lot of conversations. It can be different managers, different departments. But with our onboarding platform, you say, “Hey, this is the person [who’s] starting. This is the kind of computer that they want. This is the kind of cellphone that they need. This is the training they need to have happen, the proof of identity so that they can be paid, that they get paid, that they show up, and they’re feeling productive” before they even start on that day. And then [you include] what happens post that onboarding because there [are] follow-ups, [such as] reminding a manager, “Hey, so-and-so started 10 days ago. Why don’t you check in?” Or, “So-and-so got their first kudos, a recognition, [so] why don’t you check in and see how they’re doing?” It’s an automated workflow that takes [out] the guessing and makes the manager and the employee really feel a relationship right at the gate, that’s personalized. </p>
<p><strong>Sam Ransbotham:</strong> In that process, then, where was artificial intelligence, or how does that fit into all those steps? </p>
<p><strong>Jacqui Canney:</strong> You can have an agent [that] if I say, “I want a MacBook,” it makes the order. The agents get the order done. The agents get the order shipped to your house. Agents [are] working in the background while people are able to focus on what they need to, which is welcoming this great new employee. </p>
<p><strong>Sam Ransbotham:</strong> That seems like a good separation of tasks, the classic getting rid of the dirty, dull, and dangerous parts [in favor of] the things that humans are better at. Tell me a little bit more about how you would organize a process like that. I think I would be tempted to get whatever computer or phone I wanted without oversight perhaps. How do you integrate that? </p>
<p><strong>Jacqui Canney:</strong> It’s a really great question because it does bring it down to [the] practical, like, how do you get this work done? There’s governance built into the platform. You’re creating that governance as a leader when you implement the technology. Price points, options, whatever it is that your company is governing, get embedded into the choices. But also, there’s design, which is something that maybe not everybody thinks about when you talk about platform technology. But designing the experience is equally as important so that it’s not just about, “Here’s what the CIO is trying to get done. Here’s what procurement is trying to get done. Here’s what HR is trying to get done.” But [by] putting the person at the center — the manager and the employee — and designing a process that’s really great for them — and we also have it so you could do it on your phone — at its core … the right governance [is] around it. </p>
<p>Then, if something goes wrong, because that can happen too, what’s the feedback loop if the wrong computer came, or it didn’t come in time? Or [how can we] get the signal so that we can continue to improve our process, and certainly find where a process flow might break down so that [we] can correct that in the tech? </p>
<p><strong>Sam Ransbotham:</strong> That makes sense. Let’s go back to your new-hire example. How much do people know that artificial intelligence is involved in this process? Or where is it obvious, and where is it not obvious? </p>
<p><strong>Jacqui Canney:</strong> It’s becoming less obvious, is what I would say. We’ve acquired a company called Moveworks, which is in and of itself a front-door conversational experience. </p>
<p>Earlier versions of our platform would feel potentially more like I’m interacting with technology. I’m searching. I’m getting directed to [knowledge base] articles, things that were all easier [but] not perfectly seamless. Now this conversational layer, which we’ve implemented for all our people, is like going to search. You go to it and say, “Hey, I’m meeting with Sam. What was the last meeting that we had?” It’s literally having this conversation. So I think it’s becoming less clear if you’re talking to a person or you’re talking to tech, which is making it really easy to get to the answers that you want. </p>
<p><strong>Sam Ransbotham:</strong> Actually, one of the things I think about — and maybe this is just my own personal weirdness — [is] I feel like I interact with people differently than I do with machines. For example, if I was talking to you about getting a computer, I might say, “Oh hi, Jacqui, how are you doing? It sure is snowy here. It’s really cold. I was thinking about getting a computer.” On the other hand, if I was talking to a machine, I might be a little bit more brusque and say, “Buy machine now.” Maybe the robot overlords will come back and get me for that. But it seems like there could be some efficiency in being transparent: Hey, you’re talking to a machine; you can drop the conversation about the weather, perhaps, or the social glue. </p>
<p><strong>Jacqui Canney:</strong> It’s funny. You can sort of have social conversations with the machines, too. It can recognize if you’re stressed or in a hurry [by] the tempo of our voices, and it directs to responding in that way. You also can find a way out, to talk to a person. You can click through to get to a person. That way, you can get out of whatever chain of conversation that you’re in. </p>
<p>One thing you bring up, though, that I do worry a little bit about us as humans: If we are abrupt with the machine, are we going to forget and be abrupt with each other [when] we’re talking to [another] human? I think that’s at the core of what I’ve been spending a lot of my time on; there’s a lot of technology talk. There [are] 80 billion workflows just with us. But without getting the change management of the users right, whether they’re your employees or your customers or the end users of your technology … that’s what I’ve been thinking about. </p>
<p><strong>Sam Ransbotham:</strong> I haven’t thought about the spillover the other way, but that’s a good point, that maybe I’m becoming brusquer to my humans. Well, now I’ve got a new thing to worry about. </p>
<p>How much do these employees need to know about artificial intelligence? What’s your thinking on how much awareness people need to have of these technologies in order to be successful? </p>
<p><strong>Jacqui Canney:</strong> We’ve invested quite a bit in this space. Every person who works here — we’re 30,000 people now — has had AI training, and we’ve been doing this for a couple of years. One, because the products we build, no matter what part of the company you’re in, understanding what AI is, [having] a common vocabulary about that, that was really important to our CEO and our leadership team for the company. </p>
<p>We’ve invested [in] having, from speakers to AI Day to different kinds of training, and we’ve evolved quite a bit now, where we’ve assessed the whole company on AI skills, and it’s not like one size fits all. Different roles have different expectations and different experiences, so we’ve customized the assessments and built personalized learning journeys so that people can grow their skills. And we’ve seen our organization really lean in and be excited about that. </p>
<p>We also celebrate people who use AI tools really frequently because they’re learning from each other. I want to eliminate as much fear in the workforce about what AI is and what we’re using it for, and how we can use it in the future. I think by being transparent, by offering opportunities, by giving people learning experiences, even for myself, I’ve been seeing more confidence grow. We ask our people all the time how they are feeling. They feel pretty strongly that they’re getting the tools that they need. So we’re going to keep at it. </p>
<p><strong>Sam Ransbotham:</strong> There [are] like four or five things that I wanted to follow up on there. You mentioned lots of good topics. Maybe the first one I’ll start with is: How much do people need to know? Vocabulary, I think, was one of the things you mentioned, which makes sense. We need to be able to talk about technology in ways that make sense, to communicate with each other, but what are these skills that people are trying to pick up on?</p>
<p><strong>Jacqui Canney:</strong> Prompt engineering is something we all have been talking about. It is not something we talked about that long ago, right? You have a team like in my organization, which is a human resource people team, and we have implemented, obviously, our own tech, and we were able to come to double the productivity of what my team could do. It was 1-to-400 to 1-to-900 that we were serving because of the tech. Now, I didn’t want people to be displaced because of that. But then they became better at a couple of things. One is prompt engineering so that they could help create better questions that they’re asking so that we can get better answers and then train AI so that it continues to be better answers. Over 90% of our inquiries that go to our Now Assist, which is our own tech, get answered by the tech. </p>
<p>The more we can make that smarter and better, the more people will be happier to use that. And then we also created new roles. [These are] adjacent skills that I’ve seen the team lean into. We have product engineers and product designers inside HR. We didn’t have that before. We’ve built a new role called forward-deployed engineer, which is somebody who is quite technical but has an interest and a desire, and is really great at talking about business problems and business transformation, and marrying those conversations together. </p>
<p>So you can imagine talking to an HR lead [or] a CIO somewhere out there using our tech, and they know they have this problem they want to solve or this opportunity to fix. Now we’ve built a workforce that can go meet with that team, talk about their problem, and then say, “Here’s how we suggest the technology can solve the problem,” versus saying, “Here’s the technology. Work around it, and work it into your solution.” It’s more in service of the human. </p>
<p><strong>Sam Ransbotham:</strong> Those are some interesting numbers, like the 1-to-400 to 1-to-900, and your first reaction would be “OK, yeah, that’s going to lead to reduction.” But as you point out, there [are] just a bunch of new tasks that are coming up and new roles that are coming up as quickly as maybe whack-a-mole. You’re trying to eliminate some work, and new work is getting created. </p>
<p>What’s your sense of the net? If we’re moving from reducing things that people are needing to do, by the two-to-one-ish type of number that you mentioned, but you mentioned new roles, too. It seems like a big deal if that is a one-to-one swap, a one-to-a-half swap, or a one-to-two swap. That’s big. Which direction is it right now? </p>
<p><strong>Jacqui Canney:</strong> A crystal ball would be really good on that one right now. I think every company is tackling it in their own way. I think that, at its core, some companies have gone after this with a cost-cutting lens, and I don’t think that’s the way I would start if someone asked me. I really think the opportunity, as [it] has [been historically], is technology provides capacity and creativity, hopefully, or new adjacent business lines, the things that can grow. I’ve seen it not just here at ServiceNow but even in my old job at Walmart, where you could see where you implement this powerful tech, but it does create expansion. The hard work is the work redesign that has to happen. And that’s where leaders, CEOs, chief people officers really should be spending their time, because I think whether it’s a one-to-one or you’re flat or you’re growing, you’ve got to design that future. And if you don’t design it, you’ll lose the capacity. </p>
<p><strong>Sam Ransbotham:</strong> I think I was too sort of crude to say, “Is it net plus or minus?” I’m sure in many areas it’s plus and [in] many areas it’s minus. And then we’re looking at the net of the net across a big aggregate — the crystal ball is not quite polished enough for that. </p>
<p>I think this training program you mentioned is part of the ServiceNow University. I like the idea that you mentioned the skill assessment as part of that, but at the same time, you also mentioned just a second ago that prompt engineering wasn’t something you were paying attention to a couple of years ago. </p>
<p>So we have the changing skills of people and the changing needs of people. How often are you measuring these things? How are you measuring these things? The details on this seem very difficult with 30,000 people in a rapidly changing world. </p>
<p></p>
<p><strong>Jacqui Canney:</strong> Well, we have jumped on this with all of our selves. The board, our CEO, the leadership team, everybody is fully supportive of the changes that we’re making and that we’re driving inside our own company. This assessment of the 30,000 people was important. I felt like we needed an X-ray of the company to know where we were, to be able to go forward. We didn’t use it as anything scary or a negative. It was really meant to be like we’re all going to get smarter about what we know we have as skills and what we know we’re going to need. </p>
<p>Then if you take what we’re going to need, you’re able to say — and this is with the help of Pearson; they’ve been a good partner to us — “Here [are] the jobs, here [are] the skills, here [is] the new work that you’re planning, and then here [are] the gaps you need to close.” So it’s very personalized, but it’s also how we’re moving our change management through as a company. </p>
<p>I have other HR leaders [who] I really love working with, and we all talk all the time about how they’re tackling it. And I think, commonly, that’s what I’m hearing my peers talk about — how we’re sort of going after it. It’s like your X-ray, your gaps. What can you build? What’s adjacent? Who can you train? Who can you grow? Who do you have to hire? </p>
<p><strong>Sam Ransbotham:</strong> Actually, do you let outsiders take this? I’m ready to sign up because … I screw up a lot of stuff, and [it] can be so nice to know ahead of time. … I always think about this in one incremental hour. If I had one extra hour, what would I do with that hour? Lots of times, I just don’t know what the right thing to learn is or the new thing that would help the most. And I’m fascinated by the promise that these technologies could help us learn about these things. </p>
<p><strong>Jacqui Canney:</strong> ServiceNow University [has] a lot of free courses out there. You can go check it out. I’d love your feedback about it. </p>
<p><strong>Sam Ransbotham:</strong> Great. So you gave me homework. That’s no fun. </p>
<p><strong>Jacqui Canney:</strong> There you go. </p>
<p><strong>Sam Ransbotham:</strong> One of the things you’ve talked about is soft skills. … [For] the idea of a soft skill versus hard skill, first, what are your thoughts on the relative importance of those two types of skills going forward? </p>
<p><strong>Jacqui Canney:</strong> I have always believed that critical thinking, the ability to pattern recognize, those things that you learn, whether it’s through your work, your university, all the experiences that you have, are never more important than they are now. And I know lots of people are talking about that, and it’s not meant to be an easy thing. Not everybody has those skills. But people can be nurtured, I think, to better learn how to create those skills. </p>
<p>One of the things that I’ve been really thinking about is we talk a lot about leadership, and we’ve all talked about leadership for a very long time. But now, more than ever, the ability to find the people [who] have the wisdom is really important. If you’re leading a company or you’re leading a team, it’s never been harder. Everything’s really complex. People are on the road. People are hybrid. We still have some COVID stuff that we’re dealing with. Now you have this really important technology that’s kind of hit everybody’s desk. But at the same time, the world is moving faster than ever. </p>
<p>So how do you have the confidence to literally pattern recognize, have the wisdom to say, “These are the use cases I want to go after,” as opposed to, “These are just the use cases that everybody’s bringing to me”? [Those are the] … really important, nontechnical capabilities we all should be focused on growing. </p>
<p><strong>Sam Ransbotham:</strong> It was interesting. We had Taylor Stockton, who’s a former student of mine, on a previous episode. He works at the [U.S.] Department of Labor, and we were asking [about] hard skills, soft skills. He talked for a bit about soft skills and the importance of that, but then at the same time, he said [that] we also need those technical skills. So what’s your take? If I have one hour this afternoon, should I spend it on developing a soft skill or a hard skill? Or don’t pick on me. [Let’s say] one of my students wanders in here. What’s the one hour? Where do we spend it? </p>
<p><strong>Jacqui Canney:</strong> I might say 30 minutes on what they are curious about with the tech. Is it protocols? I think protocols [are] going to be the next thing [we’ll be] talking about. How do you govern the agents inside a company? That’s really important. Understanding the nature of how you build and create protocols is not something you need to be a computer science person to do. </p>
<p>And then the second is, I think, the ability to drive this critical thinking: I’m absorbing problems. I’m absorbing information. How am I able to take that and process that into an idea or a point of view? I think the world of my university, and that was a lot of how we were taught, not just to be great accountants or great finance people, but also to be great thinkers. Having that be part of what you’re thinking about if you have one hour, I think, is worth it. </p>
<p><strong>Sam Ransbotham:</strong> I have a ton of students who are about to graduate, and they’re talking about difficult job markets. I know you get asked this probably every time someone talks to you, given your role, but what should students who are close to graduation be doing? What should they be thinking about as they enter this job market? </p>
<p><strong>Jacqui Canney:</strong> I think two things are really important. One is, what are the skills that they’re taking out of their university experience? When you go to work at a company, they’re going to teach you a lot. They’re going to teach you how to work. They’re going to teach you a lot about that company, about how they work. But if you can come out of school with one great skill that you’re super proud of: It could be you’re a great writer. It could be you’re a great coder. It could be you are a great speaker. Whatever it is, but really know what that skill is and how you’re going to sell that to an employer that you’re going to work at. You’re probably more AI native than anybody else in the company because of the nature of how you’re growing up and the world that you’re in already. So that’s also on your side. </p>
<p>But the second thing is growth mindset. Demonstrate your ability to learn and change and be agile because I’ve also said, and I don’t have this written down because somebody told me, but the companies [that come out ahead are] not going to be the ones with the best language models but the ones with the most adaptive, agile workforces. So I look for those kinds of qualities, especially [in] the early-in-career talent that I get to meet. </p>
<p><strong>Sam Ransbotham:</strong> I like that. It’s hopeful. I think your point about how well prepared students are — I love job descriptions that have something like, “needs 30 years of experience with large language models” — it’s just not possible. So the students graduating now are just as, or maybe probably more, familiar with this technology than many of us are. … I was thinking about blind spots. You’ve [now worked] at Walmart, WPP, [and] ServiceNow. What are people getting wrong? What are leadership blind spots here when people are thinking about artificial intelligence? </p>
<p><strong>Jacqui Canney:</strong> Well, I think focusing on the tool and not the talent is one of the top things. People really get wrapped up around [questions] like, “What’s my AI strategy?” [but] it’s really your business strategy. Then, how does the business use technology, but certainly, how does it bring its people along with it? That gets missed a lot. … I talked about the cost-cutting exercise; I think people get that wrong when they lead with that. Waiting for a perfect plan is another one I think people get stuck in. I know sometimes even I do, right? It’s like you don’t have this all figured out. Like you said, 30 years of LLM experience — where’s that going to come from? It doesn’t exist yet. </p>
<p><strong>Sam Ransbotham:</strong> I feel seen with that one. </p>
<p><strong>Jacqui Canney:</strong> I think people skip the hard parts. They skip the culture. They skip the trust. They skip the people part. I feel like that’s the stuff that I’ve seen go wrong. </p>
<p><strong>Sam Ransbotham:</strong> I think there’s a lot of ways to screw this up, too. I mean, there [are] a lot more ways to get things wrong than there are to get them right. Your idea of not having a perfect plan to start with feels wrong. I was reading something that … you had AI write a poem for [a] family trip. I was thinking about that. It struck me as funny because we actually, just for a cringe moment, I had my classroom write a theme song for our ML (machine learning) class. What would generative AI say is a good theme song for our class? We did not all recite the class anthem afterward, but you said that surprised you as something that the tool could do. What’s surprising people about what these tools are capable of? What are the things that people are learning aha from these tools? </p>
<p><strong>Jacqui Canney:</strong> I think it’s the ability to be better prepared for X meeting. … We have seen in our sales organization where they have access, obviously, to all the data about our customers, about the work that they’ve been doing. Now, how to prepare for those meetings in minutes and not days has been, I think, really exciting and eye-opening. People are loving that because it’s easier to get to answers quicker. </p>
<p>The other thing that I saw that people were super excited about, especially in our sales organization, [is] it went from like four or five days to find out what your commission is going to be to eight seconds. So if you have a workforce that’s motivated to know that, making that easier has been a great, well-received use of what the technology has been able to do in the day-to-day. I probably could think of a bunch more, but those two come closest to me right now.</p>
<p><strong>Sam Ransbotham:</strong> Actually, I like the quick feedback part because … earlier you were talking about assessing people’s skills, and I was thinking about how in the education world, we do a fair amount of testing. And one of the things I was thinking as you were saying that is that students actually don’t dislike tests.</p>
<p>Now, I’m sure people are freaking out right now as I’m saying that. But people like to get feedback about what they know and what they don’t know. People like quick feedback. This is the same thing with your commission example there. If you do something and you get feedback quickly, then that helps us reinforce it, helps us know what to do better. HR is historically driven by the idea of the annual performance review — 364 days ago, what did I do right or wrong? I don’t learn very well from that. You were mentioning commission, but that’s the example of quicker feedback. Both of those — I’m going to push back a little bit — feel like productivity enhancing, but we said earlier that there’s a bit of a trap of getting too sucked into productivity. Faster meeting preparation, faster readiness is good, faster feedback is good, but both of those feel like productivity. What would be the missing thing that we would want to add to that to make it a non-productivity?</p>
<p><strong>Jacqui Canney:</strong> I think it would mean the sale got better, bigger. If I would have had all the things I maybe before wouldn’t have known, like what did they say on LinkedIn, what’s the stock price doing? There’s an opportunity to not be incremental but to be more impactful. And maybe the sales commission one is a little bit about productivity, but I think it’s also highly motivating. That might get the salesperson to say, “If I could just sell this much more, look at what my commission could be.” And then lean into being better prepared for that. </p>
<p>I think, too, that I’ve seen us think about leadership in a different way that I’m not sure without AI we would have had the capacity to do. We have really stepped up [on] what does it mean to be a leader here? And [we have] invested in that [more] than I’ve ever seen because we know that that’s really the unlock for the organization. I think because of AI maybe creating the capacity, even for my own team, to be able to dream a little bigger about … the future of leadership and this concept of wisdom, I see that opening too. And I would say this lane of opportunity is what we still haven’t figured out yet. What are we going to build? Are we going to build a new business? Are we going to have totally different companies that are created? That’s what I think we’re on the cusp of figuring out. </p>
<p><strong>Sam Ransbotham:</strong> You’ve touched on this. You’re obviously from a human resources background, but you’re talking about a lot of stuff that feels like you’re stepping on some IT toes here. So, [what] is this relationship between these formerly quite separate parts of organizations going to be, as you’re using more of these tools? </p>
<p><strong>Jacqui Canney:</strong> I think AI is disintegrating the org chart, and not just between HR and IT. It’s sort of coming across a bunch of places because it just doesn’t see [it] that way. It doesn’t see silos, right? It sees across. Leaders are having to get comfortable with that. It doesn’t mean that the roles aren’t important. It’s just that they’re changing. </p>
<p>Here at ServiceNow, I was promoted to AI enablement officer, along with the chief people officer role just a little bit over a year ago. That was because [CEO] Bill [McDermott] felt like this is truly a human capital moment. It doesn’t make me in charge of it all. I’m the team captain. I’m not alone. But I have to sort of [keep] score of how we’re doing with that. And I think that says a lot about what he sees as a guy who’s seen across technology for decades of where change really [goes]. </p>
<p>Now our CIO, our product team, we work really closely, and we have agreed that the employee experience sits primarily with me and my team. So how technology, how processes, how policies, how all that impacts the experience, we’re kind of like the filter on it, and we work really closely together. We have a very transparent look at what use cases are in [production] across the company. Who’s driving ROI, who’s not? We have a control tower for that. I think that kind of keeps us all square because we can see very openly what’s happening. But yeah, HR roles are totally evolving. If you’re a [chief human resources officer] who’s really focused on process and policy and annual cycles, the CIO is going to come for you. </p>
<p><strong>Sam Ransbotham:</strong> We have a little segment where we ask quick questions. Just answer [what comes to] the top of your mind. What about artificial intelligence is moving faster or slower than you expected? </p>
<p><strong>Jacqui Canney:</strong> Moving faster in headlines, moving slower in, I’ll say, scalability. </p>
<p><strong>Sam Ransbotham:</strong> Getting something across an organization, I’m sure you think about that a lot. </p>
<p><strong>Jacqui Canney:</strong> Yeah. </p>
<p><strong>Sam Ransbotham:</strong> How are people using AI poorly? </p>
<p><strong>Jacqui Canney:</strong> I think they’re writing poems like I did. </p>
<p><strong>Sam Ransbotham:</strong> All right. There you go. </p>
<p>What do you wish that AI could do better? </p>
<p><strong>Jacqui Canney:</strong> I wish it could … I think it’s getting there, but [it could be] better [at] context and memory. But I think that’s maybe more about how humans are using it. [But how can] I truly make AI be a digital twin of me? I haven’t figured that out yet. </p>
<p><strong>Sam Ransbotham:</strong> Are you finding because of AI you’re spending more time with technology or less time with technology? </p>
<p><strong>Jacqui Canney:</strong> I think it’s just in the flow of work now for me. I’m not really discerning [whether] I am in the tech or not. </p>
<p><strong>Sam Ransbotham:</strong> Well, this has been fascinating. I think one thing we’ll come back [to] is this idea that the use of artificial intelligence is eroding these org charts. I think that’s a really interesting high-level thought to come away from this. Thanks for taking the time to talk with us. </p>
<p><strong>Jacqui Canney:</strong> Thank you, Sam. This was great. </p>
<p><strong>Sam Ransbotham:</strong> Thanks for joining us today. On our next episode, I’ll talk with Peter Koerte, chief technology officer at Siemens, and we’ll talk about industrial AI. Please join us.</p>
<p><strong>Allison Ryder:</strong> Thanks for listening to <cite>Me, Myself, and AI</cite>. Our show is able to continue, in large part, due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/audio/disintegrating-the-org-chart-servicenows-jacqui-canney/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>How to Reap Compound Benefits From Generative AI</title>
				<link>https://sloanreview.mit.edu/article/how-to-reap-compound-benefits-from-generative-ai/</link>
				<comments>https://sloanreview.mit.edu/article/how-to-reap-compound-benefits-from-generative-ai/#respond</comments>
				<pubDate>Mon, 06 Apr 2026 11:00:55 +0000</pubDate>
				<dc:creator><![CDATA[David Kiron and Michael Schrage. <p>David Kiron is the editorial director, research, of <cite>MIT Sloan Management Review</cite> and program lead for its Big Ideas research initiatives. Michael Schrage is a research fellow with the MIT Sloan School of Management’s Initiative on the Digital Economy. His research, writing, and advisory work focuses on the behavioral economics of digital media, models, and metrics as strategic resources for managing innovation opportunity and risk.</p>
]]></dc:creator>

						<category><![CDATA[AI Strategy]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Business Value]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Value Creation]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Minneapolis Institute of Art In domain after domain, AI has compressed work that used to be expensive — generating drafts, code, prototypes, and analyses. The marginal cost of a first attempt has dropped sharply. What remains expensive is what happens after the output arrives: evaluating what gets generated. That involves separating [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Kiron-1290x860-1.jpg" alt="" class="wp-image-126461" /><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Minneapolis Institute of Art</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">In domain after domain</span>, AI has compressed work that used to be expensive — generating drafts, code, prototypes, and analyses. The marginal cost of a first attempt has dropped sharply. What remains expensive is what happens after the output arrives: evaluating what gets generated. That involves separating signals from noise, catching errors, capturing what was learned, and applying those lessons to the next iteration.</p>
<p>This shift changes what organizations should optimize for. The old question was “How do we produce more, faster?” The new question is “How do we systematically learn from, and with, what AI produces?”</p>
<p>Most organizations still overinvest in answering the old question. They treat artificial intelligence as a throughput accelerator: task in, output out, loop closes. This is consumption economics. A serious CFO instantly recognizes the pattern: asset depreciation.</p>
<p>The organizations pulling ahead answer the new question. They treat AI as a capability accelerator: task in, output out. But they also ask, “What worked? What failed? What should change next time?” Insights get captured, converted into shared knowledge, and applied to subsequent interactions. Each cycle makes the next more effective. This is compounding value. Serious CFOs recognize this pattern, too: asset appreciation.</p>
<p></p>
<p>The data bears this out. Organizations that build systematic feedback loops between humans and AI are six times more likely to derive substantial financial benefits from AI, according to research by <cite>MIT Sloan Management Review</cite> and Boston Consulting Group.<a id="reflink1" class="reflink" href="#ref1">1</a> Organizations that invest in learning with AI are 73% more likely to achieve significant financial impact.<a id="reflink2" class="reflink" href="#ref2">2</a>  Yet, as of 2024, 70% of companies had adopted AI, but only 15% were using it for organizational learning.<a id="reflink3" class="reflink" href="#ref3">3</a></p>
<p>Leaders seeking compound returns must build what most companies don’t yet understand, let alone possess: systems that verify AI outputs, evaluate what they reveal, and capture what was learned so that each interaction becomes a building block for the next. This type of ROI with GenAI — return on iteration — doesn’t happen by accident; it requires infrastructure. Let’s examine what that infrastructure looks like.</p>
<h3>Why This Moment Is Structurally Different</h3>
<p>This is not old productivity advice dressed in new rhetoric. Two complementary economic dynamics that reinforce each other in a virtuous cycle make compounding management an imperative. </p>
<p>In his 1966 book <cite>The Tacit Dimension</cite>, philosopher Michael Polanyi observed that humans know more than they can articulate. For decades, that tacit knowledge protected knowledge workers. What could not be explicitly described could not be automated. Tacit expertise was a moat.</p>
<p>AI breaches that moat — not by codifying tacit knowledge but by inferring it from behavioral traces at scale. Large language models (LLMs) absorb how experts actually work, including knowledge the experts never articulated. Legal reasoning in briefs and opinions, financial judgment in analyst reports and trading patterns, strategic thinking in board presentations: As these behavioral traces become more legible to AI models, the tacit expertise embedded in them becomes readable by machines.</p>
<p></p>
<p>Boris Cherny, who led the development of Claude Code, described a revealing moment: After he gave Claude the tools to interact with his file system, the <a href="https://newsletter.pragmaticengineer.com/p/how-claude-code-is-built" target="_blank">AI began exploring the system on its own</a> to find answers. “It was mind-blowing,” Cherny said. He had not programmed that capability. The model inferred how developers work from the traces they had left behind — behaviors that no one had previously formalized.</p>
<p>The second dynamic makes the economic case for compounding even more compelling. In 1865, economist William Stanley Jevons observed that when steam engines became more efficient, coal consumption increased rather than decreased. Efficiency gains made the capability cheaper, stimulating demand. As tacit expertise becomes readable by machines, the cost of sophisticated capability drops dramatically. Projects that were previously too expensive to prototype can proliferate. Iteration cycles that once took months compress to hours. More expertise becomes readable to machines, expanding what AI can access while enhancing the AI’s knowledge base and improving its capability. More capability expands what organizations attempt. The loop feeds itself.</p>
<p>The data supports this structural shift. Organizations that combine strong organizational learning with learning specific to AI are up to 80% more effective at managing uncertainty.<a id="reflink4" class="reflink" href="#ref4">4</a> The implication is direct: Becoming better learners with AI is at least as important as using AI to create efficiencies.</p>
<p>The challenge for organizations worldwide is not whether or how AI will access their people’s domain expertise — that appears computationally inevitable. The issue is developing the competence and commitment to install mechanisms that reap compounding returns on human-AI interactions before competitors do.</p>
<p></p>
<h3>Three Steps to Compounding Benefits</h3>
<p>What do those essential mechanisms look like? We argue that organizations must prioritize three distinct but interrelated operations. When all three of the following steps are present and connected, organizations can reap compounding benefits on AI use. When any step is missing, organizations merely consume AI outputs.</p>
<p><strong>1. Verification.</strong> The question here is “Does this output meet the standard?” This step produces a binary answer: correct or incorrect, usable or not. Verification compares output against a criterion that already exists. Unverified AI output is noise with a confident tone. But verification, used alone, catches errors without generating learning.</p>
<p><strong>2. Evaluation.</strong> For this step, the question is “What does this output reveal?” Where verification compares output against existing standards, evaluation may generate standards that did not exist before. This is why evaluation requires domain expertise in ways verification often does not. The expert as evaluator is not merely checking quality. They are discovering <em>what quality means</em> in this new context. With AI outputs, evaluation is required across three dimensions: volume, variety, and velocity. Human bandwidth to do evaluations, not AI access, becomes the binding constraint.</p>
<p><strong>3. Learning capture.</strong> The third question is “How do we ensure that this insight persists?” When evaluation is not recorded, knowledge does not compound; it evaporates after each interaction. Learning capture converts single insights into organizational knowledge, such as documented criteria, updated prompts, and shared repositories of what worked and why. Think of it as version control for organizational judgment. Without it, evaluation is a one-time event. And learning capture alone (documentation without verification or evaluation upstream) produces nothing but organized noise.</p>
<p>Those three steps dynamically reinforce one another. Better verification produces cleaner signals for evaluation. Better evaluation generates richer material for capture. Better capture improves the criteria used in the next round of verification. The cycle is the point.</p>
<p></p>
<p>There is yet another valuable and scalable learning dividend: Most experts cannot fully articulate what makes their judgment good. Forcing that judgment into written standards, such as the way developers write CLAUDE.md files that specify what “good” code looks like, makes the tacit explicit for colleagues and for AI alike. The gap between what an LLM delivers and what the expert wanted surfaces unspoken knowledge. </p>
<p>At Anthropic, Cherny gives the AI a way to verify its own work — a test suite, a browser check — before a human ever sees it. To evaluate the work’s quality, he concurrently runs 10 to 15 Claude instances that generate swarms of smart subagents: One checks style while another hunts bugs, then a second cohort challenges the first for false positives. Capture is key: A CLAUDE.md file gathers mistakes, corrections, and design principles inside the workflow itself — not after its completion but while it is happening. Each new session inherits what every prior session learned. For Cherny and his developers, the benefits compound.</p>
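<p>For readers who want to see the shape of that first step, here is a minimal, hypothetical sketch — not Anthropic’s implementation — of letting the machine verify its own work before a person is asked to evaluate it. It assumes a code base that already has a pytest test suite; any existing verification harness could stand in.</p>
<pre><code># Illustrative sketch only: gate AI-generated changes behind an existing test suite.
import subprocess

def machine_verified(repo_dir):
    """Run the project's existing pytest suite; return True only if every test passes."""
    result = subprocess.run(["pytest", "-q"], cwd=repo_dir, capture_output=True, text=True)
    return result.returncode == 0

if machine_verified("."):
    print("Verified by machine -- escalate to a human for evaluation.")
else:
    print("Verification failed -- feed the failure log back to the model and iterate.")
</code></pre>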
<p>There are analogous questions for leaders of other business functions: What is your equivalent of version control for organizational decisions? Of automated testing for new approaches? Of code review to make evaluation criteria explicit and shared? The “verification-evaluation-learning capture” flywheel offers both challenge and opportunity for managers and executives who want to use AI to do measurably more than simply cut costs and improve efficiencies.</p>
<p>Consider a marketing team using AI to generate campaign briefs. Verification asks whether the brief meets basic brand standards, such as consistent tone, correct product claims, and regulation-compliant disclaimers. Automation is fast and cheap. Evaluation asks what the brief reveals: Did AI surface customer insights the team hadn’t named? Did it miss the emotional register entirely? Are these insights “actionable” — meaning, can they trigger interactions and offers to cultivate relationships and/or close deals? These judgments require a senior strategist, not a checklist. </p>
<p>Learning capture asks whether that strategist’s correction — “Our brand never leads with product features; it leads with customer identity” — gets written into a shared prompt template or brief standard for the whole team to use the next time. Without that last step, the strategist’s insight dies with the session. With it, every subsequent brief starts smarter. And perhaps that brief becomes the charter for designing an intelligent marketing agent.</p>
<p>The moment a CMO and/or CFO builds dashboards around those questions and criteria, the organization has begun compounding.</p>
<h3>When Verification Masquerades as Evaluation</h3>
<p>The machinery requires a human who holds the loop open when every instinct says to close it.</p>
<p>Jaana Dogan, a principal engineer at Google responsible for developer infrastructure on the Gemini API, ran a revealing experiment. She pointed Claude Code — a rival’s tool — at a problem her team had spent many months solving. Given a short prompt with no proprietary Google data, Claude Code generated a design solution comparable to the one her team had landed on, along with a working prototype.</p>
<p>Most managers, seeing that output, would just verify: “Does this match what we built? Close enough? Adopt or reject.” Verification is fast, comfortable, and binary. It answers the question already in your head.</p>
<p>Dogan did something different. She <a href="https://x.com/rakyll/status/2007240188645581224" target="_blank" rel="noopener noreferrer">decided</a>, “It’s not perfect and I’m iterating on it.” </p>
<p>Evaluation interrogates what the output reveals — about the problem, about your assumptions, and about what you haven’t yet named. Dogan could do this because she had months of judgment to bring to the encounter. AI compressed the implementation; it could not compress the formation of expertise. Without that prior work, only two moves exist: Accept or reject. With it, a third move opens up: Stay in the encounter and learn.</p>
<p>This is the distinction most organizations miss. They treat AI outputs as verdicts to be confirmed rather than starting points to be interrogated. The result is consumption dressed up as adoption — verification mistaken for the whole job.</p>
<p>The implication: Deploy AI first in domains where your people already have deep expertise, not because AI needs hand-holding but because evaluation requires someone capable of recognizing what “not perfect” actually means and knowing what iteration may reveal. The expert as evaluator is not a transitional role.</p>
<p></p>
<p>But Dogan’s insight lives only in her head until infrastructure captures it. The question for any organization is not whether individual experts can hold loops open — some always will. It’s whether the machinery exists to convert their judgment into shared knowledge that persists.</p>
<p>That machinery is what most organizations lack. They have experts. Some even have experts with the right disposition. What they don’t have is the infrastructure that makes compounding automatic rather than incidental.</p>
<h3>Building the Capability</h3>
<p>Translating these practices into infrastructure for business functions beyond software is the work that remains for leaders. This requires a minimum of five moves.</p>
<p><strong>1. Preserve your company’s evaluation expertise.</strong> To reap compound interest, you’re dependent on people who can accurately evaluate AI output. This is domain expertise repositioned: the expert as evaluator rather than the expert as producer. Organizations that let people’s deep expertise atrophy because “AI can do that now” will lose this very valuable capability.</p>
<p><strong>2. Build verification mechanisms.</strong> As noted above, the cycle cannot begin without verification of output. Software verification is cheap: Code runs or it doesn’t. Finance has moderate verification costs; models can be stress-tested against historical data, for example. Strategic planning has expensive verification costs: Long bets may not resolve for years. Most organizations treat expensive verification costs as a good reason not to start some work with AI tools. Instead, the smart move is doing <em>minimally viable verification</em>, the cheapest credible check that an AI output is not wrong. Consider multijudge systems that surface disagreement, and consistency checks that compare outputs across different formulations of the same problem. None of these guarantees correctness, but each offers enough verification to start the cycle. </p>
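<p>What might minimally viable verification look like in practice? The sketch below is an illustration under assumptions, not a prescribed toolset: several independent “judges” — rule checks, other models, or reviewers wrapped as functions — score the same output; unanimous approval passes, and any disagreement is surfaced for human review. The judge rules shown are invented for the example.</p>
<pre><code># Hypothetical example of minimally viable verification via a multijudge check.
def minimally_viable_verification(output, judges):
    """Return 'pass', 'fail', or flag disagreement for a human to resolve."""
    verdicts = [judge(output) for judge in judges]
    if all(verdicts):
        return "pass"
    if not any(verdicts):
        return "fail"
    return "disagreement -- route to human review"

# Invented judges for illustration; real ones would encode the team's own criteria.
judges = [
    lambda text: "guaranteed" not in text.lower(),  # no unsupported absolute claims
    lambda text: "TODO" not in text,                # no unfinished placeholders
    lambda text: text.strip().endswith("."),        # ends with a complete sentence
]
print(minimally_viable_verification("Draft brief: lead with customer identity.", judges))
</code></pre>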
<p><strong>3. Institute evaluation practices.</strong> Few organizations systematically evaluate AI outputs. After every significant AI interaction, users should ask three questions: What worked? What failed? What was interestingly wrong — wrong in a way that reveals something about the problem the team has not previously articulated? That third question is where hidden value lives. An output that fails in a way the expert noticed but had not yet named becomes new organizational knowledge: It is tacit expertise becoming explicit. People must be prompted to ask these questions as part of the existing workflow. Build evaluation into workflows to pave the way for value to compound.</p>
<p><strong>4. Create capture systems.</strong> Evaluation without capture evaporates. Capture systems operate on two levels: inferential (learning from patterns in accumulated traces, the way AI learns from historical data) and explicit (recording human judgment in retrievable form). Both matter. A practical approach to both is lightweight infrastructure: decision journals that record not just what was decided but why; prompt repositories that preserve what worked and what failed instructively; and evaluation logs that make the team’s evolving standards searchable. The design principle is retrievability, not comprehensiveness. A marketing team’s capture system is a prompt library and a shared brief template. A finance team’s is an annotated model log. Every function can build its equivalent of CLAUDE.md. Discipline, not cost or creativity, is the true constraint.</p>
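<p>Capture really can be this lightweight. The sketch below — hypothetical field names, standard library only — appends each evaluated interaction to a shared JSON-lines journal and makes prior lessons retrievable by keyword, which is all that “retrievability, not comprehensiveness” demands to get started.</p>
<pre><code># Illustrative capture system: a team-level evaluation journal (field names are assumptions).
import json
from datetime import datetime, timezone

JOURNAL = "evaluation_log.jsonl"

def capture(output_id, verified, what_worked, what_failed, lesson):
    """Append one evaluated AI interaction to the shared journal."""
    record = {
        "when": datetime.now(timezone.utc).isoformat(),
        "output_id": output_id,
        "verified": verified,
        "what_worked": what_worked,
        "what_failed": what_failed,
        "lesson": lesson,  # the correction the next prompt or brief should inherit
    }
    with open(JOURNAL, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def recall(keyword):
    """Retrieve earlier lessons that mention a keyword."""
    with open(JOURNAL, encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    return [r["lesson"] for r in records if keyword.lower() in r["lesson"].lower()]

capture("brief-042", True, "surfaced an unnamed customer insight", "led with product features",
        "Never lead with product features; lead with customer identity.")
print(recall("features"))
</code></pre>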
<p><strong>5. Measure the cycle, not just the output.</strong> Most organizations judge an AI deployment’s success using measures like tools adopted, hours saved, or tasks completed. These are consumption metrics. Organizations trying to reap compound returns measure the cycle: How many interactions were verified? How many were evaluated? How much learning was captured? How quickly did captured learning change subsequent practice? Did your team leaders learn things from AI interactions last week that changed how they worked this week? If not, the cycle is not running.</p>
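<p>Those cycle measures fall straight out of whatever journal a team keeps. A brief, hypothetical sketch, reusing the journal format above (the field names remain illustrative assumptions):</p>
<pre><code># Illustrative cycle metrics computed from the hypothetical journal above.
import json

def cycle_metrics(journal_path):
    """Report how much of the verify-evaluate-capture cycle is actually running."""
    with open(journal_path, encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    total = max(len(records), 1)
    verified = sum(1 for r in records if r.get("verified"))
    evaluated = sum(1 for r in records if r.get("what_worked") or r.get("what_failed"))
    captured = sum(1 for r in records if r.get("lesson"))
    return {
        "interactions": len(records),
        "verified_share": verified / total,
        "evaluated_share": evaluated / total,
        "captured_share": captured / total,
    }

print(cycle_metrics("evaluation_log.jsonl"))
</code></pre>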
<p></p>
<h3>The Deeper Transformation</h3>
<p>Leaders want to consume AI. They ask, “How do we produce faster, better, cheaper with AI?” The new question is “How do we systematically learn from what AI produces, and at speed?”</p>
<p>Productivity in an era of generative AI is not just output per unit of input. It is also determined by measurable learning per unit of interaction. Organizations that build the machinery to run the cycle — verify, evaluate, capture, apply — will build that capability over time. Those that do not will consume AI without converting it into knowledge. They’ll be busy, perhaps, but not learning and not reaping compound benefits.</p>
<p>Dogan’s eight words embody this shift: “It’s not perfect and I’m iterating on it.” She verified that the output was usable. She evaluated what it revealed. </p>
<p>She is iterating; her learning is being applied to the next interaction. The compounding cycle is running. It is available to any organization willing to build the machinery that makes it possible.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/how-to-reap-compound-benefits-from-generative-ai/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Job Pivots in the Age of AI: Lessons From Mike Mulligan and His Steam Shovel</title>
				<link>https://sloanreview.mit.edu/article/job-pivots-in-the-age-of-ai-lessons-from-mike-mulligan-and-his-steam-shovel/</link>
				<comments>https://sloanreview.mit.edu/article/job-pivots-in-the-age-of-ai-lessons-from-mike-mulligan-and-his-steam-shovel/#comments</comments>
				<pubDate>Thu, 02 Apr 2026 11:00:54 +0000</pubDate>
				<dc:creator><![CDATA[Scott F. Latham and Beth K. Humberd. <p>Scott F. Latham, Ph.D., is a professor in strategy at the Manning School of Business at the University of Massachusetts Lowell. Beth K. Humberd, Ph.D., is an associate professor of management at the Manning School of Business. </p>
]]></dc:creator>

						<category><![CDATA[Adaptation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Career Change]]></category>
		<category><![CDATA[Employee Psychology]]></category>
		<category><![CDATA[Employment]]></category>
		<category><![CDATA[Resilience]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Disruption]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Managing Your Career]]></category>
		<category><![CDATA[Skills & Learning]]></category>

				<description><![CDATA[Matt Harrison Clough As organizations like Amazon, PwC, and Microsoft have announced AI-fueled layoffs, it’s no surprise that half of Americans have expressed concern about AI’s larger potential impact on their jobs. Of course, companies can attribute layoffs to AI efficiencies while trimming workforces for various reasons. Yet there is no question that artificial intelligence [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/Latham_Pivot-1290x860-1.jpg" alt="" class="wp-image-126336"/><figcaption>
<p class="attribution">Matt Harrison Clough</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">As organizations</span> like Amazon, PwC, and Microsoft have announced AI-fueled layoffs, it’s no surprise that <a href="https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/" target="_blank" rel="noopener noreferrer">half of Americans</a> have expressed concern about AI’s larger <a href="https://doi.org/10.1038/s41598-024-75113-w" target="_blank" rel="noopener noreferrer">potential impact on their jobs</a>. Of course, companies can <em>attribute</em> layoffs to AI efficiencies while trimming workforces for various reasons. Yet there is no question that artificial intelligence is causing disruption in the job market, making both entry-level jobs and roles in functions like HR and project management, for example, harder to find. Workers and leaders are currently faced with an overwhelming amount of advice for navigating this period of uncertainty. As we move through a historic period of AI-driven labor disruption, why not turn to a place of comfort and simplicity in the pages of a well-known children’s book? </p>
<p>Our ongoing research, focused on the future of work, recently took us to the Virginia Lee Burton archives at the Cape Ann Museum in Gloucester, Massachusetts. Burton is well known for her children’s stories, including <cite>The Little House</cite>, <cite>Life Story</cite>, <cite>Katy and the Big Snow</cite>, and <cite>Mike Mulligan and His Steam Shovel</cite>. Through archival research, we learned that the story of Mike Mulligan offers powerful historic lessons on labor disruption and job adaptation that may provide comfort and guidance for workers and leaders in today’s AI age.</p>
<p></p>
<h3>The Story of Mike Mulligan and His Steam Shovel</h3>
<p>One of Burton’s most enduring stories is <cite><a href="https://www.youtube.com/watch?v=NQjHJKNyoUE" target="_blank" rel="noopener noreferrer">Mike Mulligan and His Steam Shovel</a></cite>, published in 1939, about steam shovel operator Mike and his steam shovel, named Mary Anne. (Befitting a children’s book, Mary Anne is an anthropomorphized earth-moving machine.) The story is set against a future of work that unfolded a hundred years ago. After the Great Depression, the U.S. economy experienced wide-scale mechanization, standardization, and mass production designed to lift the economy. As a team, Mike and Mary Anne play a significant role in the boom; they lay the foundations for buildings, open waterways for ships, level the ground for highways, cut tunnels for railroads, and smooth the earth for airfields. </p>
<p>However, their success is somewhat short-lived, as technological advancement brings superior machinery into play. At its core, <cite>Mike Mulligan and His Steam Shovel</cite> is a story of disruption, change, and adaptation. Mike and Mary Anne lose their jobs when new innovations arrive; steam shovels like Mary Anne and steam shovel operators like Mike Mulligan are no longer needed. </p>
<p>Burton writes, “Then along came the new gasoline shovels, and the new electric shovels, and the new diesel motor shovels, and took all the jobs away from the steam shovels.” As the image below conveys, Mike ends up sitting dejectedly on a log while Mary Anne cries oil tears — both of them out of a job at the hands of disruptive innovation. “No steam shovels wanted” is boldly painted on the fence in the background.</p>
<p><img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/Latham_fig1.jpg" alt="A sketchbook illustration for Mike Mulligan and His Steam Shovel showing Mary Anne the steam shovel standing idle beside a fence with "No Steam Shovels Wanted" painted on it, while Mike Mulligan sits slumped on a log in the foreground. Text above reads "Mike Mulligan and Mary Anne were VERY SAD."" class="mt20 mb8" width="100%"></p>
<p class="attribution" style="margin-bottom: 40px;">Cape Ann Museum Library and Archives</p>
<p>While at first things seem hopeless, the book shifts to a story of adaptation and ends with a successful occupational pivot. After digging a hole for the construction of a new town hall (their last job as a steam shovel and operator), Mary Anne becomes the steam furnace in the basement of the building. Mike becomes the building’s custodian, responsible for caring for the new furnace. </p>
<p>But arriving at that point was complex: Mike had to take a series of professional risks, trust in his ability to adapt, and persevere in the face of disruption in order to reinvent himself occupationally.</p>
<h3>Three Modern Lessons From Mike and Mary Anne’s Successful Pivots</h3>
<p>While doing our larger body of research on the future of work, we saw how this classic children’s story captures the critical underpinnings of a successful occupational pivot in the face of a dramatic, exogenous shift. It offers three key lessons for today’s workers facing a similar technology-driven disruption from AI tools.</p>
<h4>1. Embrace technology to realize a new occupational identity.</h4>
<p>The book foreshadows a dynamic that is central in today’s economy: The future of work will involve a high degree of human and technological collaboration. Not too long ago, the prospect of AI in our day-to-day work lives felt more like science fiction than reality; and yet, in the very near future, the vast majority of jobs will require employees to <a href="https://www.weforum.org/stories/2026/01/ai-agentic-workplace-human-resources/" target="_blank" rel="noopener noreferrer">work with artificial intelligence</a> to some degree. In some roles, AI has already <a href="https://www.ednc.org/how-much-could-ai-change-jobs-indeed-report-sheds-light-on-changing-labor-force-needs/#:~:text=The%20jobs%20most%20highly%20exposed,position%20fell%20into%20minimal%20transformation." target="_blank" rel="noopener noreferrer">changed the nature of the job</a> altogether. Yet workers across many professions continue to <a href="https://www.hrdive.com/news/employers-employees-resistant-hostile-to-AI/749730/" target="_blank" rel="noopener noreferrer">resist and combat</a> the inevitable rise of AI tools.</p>
<p>The first essential lesson to be drawn from <cite>Mike Mulligan and His Steam Shovel</cite> is the need to reconsider our working relationship with technology: Rather than seeing it as a disruption, we can embrace technology as a means of discovering new opportunities, and perhaps even a new professional identity.</p>
<p>When faced with the prospect of being a custodian, Mike could have politely declined the opportunity: “No, thank you. I am a steam shovel operator.” Doing so would have reflected a degree of ignorance of the larger disruption occurring (steam shovels being replaced by superior technologies). Instead, as the story illustrates, when offered an occupational pivot, Mulligan said, “Why not?” </p>
<p>Workers today can learn a lot from this. It can be anxiety-provoking to consider an occupational pivot, especially when your identity is tied to your work (“I am a steam shovel operator. It’s who I am!”). But Mike Mulligan leans into the disruption.</p>
<p>In the context of AI, we hear a lot about human-AI collaboration and even cobots, but are workers today truly embracing the interdependence? Rather than seeing AI simply as a technological tool, they can consider how the technology might provide a renewed sense of purpose in their careers, just as it did for Mike Mulligan. </p>
<p>As the technology evolved, Mike evolved in his career and his sense of self. Today’s accountants might be toiling away on Excel spreadsheets that soon will migrate to AI platforms (if they haven’t already). They may already be working with AI agents, or they soon will be. They can push back (“I’m an accountant, not a programmer!”), or they can learn from Mike Mulligan and say, “Why not?”</p>
<h4>2. Understand shifts in how value is delivered.</h4>
<p>Back in 2018, we wrote an article on <a href="https://sloanreview.mit.edu/article/four-ways-jobs-will-respond-to-automation/">the four ways in which jobs will respond to automation</a>. The central premise of our framework was a focus on value: We argued that every jobholder uses a set of core skills to deliver value in some form to a recipient, and thus the key to understanding job evolution is to consider adapting value provision based on emerging technologies. Ironically, Mike and Mary Anne seemed to understand this same premise better than some workers do today. </p>
<p>How did Mike and Mary Anne shift from steam shovel team to furnace team? In Burton’s world, the transition was predicated on the use of their respective skills to provide value in a new context. The last image in the book shows Mary Anne as a furnace connected to the heating ducts, applying her “skills” to deliver new value: providing heat. </p>
<p>Mike is shown sitting in a rocking chair next to Mary Anne, ensuring that her operation supports the building for many winters to come. The team once provided value through digging holes; they shifted to providing value by delivering heat to the town hall and maintaining the building.</p>
<p><img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/Latham_fig2.jpg" alt="An oval-shaped sketchbook illustration showing the basement of the Popperville town hall, where the steam shovel has been converted into a steam furnace connected to heating ducts. Mike Mulligan sits in a rocking chair reading a newspaper beside the furnace, while townspeople descend the stairs." class="mt20 mb8" width="100%"></p>
<p class="attribution" style="margin-bottom: 40px;">Cape Ann Museum Library and Archives</p>
<p>The lesson? While they were sad when their earlier jobs were taken over by superior engines, they were creative in finding a way to use their skills to provide value in a new context.</p>
<p>Several years ago, we worked on a project through a U.S. Department of Labor grant, using our job evolution framework to assist workers who were impacted by the closing of a nuclear power plant. These educated professionals, including nuclear engineers, scientists, and project managers, had expected to work their entire careers at the plant but now had to pivot to use their skills in new contexts. (The nuclear power plant job market was not booming at the time.) One of the biggest challenges we witnessed was that individuals tended to box themselves into their jobs as formally prescribed; they struggled to think about how their skills could deliver value in a new context. </p>
<p>Our framework, which focuses on separately assessing skill threats and forms of value delivery, helped those workers reframe the application of their skills outside of the nuclear industry. This effort landed some of the workers in IT, data science, or even environmental consulting roles. But doing so wasn’t an easy fix: It required personal reflection, analysis, and a willingness to make creative moves. Ultimately, by focusing on value creation, those professionals landed in places they never thought they’d be, much like Mike and Mary Anne.  </p>
<h4>3. Leaders must not lose sight of organizational purpose.</h4>
<p>Our last lesson is for leaders: Don’t fall prey to the siren call of AI at all costs. AI is an enabling technology meant to help organizations create new efficiencies and sources of value. A leader’s role is to consider the company’s higher identity and purpose — and then to help employees, customers, and key stakeholders understand how AI can serve and even strengthen that sense of purpose. </p>
<p>Though not specifically referenced in the book, the historical backdrop of Burton’s story is that Mike and Mary Anne were part of the <a href="https://doi.org/10.4324/9781315743219" target="_blank" rel="noopener noreferrer">Works Progress Administration</a> — a Roosevelt-era federal jobs program that was instrumental in getting people back to work during the Depression. Yet many historians have noted that in addition to job creation, the WPA’s primary purpose was to <a href="https://www.npr.org/2020/04/04/826909516/in-the-1930s-works-program-spelled-hope-for-millions-of-jobless-americans" target="_blank" rel="noopener noreferrer">instill hope in a down-and-out country</a>.</p>
<p></p>
<p>In the book, Mike and Mary Anne’s greater purpose and value also lay in providing hope — to the town, through the new town hall where they worked as a team. It’s a lesson that organizational leaders need to consider. What <a href="https://sloanreview.mit.edu/article/unlock-the-power-of-purpose/">organizational purpose</a> is AI strengthening? Also, what aspects of organizational identity do your company’s AI plans reflect to workers and other stakeholders?</p>
<p>For example, Lyft’s leaders have described <a href="https://www.adweek.com/brand-marketing/purpose-driven-how-lyft-balances-tech-trust-and-human-connection/" target="_blank" rel="noopener noreferrer">the company’s AI integration work</a> as grounded in its long-standing purpose “to serve and connect.” Rather than shaping the company’s AI narrative around the tools, leaders are keeping the company’s purpose front and center. </p>
<p>Think about the underlying reason your organization exists. AI strategies should ultimately reflect who your company is (organizational identity) and its reason for being (organizational purpose).  </p>
<h3>Resilience in the Face of Disruption</h3>
<p>Collectively, these three lessons fall under a broader theme from Mike and Mary Anne’s story: resilience. On the back of Mary Anne, a sign proudly proclaims “Mike Mulligan — Dig Anything, Any Time, Any Place.” The message captures confidence in the pair’s abilities, and a willingness to work; indeed, their work ethic and perseverance are the basis of their pivot. When they are displaced by innovation, they scour the country for new jobs and believe enough in themselves to take on the challenge of building a town hall in Popperville — as a team. (Burton explicitly states that Mike couldn’t abandon Mary Anne.) They embrace resilience in the face of disruption.</p>
<p><img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/Latham_fig3.jpg" alt="A sketchbook illustration for the book's title page showing Mary Anne the steam shovel bursting dramatically through the page, her bucket raised and treads visible, with radiating lines conveying energy and motion." class="mt20 mb8" width="100%"></p>
<p class="attribution" style="margin-bottom: 40px;">Cape Ann Museum Library and Archives</p>
<p></p>
<p>While the current period of AI disruption feels new to many of us, the experience of labor disruption is truly timeless. In the Cape Ann Museum’s archives, we found a letter from a fan to Burton dated Dec. 5, 1942. The reader, Mrs. Helen Baurd, shares that her father was a steam shovel operator who, along with his colleagues, held the Mike Mulligan story near and dear to his heart and, in fact, passed the book around: “‘Mike Mulligan’ traveled all over. ... The men loved it,” she writes. Imagine a first edition of <cite>Mike Mulligan and His Steam Shovel</cite>, covered in grease and shared among operators on lunch breaks, providing inspiration for those men to continue working. The fan’s letter concludes powerfully, “I thot you would be interested to know you are not only giving pleasure to children but to many grown-ups as well.”</p>
<p>Whether it be pleasure or inspiration you take from Mike and Mary Anne’s story, it captures the real-world challenges of individuals dealing firsthand with job disruption. The letter’s closing sentiment is the basis for this article. While <cite>Mike Mulligan and His Steam Shovel</cite> is a children’s story, we believe that it offers a powerful parallel for individuals who want to write their own ending in this age of AI. One hundred years ago, the hero was a steam shovel operator; today, it might be a programmer or nuclear engineer. Whatever our role may be, we can all learn about career pivots and resilience from Mike Mulligan and Mary Anne. </p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/job-pivots-in-the-age-of-ai-lessons-from-mike-mulligan-and-his-steam-shovel/feed/</wfw:commentRss>
				<slash:comments>1</slash:comments>
							</item>
			</channel>
</rss>