<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

	<channel>
		<title>MIT Sloan Management Review</title>
		<atom:link href="http://sloanreview.mit.edu/feed/" rel="self" type="application/rss+xml"/>
		<link>https://sloanreview.mit.edu</link>
		<description>Sustainable Innovation</description>
		<lastBuildDate>Wed, 15 Apr 2026 17:43:50 +0000</lastBuildDate>
		<language>en-US</language>
				<sy:updatePeriod>hourly</sy:updatePeriod>
				<sy:updateFrequency>1</sy:updateFrequency>
		<generator>https://wordpress.org/?v=6.9.4</generator>
			<item>
				<title>The Human Side of AI Adoption: Lessons From the Field</title>
				<link>https://sloanreview.mit.edu/article/the-human-side-of-ai-adoption-lessons-from-the-field/</link>
				<comments>https://sloanreview.mit.edu/article/the-human-side-of-ai-adoption-lessons-from-the-field/#respond</comments>
				<pubDate>Tue, 14 Apr 2026 11:00:06 +0000</pubDate>
				<dc:creator><![CDATA[Ganes Kesari. <p><a href="https://www.linkedin.com/in/gkesari/" target="_blank">Ganes Kesari</a> is founder and CEO at <a href="https://tensorplanet.com/" target="_blank">Tensor Planet</a>, a software product company focused on predictive maintenance for commercial vehicle fleets.</p>
]]></dc:creator>

						<category><![CDATA[AI Strategy]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Change Management]]></category>
		<category><![CDATA[Employee Psychology]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Disruption]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leading Change]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR Not a day goes by without another article being published about how AI could disrupt yet another aspect of our business or personal lives. In recent years, AI adoption has indeed taken off. However, if you pay close attention, you’ll notice a dichotomy. Many examples of successful early adoption of artificial intelligence [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Kesari-1290x860-1.jpg" alt="" class="wp-image-126585"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">Not a day goes by without</span> another article being published about how AI could disrupt yet another aspect of our business or personal lives. In recent years, AI adoption has indeed taken off. However, if you pay close attention, you’ll notice a dichotomy. </p>
<p>Many examples of successful early adoption of artificial intelligence tend to come from a small cluster of industries that are heavily digitized or are pro-technology. The usual suspects include banking, financial services, e-commerce retailers, and the like. However, some other industrial sectors, many of which are big contributors to our economy, don’t show the same level of progress or enthusiasm when it comes to AI adoption. </p>
<p>Take the example of specialty and essential services industries such as construction, mining, or waste management. These companies are part of a robust economy but are largely powered by decades-old legacy software, with some processes still handled with pen and paper. While AI has made nascent inroads here, the levels of adoption leave much room for growth.</p>
<p>Leaders in these industries often reason that their processes are stable and have served them well for decades. Yes, things might break once in a while, leading to customer service disruptions and rework for the team. But the business has always recovered. People in these industries may view AI as gimmicky, burdensome, or untrustworthy.</p>
<p>Having spent more than 15 years helping dozens of industries embrace AI, I’ve been curious to study what distinguishes the two sets of leaders and the quite different levels of AI adoption they achieve. And, importantly, I’ve spent years in the trenches experimenting with techniques that help address adoption challenges.</p>
<p>Here, I’ll share what’s at the root of the leadership challenge and how leaders in industries that have been conservative about AI can orchestrate meaningful change. Let’s examine some grounded examples and no-nonsense tips for AI adoption.</p>
<h3>Why AI Adoption Lags in Some Industries</h3>
<p>My experience in the field points to three prevalent factors holding back some industries from moving forward with artificial intelligence.</p>
<h4>1. AI feels inaccessible and scary.</h4>
<p>When you can’t comprehend something, you start developing a fear of it. When everyone around you seems to talk about it and you feel left behind, the fear only grows. When the technology feels intrusive and uncomfortable, you draw back into your shell.</p>
<p>This is exactly what’s happening with AI when it comes to a majority of late adopters in both private and public sectors. The hype around AI and the seemingly irrational excitement of tech pundits only alienates people in cautious companies. To make matters worse, anytime there’s news about an uninformed AI investment backfiring or machine learning algorithms going rogue, it solidifies the narrative that AI is inaccessible and not ready for the masses yet.</p>
<p>Driver-facing AI-enabled cameras in freight vehicles are a case in point. For truck drivers, a camera inside the cab feels intrusive and disciplinary long before it’s perceived as a safety or performance-aiding tool. A <a href="https://truckingresearch.org/2023/04/new-atri-research-identifies-strategies-for-improving-driver-facing-camera-approval-and-utilization/" target="_blank" rel="noopener noreferrer">report by the American Transportation Research Institute</a> shows that truck drivers’ approval of driver-facing cameras tends to be low: just 2.24, on average, on a 0-to-10 scale among 650 current users from across the industry.</p>
<h4>2. AI looks like a lot of avoidable work.</h4>
<p>AI is often touted as a savior that automates drudgery. But people on the ground who are tasked with making the AI tools work and integrating them into workflows may perceive AI as creating <em>extra</em> work, not relieving them of it. </p>
<p>With front-line teams in labor-intensive industries often feeling overstretched and under-supported, the need for more training or changes to existing workflows just adds friction before adding any value. In many late-adopting industries, AI is immediately associated with capital-heavy hardware and forced operational change. </p>
<p>It doesn’t help that organizational memories are often clouded by many failed or painfully stretched technology rollouts — think enterprise resource planning systems, safety tools, telematics systems, and so on. People wonder whether this AI-tools wave is another fad that’s worth waiting out. When you take a deeper look, you realize that change fatigue, not an aversion to technology, is the real blocker.</p>
<h4>3. AI benefits don’t really seem worth the pain.</h4>
<p>Most technology evangelists and leaders commit the blunder of communicating AI value in the wrong currency. Improved accuracy or productivity boosts mean little to front-line operators, who care more about customer escalations, rework, or operating costs.</p>
<p>In a 2025 <a href="https://www.deloitte.com/se/sv/Industries/technology/perspectives/ai-roi-the-paradox-of-rising-investment-and-elusive-returns.html" target="_blank" rel="noopener noreferrer">executive survey by Deloitte</a>, although 65% of leaders said that AI is part of their corporate strategy, many also acknowledged that the ROI is neither immediate nor purely financial. From a front-line worker perspective, the cost of learning and adopting an intimidating technology like AI feels personal, but the benefits feel abstract and impersonal. </p>
<p>When it’s difficult to articulate tangible business outcomes from AI for the next quarter, such initiatives struggle to secure or sustain sponsorship and are easily deprioritized. Every time AI implementations fail to deliver on vague goals, which is quite often, the trust deficit only grows.</p>
<h3>Three Pillars for Successful AI Adoption</h3>
<p>How can you, as a leader, address those challenges and set your organization up for success? Consider these three essential strategies.</p>
<h4>1. Use everyday analogies to make AI less threatening.</h4>
<p>Education is a prerequisite for meaningful AI adoption. When your end users don’t understand why they should use or trust AI, the initiative is dead on arrival. How can you make AI accessible to an audience that’s not digital-native?</p>
<p>AI is no longer a rarity in everyday life. Yet some people don’t realize that they already use AI dozens of times every single day. Don’t we unlock phones with facial recognition? Aren’t even unbranded smartwatches good at detecting workout activities or flagging an irregular heart rhythm? Don’t some people delight at discovering long-lost school buddies through Facebook or Instagram friend recommendations?</p>
<p>Each of these examples is an instance of AI at work. In conversations with leaders, when I share these as examples of sophisticated AI use by the general public, it surprises them every single time. Once the technology is reframed this way, conversations can begin to shift from fear of AI to a curiosity around where else it might be at play. You make real progress when you demystify AI through familiar experiences rather than technical lectures.</p>
<p>This framing also enables a more honest discussion about the potential of AI and the threat to jobs. In many professions, people then begin to appreciate that they are more likely to lose opportunities not to the AI itself but to other humans who know how to use AI better. This strengthens AI’s positioning as assistive and AI tool use as another skill to acquire.</p>
<p>Take the case of AI platform Hey Bubba, designed for trucking owner-operators and small trucking companies. Instead of using dashboards or complex workflows, the system operates entirely through voice. Drivers can search and book freight, negotiate with brokers, find parking, and book hotels through natural conversations, with the help of AI. This service works because it builds on familiar uses of AI assistants, such as Siri and Alexa, and thus feels natural.</p>
<h4>2. Integrate AI into systems people already use.</h4>
<p>Is it easier to renovate a house or ask people to move into a brand-new one with unfamiliar rooms, rules, and routines? With AI adoption, you want to take the renovation approach. It’s a blunder to attempt a big-bang rollout of AI across an organization.</p>
<p>Always start with incremental changes to existing workflows and software. Remember that your teams already use dozens of software tools. These are the best starting points where leaders can inject AI and gently nudge user adoption.</p>
<p>For example, most front-line teams already live inside software, such as billing systems, customer relationship management systems, dispatch tools, maintenance software, or safety logs. Some of these systems may be clunky, but they are heavily used and largely unavoidable. The pain points within these systems could act as perfect entry points to introducing AI — places where users could see the value and welcome the initiative with open arms. When AI meets people where they already work, curiosity replaces resistance.</p>
<p>Take the case of fleet maintenance. Most technicians and supervisors already spend their days inside a computerized maintenance management system. Work orders are logged there. Inspections are recorded there. Breakdowns are investigated there. </p>
<p>An effective approach to introducing AI that can predict vehicle failures, for example, is to embed AI directly into the maintenance systems users already trust. AI can flag recurring fault codes, highlight assets with rising failure risk, or suggest prioritizing certain work orders before a breakdown occurs. </p>
<h4>3. Quantify AI’s impact using metrics people already track.</h4>
<p>Once you make AI accessible and identify familiar avenues to inject it, the quickest way to earn buy-in is to lead with the business result it unlocks. </p>
<p>Start by anchoring AI value to outcomes that stakeholders really care about and are judged on. Usually, there are two perspectives: creating upside (growth or throughput) or preventing downside (lost revenue or risk reduction). Examples of upside metrics are win rates or asset utilization, while downside metrics include cost leakage or service disruptions. Remember: New KPIs always trigger debate and delay action, whereas familiar metrics accelerate alignment.</p>
<p>Next, pick a combination of short-term impact and long-horizon projections. Sticking just to lag metrics could disillusion stakeholders, who need to see quicker momentum to retain confidence and excitement for AI. For example, a reduction in customer complaints is a lead metric that validates short-term progress, while incremental revenue from repeat customers is a lag metric that might take a few quarters to materialize.</p>
<p>Consider the <a href="https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/unlocking-profitable-b2b-growth-through-gen-ai" target="_blank" rel="noopener noreferrer">example of an industrial materials distributor</a> focused on accelerating growth. The company struggled to systematically identify and act on new business opportunities. Field sellers relied on manual, time-intensive methods, such as driving through cities to visually spot new construction projects. The process was inconsistent, slow, and difficult to scale.</p>
<p>The company built an AI engine that combined internal sales data with external signals to score and prioritize potential opportunities and recommend relevant products. Generative AI was then applied to extract insights from unstructured public data, such as construction permits, to identify upcoming capital projects.</p>
<p>These insights were embedded into existing sales workflows to personalize outreach at scale. The approach unlocked new opportunities in the first year, significantly expanding the sales pipeline and improving success rates for email outreach — both of which were existing sales metrics that stakeholders already cared about.</p>
<h3>Where AI Adoption Is Really Won or Lost</h3>
<p>In late-adopting industries, AI doesn’t fail because the technology falls short. AI often fails because leaders underestimate the human and operational context in which AI tools are introduced. We must remember that front-line skepticism is not resistance to progress — it’s just a rational human response that can be influenced when tackled strategically.</p>
<p>The organizations that move fastest follow a clear progression. They demystify AI by promoting understanding among people; embed AI into existing workflows before forcing new ones; and prove AI’s value using metrics that are already being used to reward or penalize people. When these conditions are met, adoption becomes a pull factor as opposed to a hard push.</p>
<p>The way forward for late-adopter industries is not to imitate tech-first sectors but to adopt AI on their own terms. Successful leaders treat AI as a capability to be woven incrementally into daily work rather than a system to be rolled out abruptly. In these environments, user comfort and trust, not algorithms, ultimately determine whether AI delivers on its promise.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/the-human-side-of-ai-adoption-lessons-from-the-field/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Managing Up: A Skill Set That Matters Now</title>
				<link>https://sloanreview.mit.edu/article/managing-up-a-skill-set-that-matters-now/</link>
				<comments>https://sloanreview.mit.edu/article/managing-up-a-skill-set-that-matters-now/#respond</comments>
				<pubDate>Mon, 13 Apr 2026 11:00:31 +0000</pubDate>
				<dc:creator><![CDATA[Phillip G. Clampitt and Bob DeKoch. <p>Phillip G. Clampitt is the Blair Endowed Chair in Communication at the University of Wisconsin-Green Bay. Bob DeKoch is the founder of the leadership consulting firm Limitless and a former president of The Boldt Company. They are the coauthors of <cite>Leading With Care in a Tough World: Beyond Servant Leadership</cite> (Rodin Books, 2022).</p>
]]></dc:creator>

						<category><![CDATA[Communication]]></category>
		<category><![CDATA[Human Psychology]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[Leadership Development]]></category>
		<category><![CDATA[Leadership Style]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Managing Your Career]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Getty Images Are you skilled at managing up? If your talents are lacking when it comes to managing and dealing with the people above you in the organizational hierarchy, you can find yourself mired in some unpleasant and career-harming situations. Maybe you’re frustrated by a micromanaging supervisor or feeling marginalized by [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Clampitt-1290x860-1.jpg" alt="" class="wp-image-126588"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Getty Images</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">Are you skilled at managing up?</span> If your talents are lacking when it comes to managing and dealing with the people above you in the organizational hierarchy, you can find yourself mired in some unpleasant and career-harming situations. Maybe you’re frustrated by a micromanaging supervisor or feeling marginalized by them. Maybe you feel constantly in the dark about your manager’s expectations, or you’re tired of absorbing an outsize number of shocks for your team. Any of these can be a warning signal that you need to work on effective upward communication and leadership. </p>
<p>It’s an important set of skills right now. With some organizations using artificial intelligence to eliminate middle layers of management, the ability to manage up has become even more vital to your career — and your organization’s success. Leaders above are often unaware of what they don’t know, and they might be misled by AI.</p>
<p>If you want to strengthen your ability to lead up, you need to know how to assess your skills — and bolster them.</p>
<p>We define effective managing up, or upward leadership, as “listening to those higher in rank and influencing them to assist you and your team to better embody the organization’s values and fulfill its mission, strategy, and goals.”<a id="reflink1" class="reflink" href="#ref1">1</a> Successful upward leaders create sustainable wins for the boss, team, and organization.</p>
<p>Notice that this definition starts with listening. Just because someone wrote down the organization’s values, mission, strategy, and goals on ever-available, wallet-sized notecards or displayed them in a flashy PowerPoint graphic does not ensure that everyone will interpret the ideas in a similar and synergistic fashion. The written word is not enough. Understanding the nuances of interpretation requires active listening for unstated sentiments. </p>
<p>Leading up also, of course, involves influencing. Effective upward leaders establish connections, circumvent problems, and convince those in power to embrace opportunities, innovations, and novel insights. But assisting is equally important. Think of an NBA assist wizard like LeBron James who knows when and where to deliver the ball to other players so they can score. Assisting requires proper alignment between team members, knowledge of who is in position to score, and a willingness to let others shine.</p>
<h3>Three Roles You Play While Managing Up</h3>
<p>Based on surveys of thousands of employees and hundreds of interviews with midlevel managers, we discerned that people leading up assume three interrelated roles: </p>
<p><strong>Buffer.</strong> The buffer dampens frustrations from above (and below), absorbing complaints, gripes, annoyances, and, potentially, offensive remarks. Successful buffers actively listen for underlying (often unstated) sentiments and seek understanding of key (but often vague) goals to protect others from irrelevant or unintended messages.</p>
<p><strong>Translator.</strong> The translator receives information, directives, and perspectives from above (and below). Then they convey the meaning in the language of the audiences at those levels, minimizing potential misunderstanding while respecting the sensibilities of the audience. </p>
<p><strong>Advocate.</strong> The advocate seeks to persuade or dissuade others in positions above (or below) their own. This could mean sharing differing opinions, arguing for a new direction, or pushing back on a new idea or policy.<a id="reflink2" class="reflink" href="#ref2">2</a></p>
<p>It’s not enough to be skilled at one of these roles. Artfully leading upward requires an integration of all three. For example, advocates must translate a pushback comment into a language understood by others while buffering away minor issues. Likewise, a buffer must act as a translator when anticipating how pushback language might be misinterpreted by people above. The translation may, in turn, result in advocating for a change in the directive’s wording to increase the odds of acceptance. </p>
<p>There is no magic formula to determine the right balance, because it will vary with each situation. However, leaning too heavily into one role usually signals problems. If you, as a leader, spend most of your time buffering employees from verbal storms from on high, then it might be time to augment your role as an advocate. </p>
<p>Leading upward does not come naturally to most people. In fact, in his 2001 book, <cite>Leading Up: How to Lead Your Boss So You Both Win</cite>, Wharton professor Michael Useem suggested that just one-third of managerial employees had the necessary skills and desire to do so.<a id="reflink3" class="reflink" href="#ref3">3</a> But you can rewrite your own story by properly assessing your upward leadership talents and then strategically applying them. </p>
<h3>Assess Your Ability to Manage Up</h3>
<p>The best way to improve your upward leadership acumen starts with assessing your current talent level. These three questions can help you judge.  </p>
<p><strong>What role do you primarily perform when you are most frustrated?</strong> Aggravation, frustration, and irritation go with any job but can also signal role imbalance. For example, if you feel micromanaged, you may be overplaying the buffer role and not voicing concerns (the advocate role) about optimizing your own working environment.</p>
<p><strong>What role do you primarily perform when you are in a state of flow?</strong> In his seminal 1990 book, <cite>Flow: The Psychology of Optimal Experience</cite>, Mihaly Csikszentmihalyi describes flow as “a sense that one’s skills are adequate to cope with challenges at hand. … Concentration is so intense that there is no attention left over to think about anything irrelevant.”<a id="reflink4" class="reflink" href="#ref4">4</a> Ideally, your state of flow involves the skillful and seamless fulfillment of all three roles. But that mastery rarely happens, because we all have a tendency to lean too heavily on a role or skill that comes naturally to us. For example, selling or advocating may be your “happy place,” but leaning on that ability alone will not allow you to excel at upward leadership. For that, you’ll need to master the skills of buffering and translating.</p>
<p><strong>Are you equally comfortable performing these roles in both directions (upward and downward)?</strong> Many people selectively employ their buffering, advocating, and translating skills when communicating with people at higher authority levels. This might be healthy in some cases, but it could also be a red flag, revealing that you lack a healthy relationship with those in power and are unwilling to engage in candid, if sometimes difficult, conversations.</p>
<h3>Build Three Key Skills to Manage Up Better</h3>
<p>Once you’ve thought through your role tendencies, it is time to build your buffering, translating, and advocating skills. </p>
<h4>Buffering</h4>
<p>Buffering skills and sensibilities are largely self-taught. Take cues from politicians, coaches, or leaders you admire. Watch successful leaders during press conferences. Some of them ignore the passion of the critic, others deflect unpleasant issues, and some selectively listen for words that they can turn to their advantage. Building up this emotional thick skin takes time and perspective. </p>
<p>Alida Al-Saadi, a former senior executive at Korn Ferry and Accenture, shared this incident: “A manager repeatedly pushed me to be ‘more concise,’ despite being famously long-winded himself. At first it felt unfair. Eventually I understood that thick skin isn’t arguing the irony; it’s hearing what someone needs from you and deciding, deliberately, how to strategically adjust.”<a id="reflink5" class="reflink" href="#ref5">5</a> In short, buffering her reactions and deferring the debate about the accuracy of his critique enhanced their working relationship. </p>
<p>However, buffering does not mean just passively absorbing blows. After all, a shock absorber can only absorb so many shocks before the source of the trouble has to be addressed. Good buffers learn to have productive conversations with their superiors by identifying key issues and rephrasing concerns that might be red flags for their team. Skilled buffers actively listen to engage in productive conversations that support team motivation and performance. This means tuning your antenna to what’s not being said and homing in on ideas that need further development.</p>
<h4>Translating</h4>
<p>Turning your own or your team’s reactions, concerns, or feelings into words that a superior can understand may be all it takes to shift that leader’s position, tweak an idea, or change a disagreeable behavior; it’s one step short of advocacy. This requires an underappreciated ability to convey emotional reactions in a respectful manner. </p>
<p>For example, sometimes employees who first hear about a major organizational change react with colorful and offensive language.<a id="reflink6" class="reflink" href="#ref6">6</a> In those cases, effective leaders accurately relay those sentiments to the higher-ups without sharing personal invectives. A descriptive statement like, “They weren’t very happy” or “They expressed their displeasure in strong language” allows for further discussion that focuses on the substantive issues driving the reactions. </p>
<p>Building your translating skills sometimes means learning new vocabulary. That’s because you should shift your reporting from a direct to an indirect approach for more contentious issues. Directly pushing back with a comment like “I disagree” isn’t always the best option. An indirect and often more effective approach could be to say, “If someone were to play devil’s advocate, they might say …” or “Is there another way to look at this issue?” These phrases distance the pushback in a manner that does not directly challenge the egos of the people above.</p>
<h4>Advocating</h4>
<p>Speaking up for your team, say, by nudging superiors in a different direction, represents the most challenging role. What are the best ways to do it? For starters, link to the superior’s underlying motivations, sensibilities, and mental framework. Successful upward leaders frame their team’s reaction to an idea or policy change by first acknowledging the positive intentions of the idea or policy before sharing the team’s suggested tweaks. </p>
<p>They also provide evidence that their superiors find credible. Different supervisors value different kinds of evidence to arrive at conclusions. Some put more faith in statistics, AI projections, or models, while others trust case studies, expert advice, personal testimonies, or historical analogies. </p>
<p>Finally, sense when to back off. Some leaders mistakenly expect quick or even instantaneous agreement from their superiors after proposing initiatives, program tweaks, personnel changes, or innovative suggestions. However, persuasion often requires patience and a willingness to back off at the right time to allow others time to shift the tumblers in their minds before locking something new in place. Pushing too hard or too soon can close the door on any new ideas.</p>
<h3>Habits of Successful Upward Leaders</h3>
<p>Skill-building sets the stage, but successful upward leaders also use the following strategies regularly to maximize their performance and help their organizations thrive.</p>
<h4>Actively build a relationship of candor and trust with people above you in the hierarchy.</h4>
<p>Do you reflexively assume that those above you fully trust you? Misreading these interpersonal dynamics is frustrating and can introduce relationship troubles: You might excessively buffer the superior from challenges you face in your department (unwarranted buffering), be overly candid about your own reactions or your employees’ outbursts (unedited translating), or offer unwelcome advice (inappropriate advocating). Instead, consider taking the following actions to establish an empowering relationship of trust.</p>
<p><strong>Take the first step.</strong> Ideally, superiors would seek out and build robust, healthy relationships with direct reports. But in our research, we’ve found that to be more the exception than the rule. Consequently, leaders in subordinate positions must often take active steps to build strong, candid relationships.<a id="reflink7" class="reflink" href="#ref7">7</a> Sometimes that requires the assertiveness and subtlety of a mixed martial arts fighter like Ronda Rousey. Yes, <em>subtlety</em>: Rousey was able to persuade the CEO of the Ultimate Fighting Championship, Dana White, to create a women’s division — even though he had publicly declared that he’d never do it. She took the first step by requesting a 15-minute meeting with White, seeking career advice, and then effectively advocated for her idea. The meeting morphed into a 45-minute discussion and resulted in the new UFC women’s division.<a id="reflink8" class="reflink" href="#ref8">8</a></p>
<p><strong>Mind the cadence and robustness of meetings with your supervisors.</strong> Your investment in establishing a relationship with superiors can dwindle away without routine and robust communications. The communication cadence needs to keep pace with the fast-changing organizational dynamics. And discussions need to be robust enough to allow the relationship to emerge beyond a position-to-position discussion to more of a person-to-person dialogue. Ideally, that means regularly scheduled face-to-face discussions with your boss, plus skip-level meetings with other people above you in the hierarchy. Advocating for such a time commitment may require some lobbying, but it will spawn benefits by minimizing disconnects and maximizing organizational alignment.<a id="reflink9" class="reflink" href="#ref9">9</a></p>
<p><strong>Avoid assuming that what worked with one supervisor will work with another.</strong> Just because a previous supervisor trusted you to be a great buffer, translator, or advocate doesn’t mean a different person in the organization will. While working with various people in the hierarchy above you, you must seek out signals about which problems you can handle on your own without reporting upward (buffering). Additionally, you need to search for cues about which issues are off-limits before offering unsolicited advice (buffering and advocating). Your supervisor might welcome tweaks to organizational strategy, but those higher up may not be as open to the pushback.</p>
<h4>Adopt an educational mindset.</h4>
<p>George Reed served as a dean at the University of Colorado — Colorado Springs and an instructor at the U.S. Army War College. He reminded us with a smile, “I’ve had to educate more than a few new chancellors and commanders in my career.”<a id="reflink10" class="reflink" href="#ref10">10</a> When someone new assumed command, Reed started from zero by providing background about his department or division and then sought to earn the newcomer’s trust so that he could buffer, advocate, and translate as he saw fit. </p>
<p>Emotionally, this may seem like going backward, but it is essential to establishing a productive working relationship. Sometimes a well-selected list of “10 things everybody should know about our department” does the trick and starts an illuminating educational discussion.<a id="reflink11" class="reflink" href="#ref11">11</a> </p>
<p>Take the following actions to bolster your educational mindset. </p>
<p><strong>Assess the risks of advocacy.</strong> Deciding how and when to advocate revolves around the question “How open will my superior be to my influence attempt?” Correcting a client’s misspelled name on a pending document typically would be zero risk. On the other hand, drawing your supervisor’s attention to an annoying personal habit of theirs, such as always being late to meetings, would be a higher risk (as outlined in the table below).</p>
<div class="callout-highlight">
<aside class="l-content-wrap">
<article>
<h4>Common Conversation Points: Mind the Risk Level</h4>
<p class="caption">
<table id="Chart1" class="chart-vertical-stripes no-mobile">
<thead>
<tr>
<th><strong>Higher-Risk Issues</strong></th>
<th><strong>Lower-Risk Issues</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>
<ul>
<li>Annoying personal qualities (such as interrupting others or pettiness)</li>
<li>Character flaws (such as arrogance or impulsiveness)</li>
<li>Competency concerns</li>
<li>Ethical issues (such as dishonesty)</li>
<li>Personal-life concerns</li>
<li>Policy disagreements</li>
<li>Poor performance (such as missed goals)</li>
<li>Unsolicited pushback</li>
</ul>
</td>
<td>
<ul>
<li>Positive operational results</li>
<li>Minor policy tweaks</li>
<li>Differing technical interpretations</li>
<li>Praise</li>
<li>Differing data interpretations</li>
<li>Solicited pushback</li>
<li>Recognition of personal/professional accomplishments</li>
<li>Small changes on documents/presentations</li>
<li>Fresh insights on challenges</li>
<li>Requests for career advice</li>
</ul>
</td>
</tr>
</tbody>
</table>
<p><!--IMAGE FALLBACK FOR MOBILE BELOW --><br />
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Clampitt_Upward_Essay_Table_REV.jpg" alt="A two-column table comparing higher-risk and lower-risk issues. Higher-risk issues include: annoying personal qualities (such as interrupting others or pettiness), character flaws (such as arrogance or impulsiveness), competency concerns, ethical issues (such as dishonesty), personal-life concerns, policy disagreements, poor performance (such as missed goals), and unsolicited pushback. Lower-risk issues include: positive operational results, minor policy tweaks, differing technical interpretations, praise, differing data interpretations, solicited pushback, recognition of personal/professional accomplishments, small changes on documents/presentations, fresh insights on challenges, and requests for career advice." class="no-desktop">
</p>
</article>
</aside>
</div>
<p>Issues can shift from one column to the other, depending on the particular supervisor-report relationship and the organizational culture. Your goal over time, of course, is to move as many issues as possible to the second column.</p>
<p>As a relationship matures, people learn to better identify others’ touchy subjects and anticipate their likely responses to a direct style of advocacy. A high-quality relationship between leaders allows a high degree of candor and a high volume of advocacy.</p>
<p>But lower-quality relationships or newer ones often improve with the deft use of more indirect advocacy and thoughtful translation. </p>
<p>Regardless of relational quality, a strong mutual commitment to shared values allows for more direct advocacy. For example, on a construction site or factory floor that has a strong safety culture, candid advocacy about potential safety concerns can be successful regardless of rank or relationship status. </p>
<p><strong>Reserve private conversations for more delicate matters.</strong> Unfortunately, not all leaders welcome pushback in public forums. Advocating for a shift or a tweak to a superior’s pet project in front of a group will often shut down further discussion because it may threaten the leader’s ego.</p>
<p>For example, consider a supervisor who occasionally launches into an annoying behavior like overselling initiatives to others and not allowing time for further discourse. Enlightening the supervisor about this off-putting tendency should usually be reserved for private, one-on-one, ego-protecting conversations. Discussions like these are particularly tricky because selling may be the supervisor’s forte. Often, someone’s greatest ability has an unrecognized downside that needs to be throttled back in certain situations or offset with other skills. </p>
<h4>Routinely rebalance your upward leadership role profile.</h4>
<p>Your upward leadership role profile should not be static. Ideally, relationships between leaders at different levels improve, and their mutual commitment to shared values evolves. Consequently, the amount of energy devoted to the roles of buffer, translator, and advocate will become more balanced and shift away from more dysfunctional allocations, like excessive advocacy or heavy buffering. Consider the following tactics when periodically rebalancing your profile: </p>
<p><strong>Reflect on how your allocation maximizes both your professional fulfillment and organizational contribution.</strong> The ideal allocation of the roles you play depends on your specific situation, goals, and the managerial style of your supervisor. Ask yourself, “What is the optimal percentage of my energy that should be devoted to buffering, translating, and advocating to optimize my growth and organizational performance?” </p>
<p>As a general rule, aim to build relational trust so that the percentage of your time devoted to buffering decreases to 10%-20% while advocating and translating (40%-45% each) take on larger roles. This type of allocation maximizes professional development and organizational growth but leaves enough time for you to serve as a proper shock absorber for the inevitable miscues, frustrations, and rumors that occur.</p>
<p><strong>Test and recalibrate.</strong> Shifting your role balance requires courage, particularly when everything seems to be going well. And, as with any new skill, both mastering it and feeling comfortable with it will require some practice. For example, making the conscious effort to advocate more or to throttle back can be unsettling; monitoring results allows you to tweak both the skills and the balance among the three key roles. Other people on your team may notice your behavior change as well. If questioned, you could say, “I’m experimenting with a different approach to exert influence.”</p>
<p><strong>Entertain other opportunities.</strong> Our multiyear research consistently revealed that employees’ relationships with their direct supervisor greatly influence their level of job satisfaction, engagement, and productivity.<a id="reflink12" class="reflink" href="#ref12">12</a> So, assuming that you’ve tried the strategies above and your role profile as a buffer, translator, and advocate continues to be unfulfilling, it may be time to look for other job opportunities that will allow you to flourish. After all, successful upward leadership requires superiors who are also willing to change. </p>
<p>Leading upward represents one of the most significant and least appreciated talents you can master. It requires courage tempered with discretion, thoughtful advocacy coupled with inquisitive listening, and an eagerness to debate peppered with a zeal to engage in calculated silences. </p>
<p>Practicing when and how to use these polarized aptitudes allows leaders to seamlessly integrate the roles of buffer, translator, and advocate. Learning to do so may not bring many accolades or trophies attesting to your “upward leadership excellence.” But mastering upward leadership will, at the very least, ensure career fulfillment and, at the very best, organizational excellence. Think of midlevel leaders you know who rose through the ranks or ensured great outcomes for their teams: Most have mastered the difficult art form of respectfully and resolutely leading up. And perhaps improving your own upward leadership acumen will spur you to further cultivate a climate within your own team that encourages upward leadership, improving employee engagement and work outcomes.<a id="reflink13" class="reflink" href="#ref13">13</a></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/managing-up-a-skill-set-that-matters-now/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>The Trap That Skilled Negotiators Miss</title>
				<link>https://sloanreview.mit.edu/article/the-trap-that-skilled-negotiators-miss/</link>
				<comments>https://sloanreview.mit.edu/article/the-trap-that-skilled-negotiators-miss/#comments</comments>
				<pubDate>Sun, 12 Apr 2026 11:00:25 +0000</pubDate>
				<dc:creator><![CDATA[Monica Wadhwa and Krishna Savani. <p>Monica Wadhwa is an associate professor in the Department of Marketing and Supply Chain Management at Temple University’s Fox School of Business. Krishna Savani is a professor of management at Hong Kong Polytechnic University. Both authors contributed equally to this article.</p>
]]></dc:creator>

						<category><![CDATA[Decision-Making]]></category>
		<category><![CDATA[Human Psychology]]></category>
		<category><![CDATA[Managerial Psychology]]></category>
		<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[Negotiations]]></category>
		<category><![CDATA[Pricing]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Frontiers]]></category>

				<description><![CDATA[Brian Stauffer/theispot.com Say you walk into a car dealership determined to stay within budget. The salesperson shows you a car you like and quotes a price of $41,435. You know there’s room to negotiate, but when it’s time to counter, that first number quietly takes over. Your counteroffer, the concessions, and the final deal all [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/2026SUMMER_Savani-1290x860-1.jpg" alt="" class="wp-image-126477"/><figcaption>
<p class="attribution">Brian Stauffer/theispot.com</p>
</figcaption></figure>
<p><span class="smr-leadin">Say you walk into a car dealership</span> determined to stay within budget. The salesperson shows you a car you like and quotes a price of $41,435. You know there’s room to negotiate, but when it’s time to counter, that first number quietly takes over. Your counteroffer, the concessions, and the final deal all end up orbiting around $41,435.</p>
<p>That’s anchoring at work. In negotiations, first offers become psychological reference points, and people often fail to adjust far enough away from them, even though they are free to counter with any amount they want.</p>
<p>Although the anchoring effect is well documented, what makes this bias so frustrating is that it persists even among skilled and experienced negotiators. It shows up in procurement, strategic deals, and executive compensation conversations — any situation in which one party gets a number on the table early and the other party must respond under time pressure.</p>
<p>If you’re preparing for an important negotiation, the standard advice is familiar: Do your homework, know your target, and don’t reveal too much too soon. Those suggestions are useful, but none of them changes the fact that when the first offer lands, your mind starts thinking of counteroffers close to that number. Our <a href="https://doi.org/10.1016/j.jesp.2023.104575" target="_blank">recent research</a>, published in the <cite>Journal of Experimental Social Psychology</cite>, identified a simple way to reduce the anchoring effect when you don’t control the first offer: Adopt a <em>choice mindset</em> right when you see the first offer.</p>
<h3>The Power of Choice Reminders</h3>
<p>A <a href="https://doi.org/10.1016/j.obhdp.2019.05.003" target="_blank">choice mindset</a> is a state of mind in which people perceive the availability of more choices than they are presented with. When in this mindset, people are more likely to recognize the options available to them, including nonobvious options (such as delaying a decision or changing the structure of a deal), particularly in situations in which they feel constrained (such as difficult negotiations).</p>
<p>In everyday life, a choice mindset is the difference between thinking “I have no choice; I have to take what I can get” and thinking “I have choices and can even consider options that have not been presented to me.” The key insight is that <em>feeling</em> constrained is not the same as <em>being</em> constrained, and the subjective perception of choice can be nudged.</p>
<p>When someone quotes a price of $41,435, your brain starts searching for a reasonable counter in the neighborhood of that number rather than exploring the full range of possible counteroffers. Our research tested the idea that a choice mindset can widen that search. The mechanism is cognitive: A choice reminder leads people to think of other potential counteroffers, which weakens the anchor’s dominance and helps negotiators move further away from the first offer.</p>
<p>We tested the effect of this reminder across seven studies with U.S. participants recruited through online research platforms. The intervention was intentionally minimal. In the choice condition, after seeing a seller’s quoted price, participants received a simple reminder that they could choose their offer (“You can choose to offer any amount that you want. It’s your choice!” for example). The control condition received standard negotiation instructions without that explicit choice reminder. The practical translation is straightforward: A small prompt pushed people to counter more aggressively and rely less on the seller’s opening number.</p>
<p>For example, in one of the studies, based on a used-car bargaining scenario, participants were shown cars along with detailed information and were quoted prices ranging from $15,599 to $19,781 — intentionally precise numbers because <a href="https://doi.org/10.1037/0022-3514.81.4.657" target="_blank">prior research</a> suggests that precise first offers serve as potent anchors. As expected, the choice reminder reduced anchoring: Participants in the choice condition countered with lower offers than those in the control condition. The implication for leaders is that this isn’t just a trick for minor purchases; it can be applied in real negotiations, where the other side’s opening offer is presented as a carefully calculated figure.</p>
<h3>Having More Options Helps Negotiators</h3>
<p>Why is such a simple reminder so effective? We investigated the mechanism directly by measuring whether a choice reminder changes what negotiators think about before they commit to a counteroffer. In a study that tasked participants with negotiating the price of a painting, we asked participants to list all of the offers they could imagine making rather than name a single figure. The choice reminder led participants to generate significantly more counteroffer options. This matters because anchoring is fundamentally a cognitive spotlight problem: The anchor dominates the focus, and any nudge that expands the set of options you consider can loosen its grip.</p>
<p>We further tested whether simply thinking of more offers could trigger this de-anchoring by randomly assigning participants to generate either two or eight potential offers before making their final counter. Generating eight offers significantly reduced anchoring: Participants who generated eight offers produced a much wider set of options, and that variance statistically explained why their final counteroffers moved further away from the initial anchor. Ultimately, the way out of an anchor is not just grit or negotiation bravado; it hinges on widening the decision space before you make your move.</p>
<p>Negotiators in a choice mindset can avoid anchoring on first offers not only by generating more counteroffers but also by shifting the negotiation to other points of discussion. A book publisher negotiating with an agent who is asking for a $100,000 advance, for instance, can weaken the effect of the first offer by pivoting to negotiating other variables, such as royalty tiers and payment structures, thereby expanding the scope of the discussion and reframing an adversarial exchange into a collaborative problem-solving session.</p>
<p>This mechanism points to a simple practice you can use in negotiations. When the other side makes a first offer, you should aim to create a brief choice pause. This moment is not about theatrics; it’s about preventing the first number from becoming your default starting point. During this pause, try to think of multiple counteroffers that are within the bounds of reason, including a few that might appear aggressive but can still be defended based on relevant reference points. The goal is not to counter with the most aggressive number possible but to generate credible options that are not influenced by the first offer. If you have come to the negotiation table with your own first offer prepared, but your counterparty makes the first offer, rather than using their offer as a baseline for negotiations, counter with your preplanned first offer (and the accompanying rationale) even if it appears quite far from theirs.</p>
<p>This practice is even more effective when integrated into your preparation. Rather than just setting a single target and a walk-away point, prepare a set of counters that spans a meaningful range. This broader map protects you against the pull of a surprising anchor. By shifting the focus from a single point to a prebuilt range of possibilities, you change the tone of the internal deliberations before you ever respond externally.</p>
<h3>How Distractions Can Derail Negotiations</h3>
<p>There is an important caveat, and it’s one that will resonate with any executive who has had to negotiate a deal while juggling a dozen competing priorities: This strategy depends on attention and cognitive bandwidth. We predicted that if the choice reminder works by prompting people to think through more counteroffers, then it should be weaker when cognitive resources are constrained. That’s exactly what we found. In a study that used a divided-attention paradigm, participants negotiated while brand logos were flashed on the screen; they were asked to count certain logos, a task designed to mimic distraction and multitasking.</p>
<p>Under normal conditions, the choice reminder reduced anchoring. Under high cognitive load, the effect disappeared: Participants in the choice condition were just as anchored as those in the control condition.</p>
<p>This boundary condition has an immediate managerial implication. If you want to benefit from a choice mindset, you can’t treat negotiation as a task you do while triaging email, scanning Slack, or squeezing a call into a depleted part of your day. The moment you receive the first offer is exactly when you need enough bandwidth to generate alternatives. When you’re distracted, your mind reverts to the easiest available path, which is to negotiate around the anchor. In practice, that may mean setting norms (such as “We don’t counter on the spot for high-stakes deals”) or simply buying time (like asking for a short break or a follow-up call) so that you can do the brief work of generating your set of counteroffers.</p>
<p>We also tested and ruled out an alternative explanation that leaders sometimes assume: that a choice reminder simply makes people more self-interested or more motivated to win, leading them to make tougher offers. In one study, we measured motivation to get a low price and perceived task importance. Those measures did not differ between conditions, even though the choice reminder still reduced anchoring.</p>
<p>That pattern is consistent with a cognitive understanding of negotiation: The choice nudge changes how people think, not just how hard they want to bargain.</p>
<p>The implication is that a choice mindset is most useful when you already know which way you want to move (price down, salary up, liability down, scope up, and so on). When the right direction is uncertain, you should pair this approach with independent benchmarks and analysis so that you’re not simply widening the range without clarifying your strategic aim.</p>
<p>Anchoring is one of those biases that is easy to recognize in others but hard to avoid, especially because it operates in the flow of everyday work life. Yet the practical lesson from our research is encouraging: You don’t always need complex negotiation tactics to reduce it. Sometimes you just need a tiny moment of cognitive reframing. When you remind yourself that you have a choice, you’re more likely to generate alternatives, expand the range of possible counters, and move further away from the first number put in front of you.</p>
<p>The next time you receive a first offer, whether it’s from a supplier, a job candidate, a partner, or a counterpart in a strategic deal, try the following steps:</p>
<ol>
<li>Pause to consider the offer. Ask your counterpart for a moment to think.</li>
<li>Remind yourself: I have a choice.</li>
<li>Give yourself just enough time to create a few options for a counteroffer before you pick one.</li>
</ol>
<p>In many negotiations, that small shift can be the difference between your counteroffer being anchored to the initial offer and setting your own terms. Indeed, <a href="https://doi.org/10.1111/iere.12719" target="_blank">research has found</a> that the most likely outcome is the midpoint between the first offer and the first counteroffer.</p>
<p>Once you have a counteroffer in mind, you can draw from other research that has identified some best practices for ensuring that the negotiation that follows is successful. Aim to <a href="https://doi.org/10.1017/jmo.2020.47" target="_blank">shift the conversation</a> from haggling over a number to building a shared rationale for concluding a deal. When sharing your counteroffer, make the underlying criteria explicit (using market comparables, outside options, or precedents, for example), and invite the other side to respond with alternative objective criteria rather than a competing anchor. If several terms are on the table, move quickly from a single counter to two or three <a href="https://doi.org/10.1016/j.obhdp.2019.01.007" target="_blank">package offers</a> that are equally attractive to you but trade price against other issues; this helps surface priorities and unlocks value. Then concede slowly and deliberately, labeling each concession and tying it to a reciprocal move so that the negotiation stays organized around your counteroffer rather than drifting back toward the original offer. All of this is made possible by a brief moment of cognitive reframing — pausing to remind yourself that you have a choice — that loosens the anchor’s grip and lets you negotiate on your own terms.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/the-trap-that-skilled-negotiators-miss/feed/</wfw:commentRss>
				<slash:comments>1</slash:comments>
							</item>
					<item>
				<title>Rethink Responsibility in the Age of AI</title>
				<link>https://sloanreview.mit.edu/article/rethink-responsibility-in-the-age-of-ai/</link>
				<comments>https://sloanreview.mit.edu/article/rethink-responsibility-in-the-age-of-ai/#respond</comments>
				<pubDate>Thu, 09 Apr 2026 11:00:22 +0000</pubDate>
				<dc:creator><![CDATA[François-Xavier de Vaujany and Aurélie Leclercq-Vandelannoitte. <p>François-Xavier de Vaujany is a full professor in organization studies at Université Paris Dauphine-PSL and a senior researcher at DRM. Aurélie Leclercq-Vandelannoitte is a CNRS researcher at LEM — Lille Économie Management, which comprises Univ. Lille, the CNRS, and the IESEG School of Management.</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Decision-Making]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[Organizational Culture]]></category>
		<category><![CDATA[Risk Management]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Crisis Management]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Ethics]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leading Change]]></category>
		<category><![CDATA[Frontiers]]></category>

				<description><![CDATA[Mark Airs/Ikon Images Early one morning in 2018, a self-driving Uber vehicle fatally struck a pedestrian in Tempe, Arizona. The world had questions: Who was responsible? Was it the safety driver behind the wheel? The engineers who designed the algorithms? Uber’s leadership? Or the regulators who had allowed autonomous-vehicle testing? The inability to name a [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/2026SUMMER_Vaujany-1290x860-1.jpg" alt="" class="wp-image-126474" /><figcaption>
<p class="attribution">Mark Airs/Ikon Images</p>
</figcaption></figure>
<p><span class="smr-leadin">Early one morning in 2018</span>, a self-driving Uber vehicle <a href="https://www.nytimes.com/interactive/2018/03/20/us/self-driving-uber-pedestrian-killed.html" target="_blank">fatally struck a pedestrian</a> in Tempe, Arizona. The world had questions: Who was responsible? Was it the safety driver behind the wheel? The engineers who designed the algorithms? Uber’s leadership? Or the regulators who had allowed autonomous-vehicle testing? The inability to name a single culprit signaled a profound shift in how responsibility must be understood and attributed in the age of intelligent technologies.</p>
<p>As organizations deploy increasingly autonomous systems such as drones, trading bots, or algorithmic decision makers (like automated resume screeners or credit assessment tools), agency becomes distributed, emerging from the complex interplay of human and machine actions. Decisions, once linear and traceable, now unfold across networks of people and artificial intelligence systems, introducing new forms of influence and unpredictability.</p>
<p>For today’s leaders, this means that the old search for a culprit loses relevance. The real challenge is not to assign blame but to instead construct a shared narrative — to uncover not only what went wrong but how collective activities, assumptions, and technologies shaped the outcome. As our recent research, <a href="https://doi.org/10.25300/MISQ/2025/17970" target="_blank">published in <cite>MIS Quarterly</cite></a>, shows, forging organizational learning and resilience depends on this collaborative revisiting of how decisions happen and how stories of responsibility are constructed. We call this process <em>narrative responsibility</em>.</p>
<h3>Why Classic Models of Responsibility No Longer Work</h3>
<p>Classic theories of responsibility have rested on three core assumptions: that the world is fundamentally linear, with events following clear cause-and-effect logic; that decision makers act in a shared space and time, making the link between actions and consequences traceable; and that responsibility can be precisely attributed backward to an individual whose intentions and choices drive outcomes.</p>
<p>Consistent with these assumptions, when something goes wrong, organizations often enact traditional models of accountability by holding a senior leader personally responsible. For instance, after two fatal crashes of Boeing’s 737 MAX aircraft killed 346 people in 2018 and 2019, <a href="https://www.nytimes.com/2019/12/22/business/boeing-dennis-muilenburg-737-max.html" target="_blank">CEO Dennis Muilenburg</a> was swiftly dismissed as a visible response to the crisis. However, despite this action and promises of cultural change from his successor, the underlying quality and safety failures persisted — culminating in a door plug blowing off a 737 MAX midflight in 2024 and the departure of yet another CEO. Removing one individual rarely addresses the deeper, complex causes of organizational failure. </p>
<p>Such approaches to accountability have always faced limits, even before the rise of digital technologies. What’s new in the age of AI and automation is how much faster, more complex, and opaque decisions are becoming, making old models of accountability less tenable than ever. </p>
<p>Take the <a href="https://www.businessinsider.com/amazon-drone-crash-oregon-fire-2022-3" target="_blank">crash of Amazon’s Prime Air delivery drone</a> in Oregon in 2022. While <a href="https://www.faa.gov/uas/advanced_operations/nepa_and_drones/20250827_Amazon_Pendleton_OR_Written_ReEvaluation.pdf" target="_blank">official reports</a> focused on technical or operator errors, the reality is that accountability for such incidents is inherently distributed — across coders, approval teams, and operations or project managers. Actions and consequences are distributed in ways that old models of accountability simply cannot address. </p>
<p>This challenge demands a fresh approach to responsibility that moves from blame to narrative responsibility.</p>
<h3>Making Narrative Responsibility Real: Three Actionable Moves</h3>
<p>Translating narrative responsibility from theory to practice requires that leaders reframe how accountability is constructed, sustained, and experienced so that every incident becomes a catalyst for collective learning and continual improvement. To make this shift, organizations must embed narrative responsibility at every level. Here’s how leaders can put the principles of narrative responsibility into action:</p>
<p><strong>1. Map the real story — beyond the obvious.</strong> In the aftermath of an incident, organizational reviews — whether technical, legal, or managerial — often aim to converge toward a coherent causal account that enables closure and action. While such convergence is common and often necessary, it can also narrow the scope of responsibility by privileging stabilized explanations over contested or ambiguous ones. A narrative responsibility approach does not reject conventional audits but complements them by attending to how responsibility is constructed, anticipated, distributed, and gradually fixed through organizational storytelling, decision rationales, and silences over time.</p>
<p>Google’s response to its Gemini image-generation failure in early 2024 offers a partial model. When the tool generated historically inaccurate images, Google published a <a href="https://blog.google/products-and-platforms/products/gemini/gemini-image-generation-issue/" target="_blank">detailed public explanation</a> tracing the root cause to flawed diversity tuning and misguided model behavior. Meanwhile, in an <a href="https://www.npr.org/2024/02/28/1234532775/google-gemini-offended-users-images-race" target="_blank">internal memo</a>, CEO Sundar Pichai committed to structural changes, improved launch processes, and expanded red-teaming. This was genuine story mapping — naming what broke and why. </p>
<p>But a more comprehensive exercise might also have identified competitive pressure to ship quickly, organizational incentives that discouraged cautious testing, and the gap between known risks and the decision to launch anyway. Mapping the real story means going beyond the technical postmortem to surface the human and organizational dynamics that allowed the failure in the first place: understanding how assumptions, data, and organizational routines interact, and where ambiguity, unanticipated risks, and misalignment take root.</p>
<p><strong>2. Distribute ownership, not blame.</strong> In today’s complex AI-enabled organizations, decisions and outcomes emerge not from a single hand on the wheel but from dynamic interactions over time, which calls for a collective and distributed notion of responsibility. Real accountability depends on ongoing engagement and sensemaking across teams and functions. Too often, warnings or objections that were ignored or never voiced play as big a part as active missteps.</p>
<p>Forward-thinking organizations are creating formal structures, such as steering committees, incident review panels, traceability systems, and cross-functional advisory groups, to institutionalize narrative responsibility. These forums are designed as open, psychologically safe spaces where staff members at all levels can reflect on what happened, voice difficult truths, and collectively reconstruct how incidents unfolded. In health care, this shift is well underway: UCLA Health, for example, <a href="https://www.healthcareexecutive.org/archives/march-april-2020/the-promise-and-practice-of-a-just-culture" target="_blank">established a network</a> of trained culture champions and incident review committees that examine adverse events to surface systemic patterns and drive improvement across the organization. The aviation sector offers a proven model of this collective-learning approach: After an automation-related failure, airlines like Air France and KLM, in line with European Union Aviation Safety Agency regulations, convene multidisciplinary panels as part of their safety management systems. These panels, aligned with the principles of “just culture,” focus not on blaming but on extracting lessons and adapting systemically. This approach has demonstrably strengthened airline safety and customer trust.</p>
<p><strong>3. Embed reflection in everyday practice.</strong> For narrative responsibility to thrive, it must not be practiced only post-crisis; it must become organizational routine. Sustainable learning emerges when teams habitually review how stories of accountability are constructed — and reconstructed — across daily operations and the use of technologies like AI.</p>
<p>Some organizations add narrative review points to recurring meetings, asking, “What did we learn?” “Where did our assumptions or processes fail?” or “How did our actions contribute to the outcome?” (See, for instance, the chapter “<a href="https://sre.google/sre-book/postmortem-culture/" target="_blank">Postmortem Culture: Learning From Failure</a>” in Google’s book <cite>Site Reliability Engineering</cite>.) Others routinely include responsibility narratives in management reports, not only after incidents but as an ongoing practice — turning lessons learned into living documents that support continuous learning. <a href="https://www.academia.edu/41536010/Transformation_at_ING_A_Agile" target="_blank">ING Bank</a>, for instance, has built regular reviews and “retrospective learning sessions” directly into its <a href="https://www.bcg.com/publications/2018/human-resources-pioneering-role-agile-ing" target="_blank">agile routines</a>. After each sprint, teams discuss what went well, what could be improved, and how lessons learned from critical events can inform future work, to ensure that key insights connect day-to-day operations to broader conversations about ethics and risk.</p>
<p>When the three principles are enacted, they reshape not just day-to-day operations but how organizations collectively respond to failure at all levels. Returning to the opening example of Uber’s tragic self-driving car incident, the official response centered on individual fault: The safety driver was prosecuted, and <a href="https://www.nytimes.com/2018/03/26/technology/arizona-uber-cars.html" target="_blank">Uber halted its autonomous-vehicle program</a>. However, organizational and systemic factors like design decisions, safety culture, and regulatory gaps were extensively documented in the <a href="https://www.ntsb.gov/investigations/accidentreports/reports/har1903.pdf" target="_blank">official investigation</a> but, as far as we know, received limited attention in subsequent public and judicial responses. A narrative responsibility approach — one that maps the real story with all stakeholders and technologies involved, distributes ownership beyond blame, and embeds ongoing reflection — would have invited all key actors to collectively examine what shaped the anticipated and realized outcomes. While this wouldn’t have reversed past harm, it could have surfaced deeper lessons, enabled more meaningful accountability, and driven more systemic change for the future.</p>
<h3>From Blame to Shared Narrative</h3>
<p>Sustaining narrative responsibility requires more than scattered initiatives. It must become part of an organization’s DNA.</p>
<p>As businesses adopt AI agents, they can no longer rely on compliance teams or retroactive audits to assign accountability. Instead, establishing a shared practice of responsibility by constructing, questioning, and evolving the organizational narrative together is a strategic, forward-looking imperative for all leaders and teams.</p>
<p>Embracing narrative responsibility is critical for today’s organizations, but it’s not a panacea. There are real risks, particularly if the process is used to diffuse or obscure accountability — especially when leaders control the story. It cannot substitute for legal or regulatory obligations: Frameworks like the <a href="https://www.nytimes.com/2025/07/10/business/ai-rules-europe.html" target="_blank">European Union’s AI Act</a> remain essential safeguards. And when responsibility is distributed across organizations, constructing shared accountability is complex and demands intentional openness and collaboration. For narrative responsibility to be transformative, it must complement — never replace — robust ethical and legal standards.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/rethink-responsibility-in-the-age-of-ai/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Gain Consumer Insight With Generative AI</title>
				<link>https://sloanreview.mit.edu/article/gain-consumer-insight-with-generative-ai/</link>
				<comments>https://sloanreview.mit.edu/article/gain-consumer-insight-with-generative-ai/#respond</comments>
				<pubDate>Wed, 08 Apr 2026 11:00:30 +0000</pubDate>
				<dc:creator><![CDATA[Neeraj Arora, Ishita Chakraborty, and Yohei Nishimura. <p>Neeraj Arora is the Arthur C. Nielsen Jr. Chair in Marketing Research and Education at the University of Wisconsin-Madison’s Wisconsin School of Business. Ishita Chakraborty is an assistant professor of marketing and the Thomas and Charlene Landsberg Smith Faculty Fellow at the Wisconsin School of Business. Yohei Nishimura is a doctoral student in the marketing department at the Wisconsin School of Business.</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Customer Behavior]]></category>
		<category><![CDATA[Data-Driven Marketing]]></category>
		<category><![CDATA[Marketing Analytics]]></category>
		<category><![CDATA[Marketing Innovation]]></category>
		<category><![CDATA[Marketing Research]]></category>
		<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Analytics & Business Intelligence]]></category>
		<category><![CDATA[Customers]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Marketing]]></category>
		<category><![CDATA[Marketing Strategy]]></category>

				<description><![CDATA[Stuart Kinlough/Ikon Images Marketing leaders often face a dilemma: Deriving the insights they need in order to make confident decisions can cost tens of thousands of dollars and involve several months of data gathering and analysis, by which time market conditions may have shifted. Can generative AI fundamentally reshape this calculus? Drawing on recent research, [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/2026SUMMER_Arora-1290x860-1.jpg" alt="" class="wp-image-126470" /><figcaption>
<p class="attribution">Stuart Kinlough/Ikon Images</p>
</figcaption></figure>
<p><span class="smr-leadin">Marketing leaders often face a dilemma:</span> Deriving the insights they need in order to make confident decisions can cost tens of thousands of dollars and involve several months of data gathering and analysis, by which time market conditions may have shifted. Can generative AI fundamentally reshape this calculus?</p>
<p>Drawing on recent research, including our own study published in the <a href="https://journals.sagepub.com/doi/10.1177/00222429241276529" target="_blank"><cite>Journal of Marketing</cite></a>, as well as interviews with marketing leaders from major organizations, we have identified five ways that large language models (LLMs) are beginning to transform the marketing function and reshape the $153 billion insights industry.<a id="reflink1" class="reflink" href="#ref1">1</a> LLMs can compress marketing research timelines from months to days by introducing new approaches for rapid concept testing, such as the use of synthetic consumer “digital twins,” and by enabling qualitative research at scale. These techniques help companies harness unstructured data and enable smaller research teams to conduct much larger studies than was previously possible.</p>
<p>Organizations conduct marketing research to uncover consumer insights that guide strategic and tactical business decisions. Historically, insight generation has been a multistage, time-consuming, and labor-intensive process.</p>
<p>A typical marketing research pipeline includes problem definition, research design, study design, sample selection, data collection, data analysis, and insights delivery. Some aspects of marketing research are qualitative (such as interviews and focus groups), and others (surveys, for example) are quantitative in nature. These studies may be conducted by in-house marketing research teams or outsourced to agencies with specialized expertise. A research project can take a few weeks to several months, depending on its scope, and can cost anywhere from tens to hundreds of thousands of dollars.</p>
<p>Generative AI is making the consumer insight generation process substantially more efficient while also presenting novel ways to make the <a href="https://hbr.org/2025/05/how-gen-ai-is-transforming-market-research" target="_blank">research more effective</a>. In short, it is making the marketing research process faster and cheaper.</p>
<p>Just as AI-driven drug discovery has shortened the timeline from candidate screening to clinical-trial readiness, generative AI is shortening timelines from exploration to insights.<a id="reflink2" class="reflink" href="#ref2">2</a> AI is being integrated into the market research process with humans in the loop, as illustrated in the figure “How AI Is Integrated Into the Marketing Research Process.” In the early stages of research, problem definition and design are primarily guided by the decision maker. This is because critical factors — such as client experience, market intuition, and practical constraints like budget and timing — are human-led and challenging for AI to infer. Although the AI can help refine problem statements or brainstorm design options, its role during these early stages is typically minimal. In contrast, AI serves as an excellent collaborator in the remaining stages of marketing research.</p>
<p>In the study design phase of qualitative research, LLMs can be used to generate initial drafts of discussion guides for exploratory work. During sample selection, they can help identify respondent characteristics that align with the research goals. In the analysis phase, LLMs summarize long interviews, extract themes, and organize unstructured text into interpretable insights. As Paul Metz, CEO of C+R Research, said, “AI tools process and synthesize large volumes of transcript data within hours, detecting patterns and themes that previously took days to uncover.”</p>
<p>Such efficiencies allow teams to handle large volumes of qualitative data and work more productively. The speed and cost savings allow companies to shift from large, infrequent studies that take months to complete to smaller, more frequent studies aligned with decision cycles. This also empowers managers to test more ideas, iterate quickly, and adopt an experimentation-oriented mindset.</p>
<p>For quantitative research, LLMs can be used to quickly generate the first draft of a survey, report summary statistics, visualize the data, and debug analysis code as needed. These GenAI use cases allow the research team to delegate many of the rote tasks to the AI, use that time to focus on answering the business questions more effectively, and deliver insights faster.</p>
<div class="callout-highlight callout-highlight--transparent">
<aside class="l-content-wrap">
<article>
<h4>How AI Is Integrated Into the Marketing Research Process</h4>
<p class="caption">Early stages of marketing research are human-led; LLM-based AI tools can aid in the completion of tasks in later stages of the pipeline.</p>
<p><img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/SU_26_RF_Arora.png" alt="This figure shows the various stages of marketing research, with humans defining the problem and developing the high-level research design and AI working in partnership with humans through the later phases."/></p>
</article>
</aside>
</div>
<aside class="callout-info">
<h4>The Research</h4>
<p><span class="blue">&bull;</span> In their <cite>Journal of Marketing</cite> paper, the authors tested how well the large language model GPT-4 could replicate qualitative and quantitative marketing research projects conducted in 2019 by a Fortune 500 food manufacturing company and its market research partner.</p>
<p><span class="blue">&bull;</span> To replicate the qualitative study, the LLM was used to generate synthetic respondents that matched the profiles of human respondents in the original study. These synthetic respondents were asked a subset of the questions from the original study, and their responses were evaluated and compared against the original human responses by crowd workers on attributes such as depth, clarity, and insightfulness.</p>
<div class="callout-toggle">
<p><span class="blue">&bull;</span> The LLM and experienced human analysts from the partner company then conducted separate thematic concept analyses on the original human response transcripts, and their findings were compared in a blind evaluation by senior qualitative researchers.</p>
<p><span class="blue">&bull;</span> To replicate the quantitative study, which asked respondents to rate pet food product concepts, the LLM was used to generate synthetic responses to the same questions based on the demographic and screening data from the original study’s participants. The synthetic data was then compared with the original study’s results.</p>
<p><span class="blue">&bull;</span> Additionally, the authors conducted semistructured interviews with five industry leaders affiliated with the Marketing Leadership Institute at the Wisconsin School of Business to contextualize their findings: Chauncey Holder (senior expert, McKinsey), Chuck Hwang (vice president of analytics and insights, Procter & Gamble), Lisa Gudding (president, Ipsos), Paul Metz (CEO, C+R Research), and Kajoli Tankha (senior director of consumer, brand, and AI insights, Microsoft).</p>
</div>
</aside>
<h3>Generate Consumer Insights With Synthetic Digital Twins</h3>
<p>An important way in which LLMs enable data generation for consumer insights is using digital twins. A digital twin is a synthetic, data-driven representation of an object or process that enables simulation and what-if experimentation at low cost. A range of fields, such as drug discovery, climate science, and supply chain management, were using digital twins well before the rise of LLMs.</p>
<p>In marketing, LLMs are enabling the use of consumer digital twins — personas that can simulate decision-making, preference shifts, and responses to marketing stimuli — as testbeds for premarket experimentation.<a id="reflink3" class="reflink" href="#ref3">3</a> Instead of waiting for new data collection, analysts can simulate concept tests, assortment decisions, pricing moves, or campaign reactions in silico before making a significant financial commitment.</p>
<p>AI market research companies like Evidenza and academic initiatives such as Columbia University’s digital twin data set highlight the growing ecosystem around AI-driven consumer emulation.<a id="reflink4" class="reflink" href="#ref4">4</a> Evidenza partnered with a German information and communications technology company to study whether B2B buyers would trust the company to handle cybersecurity and cloud infrastructure for sensitive data. The research team used synthetic samples of decision makers to simulate a study and quickly test hypotheses around spending trajectories, the products most likely to drive vendor switching, and other questions. Validation against an existing human survey revealed strong correlations (0.75-0.88) across metrics, confirming that the synthetic samples provided directionally accurate insights. The synthetic approach enabled the B2B company to obtain valuable input at a fraction of the time and cost of traditional marketing research.</p>
<p>Consumer digital twins can be generated from demographic, psychographic, and behavioral data drawn from whatever internal and external sources a company has access to. To generate digital twins in our study, we obtained detailed profiles of respondents in our research partner’s original study, including their demographics and product use. We then prompted the LLM by providing it with the research context and the persona we wanted it to assume based on a human respondent’s profile. Finally, we asked it to perform a task, such as giving a detailed answer to an open-ended question or picking from multiple response options for a survey question. We generated hundreds of synthetic respondents in this manner using an LLM’s API.</p>
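<p>The persona-prompting workflow described above can be sketched in a few lines. This is an illustrative sketch, not the authors’ actual code: the prompt wording, the profile fields, and the <code>ask_llm</code> callable (which would wrap a real LLM API) are all assumptions.</p>

```python
def build_persona_prompt(profile: dict, research_context: str, question: str) -> str:
    """Compose a prompt asking the LLM to assume a respondent's persona."""
    persona = ", ".join(f"{k}: {v}" for k, v in profile.items())
    return (
        f"Research context: {research_context}\n"
        f"Assume the persona of a consumer with this profile: {persona}.\n"
        "Answer in the first person.\n"
        f"Question: {question}"
    )


def generate_synthetic_respondents(profiles, research_context, question, ask_llm):
    """Return one synthetic answer per profile; ask_llm wraps any LLM API."""
    return [ask_llm(build_persona_prompt(p, research_context, question))
            for p in profiles]
```

<p>Looping this over hundreds of stored respondent profiles yields a synthetic sample whose answers can then be compared against human data, as in the authors’ study.</p>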
<p>Our study found that LLMs can generate high-quality, information-rich qualitative data. LLM- and human-generated data look and feel remarkably similar, although LLM responses are superior in depth and insightfulness, since they are not constrained by time or by a limited willingness to elaborate. LLMs can also help reach niche or hard-to-reach segments, thus complementing human respondents in meaningful ways. For quantitative survey research, we found that an LLM replicates both the direction and the magnitude of human answers well.</p>
<p>Additionally, our findings revealed that digital twins add significant value in developing the research process itself. An LLM can be used to generate synthetic response data for a survey before it is administered to human respondents. By turning the typical research flow on its head, this “backward” marketing research approach allows researchers to test their survey design before fielding it.<a id="reflink5" class="reflink" href="#ref5">5</a> They can examine the synthetic results to answer fundamental questions, such as what quality of insights the survey is likely to reveal and which questions could be removed or added. In some circumstances, synthetic data may even obviate the need to conduct the survey at all; this could occur, for example, when one concept clearly dominates all of the concepts tested, or when the main insight from the survey is not new.</p>
<p>The gains from digital twin data are likely to be higher for hard-to-reach respondents, such as doctors or senior managers. Decision makers would much rather work with data from digital twins than have no data at all for these hard-to-reach groups. An attractive aspect of digital twins is that they do not get tired or have time constraints and can provide lengthy answers for many questions.</p>
<p>In addition to generating useful data, LLMs can be helpful in collecting and analyzing unstructured data from human or synthetic participants.</p>
<h3>Unlock Qualitative Research at Scale</h3>
<p>The traditional model for conducting marketing research is to begin with unstructured qualitative research (such as ethnographies, in-depth interviews, or focus groups) involving a small number of respondents and use it as the foundation for a large sample survey. Because unstructured, qualitative data involves a small sample size, is labor intensive, and is therefore expensive to collect and analyze, companies have historically relied more heavily on survey data. However, LLMs are proving to be useful in making qualitative data much easier to collect and analyze.</p>
<p><strong>AI as the data collection engine. </strong>An impressive use case for generative AI in data collection is as an interviewer of human respondents, where it is used to perform three key tasks:</p>
<ul>
<li><strong>Interviewer:</strong> The LLM follows a discussion guide to ask specific questions.</li>
<li><strong>Scorer:</strong> The LLM then evaluates the human answer against metrics such as clarity and depth, and provides a score on a scale of 1-100.</li>
<li><strong>Prober:</strong> If the evaluation score is below a preestablished threshold, the LLM asks the respondent to elaborate further.</li>
</ul>
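<p>The interviewer/scorer/prober sequence above amounts to a simple control loop. The following is a minimal sketch under stated assumptions: the <code>ask</code> and <code>score_answer</code> callables stand in for real LLM calls, and the threshold and probe limit are illustrative defaults, not values from the study.</p>

```python
def run_interview(questions, ask, score_answer, threshold=60, max_probes=2):
    """Ask each question; if the answer scores below `threshold` (scale 1-100),
    probe for elaboration, up to `max_probes` times per question."""
    transcript = []
    for q in questions:
        answer = ask(q)  # Interviewer: pose the question from the guide
        probes = 0
        # Scorer + Prober: evaluate the answer and ask for more depth if needed
        while score_answer(answer) < threshold and probes < max_probes:
            answer = ask(f"Could you elaborate on: {q}")
            probes += 1
        transcript.append((q, answer))
    return transcript
```

<p>The same loop works whether <code>ask</code> queries a human respondent through a chat interface or a synthetic digital twin, which is why the approach extends naturally to generating synthetic interview data.</p>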
<p>This three-step approach is not limited to conducting interviews with humans; it can also be applied to generating synthetic data. In testing this idea, we determined that synthetic data from AI-moderated interviews preserves the meaning and essence of human-generated data. Importantly, independent evaluation by human raters scored the AI-generated data significantly higher on measures of depth and insight.</p>
<p>AI-moderated interviews are powerful additions to a marketing researcher’s toolkit and permit data collection for qualitative research at scale. Unlike a human moderator, an AI moderator can collect detailed unstructured data (video, audio, or text) from many respondents across the globe, and at a fraction of the cost of a traditional in-person in-depth interview. Although an experienced human moderator may be better at reading respondents’ tone, body language, and visual cues, the advantage of AI moderators is the ability to quickly conduct interviews at scale, across geographical boundaries. AI moderators may offer an additional advantage in situations where humans feel uncomfortable talking about a product because of social desirability biases or fear of judgment.</p>
<p>Suppliers such as Outset and Nexxt Intelligence have commercially available products with AI-moderated functionality for conducting interviews. In one <a href="https://outset.ai/resources/stories/how-hubspot-ran-100-interviews-in-days-with-outset-and-shaped-their-ai-roadmap" target="_blank" rel="noopener noreferrer">case study</a>, Outset claimed to have completed 100 interviews in just a few days — a task that normally would have taken weeks. The resulting qualitative data revealed problems its client had not known existed and helped shape messaging for its brand campaigns. The AI moderator approach also gave the client the ability to conduct research continuously rather than just once or twice a year.</p>
<p><strong>AI as the analysis engine. </strong>The traditional approach to qualitative data analysis is largely manual and performed by expert analysts, who sort through large volumes of unstructured text and audiovisual data. The analysis task for text data, for example, involves thematic concept analysis, which includes reading the text, excluding fillers, highlighting key phrases or sentences, clustering them into related concepts or themes, iterating to remove repetitive ideas, and consolidating the themes into a concise summary. Our research finds that LLMs have made many of these analysis tasks easier to perform without sacrificing quality.</p>
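<p>The thematic-analysis steps just described (filtering fillers, highlighting key sentences, clustering them into themes) can be illustrated with a toy sketch. The stopword list and the word-overlap heuristic are deliberate simplifications for illustration; in practice an LLM or embedding model would do the semantic grouping.</p>

```python
STOPWORDS = {"the", "a", "an", "and", "or", "but", "i", "it", "is", "was", "to", "of"}


def highlight(sentences, min_content_words=4):
    """Keep sentences with enough non-filler words (a crude relevance filter)."""
    return [s for s in sentences
            if len([w for w in s.lower().split() if w not in STOPWORDS]) >= min_content_words]


def cluster_by_overlap(sentences, threshold=2):
    """Greedily group sentences that share at least `threshold` content words."""
    themes = []  # each theme: (set of content words seen so far, member sentences)
    for s in sentences:
        words = {w for w in s.lower().split() if w not in STOPWORDS}
        for theme_words, members in themes:
            if len(words & theme_words) >= threshold:
                members.append(s)
                theme_words |= words  # grow the theme's vocabulary in place
                break
        else:
            themes.append((set(words), [s]))
    return [members for _, members in themes]
```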
<p>At the process level, we find that humans tend to highlight more sentences than LLMs when analyzing data and that there is significant overlap in the sentences that humans and the LLM highlight as important. LLMs uncover most of the same themes that humans do and identify new themes that humans do not. Overall, LLMs are comparable to humans in identifying key ideas, grouping them into themes, and summarizing them. In practice, suppliers such as Voxpopme offer excellent tools to analyze multimodal (video, audio, and text) qualitative data. In one case study, Voxpopme claimed a 30% to 50% reduction in the cost of qualitative research projects, a 50% increase in the use of existing research insights, and an impressive 60-times-faster research analysis.</p>
<p>AI-enabled marketing research makes it possible to conduct both qualitative and quantitative research at scale, combining the depth of traditional qualitative research (small samples, deep insights) with the breadth of quantitative research (large samples, broad insights) in a way that was previously infeasible. Given LLMs’ effectiveness, low cost, and ease of use, we expect that they will play an increasingly critical role during the data collection and analysis stages for unstructured data. Companies, in turn, are quickly discovering how much more they can do with unstructured data than was previously possible.</p>
<p>In addition to traditional qualitative research data (from in-depth interviews and focus groups, for example), there is also rich information in unstructured data such as online reviews, call center transcripts, and social media posts. Chauncey Holder, a senior expert at McKinsey, noted that “AI agents can interrogate multimodal data — like social media, category features, and behavioral signals — to uncover unmet needs and emerging trends, identifying white-space opportunities more efficiently than traditional methods.” The inability to mine this information-rich data quickly and inexpensively was a constraint for marketing researchers because past natural language processing models relied heavily on expensive, labor-intensive human labeling.<a id="reflink6" class="reflink" href="#ref6">6</a> Pretrained LLMs have changed this by enabling low-cost semantic summarization, topic extraction, sentiment classification, and narrative insight generation from massive multimodal data far more easily than previously available tools could. This change marks a massive shift in how the field of marketing research can unlock the value of unstructured data to inform business decisions.</p>
<h3>Connect Siloed Data Using Retrieval-Augmented Generation</h3>
<p>Although today’s LLMs have an impressive set of capabilities, their performance on complex tasks that require domain knowledge (in-house marketing research by a brand, for example) can be limited. For situations in which the LLM lacks the requisite information, <a href="https://sloanreview.mit.edu/article/a-practical-guide-to-gaining-value-from-llms/">retrieval-augmented generation (RAG)</a> is a cost-effective method that can improve its output quality. RAG incorporates information from an external knowledge source, such as a company’s existing qualitative data, as input <em>in addition </em>to the user prompt.</p>
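<p>The core RAG pattern is simple: retrieve the most relevant material from an external knowledge source and prepend it to the user prompt. The toy sketch below scores relevance by naive word overlap purely for illustration; real systems use embedding-based vector search, and the prompt layout is an assumption.</p>

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]


def rag_prompt(query: str, documents: list[str]) -> str:
    """Augment the user prompt with retrieved context before the LLM call."""
    context = "\n".join(retrieve(query, documents))
    return f"Context from prior research:\n{context}\n\nQuestion: {query}"
```

<p>Swapping the overlap heuristic for a vector store is the usual production step; the prompt-augmentation structure stays the same.</p>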
<p>In our own research, we had mixed results when generating synthetic survey data using an LLM alone (without RAG). Although the LLM correctly captured the direction and magnitude of consumer attitudes, it exhibited two key weaknesses evident in many basic AI applications. First, the responses lacked heterogeneity; there was less variation in the AI’s answers compared with the human data. Second, the LLM answers lacked the internal consistency found in human answers; for example, the LLM’s answers did not rate attributes such as “healthy ingredients” and “safest food” similarly, as humans would. Both of those shortcomings were partially overcome when we used RAG to draw on existing qualitative data.</p>
<p>More broadly, RAG can be particularly useful for marketing research, where managers rely on multiple external information sources for decision-making. Effectively integrating siloed insight streams is a challenging task for marketing organizations: Survey trackers, customer relationship management (CRM) systems, social listening, and third-party intelligence rarely “speak” to one another in a cohesive way. LLMs using RAG offer “connective tissue” across disparate sources and enable cross-source synthesis. RAG can also be used to integrate multiple sources of information — such as in-house CRM, survey, and demographic data — and create an AI-enabled chatbot, or persona bot, that brand managers can use to gain a deeper understanding of their customers.</p>
<p>Lisa Gudding, president of strategic growth at consulting firm Ipsos, echoed the argument above, adding that “companies are now blending their own behavioral data with syndicated studies and trend signals that we supply to build richer, more dynamic insight ecosystems. This shift has given rise to data as a service [DaaS], where AI is enabling a new kind of consultative intelligence.” Market Logic and Stravito are two examples of DaaS-based knowledge management companies that integrate multiple sources of information to deliver insights to market researchers.</p>
<p>Although RAG is useful for integrating siloed, multimodal marketing data, it is not without limitations. First, it faces scalability challenges where retrieval accuracy and processing speed degrade as the knowledge base gets very large. Second, the inherent complexity and inconsistency of integrating real-time, multiformat marketing data require extensive preprocessing, which can restrict the volume and fidelity of information the LLM can effectively use. Finally, if the retrieval mechanism identifies information that is incomplete, is irrelevant, or lacks proper context, the quality of insights will be compromised, regardless of how good the LLM’s generative capabilities are.</p>
<p>On this issue, Chuck Hwang, vice president of analytics and insights at Procter &amp; Gamble, observed that “some of the knowledge created, especially in marketing and research, is not fully preserved [and] is often embedded in slide decks or shared verbally, making it difficult for AI to fully capture the institutional context.” Therefore, the effectiveness of a RAG system depends on the underlying information retrieval architecture and data completeness. When these infrastructural and data quality challenges are successfully addressed, this knowledge integration aspect of generative AI can prove to be a source of significant value creation.</p>
<h3>Human Oversight Is Essential</h3>
<p>While we see immense value in using AI for both qualitative and quantitative research, we find it essential to underscore that humans are still the drivers of the insight-generation process.</p>
<p>At the data collection phase of qualitative research, companies can design human-AI teams to generate insights efficiently and effectively. LLMs are excellent assistants that can take the first pass at analyzing vast amounts of text and audiovisual data. This gives the experts time for higher-order tasks, such as ensuring that the insights answer the research questions. In our research, we found that more unique insights emerged from AI-human hybrids than from the human-only or LLM-only approaches. Experienced qualitative researchers and LLMs complement each other well.</p>
<p>Much along the same lines, in quantitative survey research, an LLM can rapidly generate a strong first draft of a survey that can serve as an efficient starting point in the design process. A human expert can begin with this draft survey and perform tasks like adding skip logic and programming instructions, and assessing respondent experience, before signing off on the final version. In this reimagined research pipeline, the LLM focuses on the laborious, repetitive, and uninteresting tasks while the human expert uses the time saved to think more creatively about the business questions to be answered and the quality of the insights the research should deliver.</p>
<p>As Microsoft senior director of consumer, brand, and AI insights Kajoli Tankha noted, “In our own work, GenAI has become a powerful collaborator — accelerating synthesis, enabling scale, and broadening what teams can take on. At the same time, human expertise remains essential for framing the right questions and translating outputs into insight.”</p>
<p>As with any disruptive innovation, we encourage companies to be thoughtful and strategic when adopting LLMs for marketing research. To calibrate and uncover the true value of an LLM for their business, companies should run multiple validation checks before fully embracing LLM-generated outcomes. Such a test-and-learn approach may reveal areas in which an LLM shines and those in which it is inappropriate.</p>
<p>Researchers must develop AI literacy so that they know how to prompt, evaluate, and govern models, and their companies must implement quality guardrails, bias checks, and strict protocols for working with AI. The adoption of generative AI increases the value of human judgment by elevating the researcher to the role of curator of truth rather than just a producer of tables, graphs, and slide decks.</p>
<h3>GenAI and Marketing Research: Implementation Risks and Considerations</h3>
<p>Like any technology, generative AI comes with significant negative externalities. Many are structural (such as intellectual property violations, climate impact, and job displacement) and outside the scope of this article, but others are squarely related to marketing research and deserve full consideration within the insights function.</p>
<p>First, LLMs are prone to gender, race, and cultural biases because of the data on which they are trained. Modern-day marketing researchers should be trained to spot these limitations when incorporating LLMs into the research pipeline. This issue further reinforces the need for critical human oversight in marketing research.</p>
<p>Second, LLMs make it much easier not only to produce good marketing research but also to produce credible-looking marketing research of low quality. Most of the experts with whom we spoke expressed concern about the marketing industry’s growing appetite for speed at the expense of truly meaningful insights.</p>
<p>Third, there is some early evidence of entry-level job losses in marketing because of AI.<a id="reflink7" class="reflink" href="#ref7">7</a> The tasks that can most easily be automated by LLMs have historically served as training opportunities for junior talent. Most of the experts with whom we spoke echoed concerns about AI’s impact on the talent pipeline. Without hands-on experience in tasks that AI can automate, they noted, emerging talent may struggle to develop the deep analytical thinking and contextual judgment required to interpret data meaningfully and challenge assumptions.</p>
<p>Finally, although digital twins have a tremendous upside, they could be misused to generate fraudulent data that is hard to detect. For example, human respondents to online surveys could use LLMs to generate realistic answers in order to earn compensation.</p>
<p>Although the risks outlined above are real, they can be mitigated through the rigorous oversight and AI literacy we advocated for earlier. GenAI is a powerful ally of marketers, and the next generation of marketing research will be defined by a symbiotic partnership led by humans and fully supported by AI.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/gain-consumer-insight-with-generative-ai/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Disintegrating the Org Chart: ServiceNow’s Jacqui Canney</title>
				<link>https://sloanreview.mit.edu/audio/disintegrating-the-org-chart-servicenows-jacqui-canney/</link>
				<comments>https://sloanreview.mit.edu/audio/disintegrating-the-org-chart-servicenows-jacqui-canney/#respond</comments>
				<pubDate>Tue, 07 Apr 2026 11:00:48 +0000</pubDate>
				<dc:creator><![CDATA[Sam Ransbotham. <p><cite>Me, Myself, and AI</cite> is a podcast produced by <cite>MIT Sloan Management Review</cite> and hosted by Sam Ransbotham. It is engineered by David Lishansky and produced by Allison Ryder.</p>
<p><a href="https://sloanreview.mit.edu/sam-ransbotham/">Sam Ransbotham</a> is a professor in the information systems department at the Carroll School of Management at Boston College, as well as guest editor for <cite>MIT Sloan Management Review</cite>’s Artificial Intelligence and Business Strategy Big Ideas initiative.</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Cognitive Technologies]]></category>
		<category><![CDATA[Employee Experience]]></category>
		<category><![CDATA[Employee Motivation]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Organizational Behavior]]></category>
		<category><![CDATA[Skills & Learning]]></category>
		<category><![CDATA[Talent Management]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>

				<description><![CDATA[In this episode of the Me, Myself, and AI podcast, Sam Ransbotham is joined by Jacqui Canney, chief people and AI enablement officer at ServiceNow. Jacqui outlines how the software company has embedded AI agents into processes like employee onboarding to automate tasks, personalize experiences, and free up people’s time to focus on higher-value work. [&#8230;]]]></description>
								<content:encoded><![CDATA[
<p>In this episode of the <cite>Me, Myself, and AI</cite> podcast, Sam Ransbotham is joined by Jacqui Canney, chief people and AI enablement officer at ServiceNow. Jacqui outlines how the software company has embedded AI agents into processes like employee onboarding to automate tasks, personalize experiences, and free up people’s time to focus on higher-value work. She emphasizes that successful adoption of artificial intelligence requires strong change management, workforce training, and a focus on talent — not just technology — including companywide AI skill assessments and personalized learning paths. Tune in to learn why Jacqui sees AI as a human capital opportunity.</p>
<aside class="callout-info">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/MMAI-S13-E3-Canney-ServiceNow-headshot-600.jpg" alt="Jacqui Canney">
<h4>Jacqui Canney, ServiceNow</h4>
<p>Jacqui Canney is the chief people and AI enablement officer at ServiceNow, where she leads the enterprise software company’s talent strategies for improving employees’ experience and preparing them for the future workforce through the use of technology and generative AI.</p>
<p>Before joining ServiceNow in 2021, Canney served as chief people officer at WPP and Walmart. She previously worked at Accenture for 25 years. Canney currently sits on the board of directors for food delivery platform Wonder and nonprofit Project Healthy Minds. She’s also on the Institute for Corporate Productivity’s Chief HR Officer Board and Boston College’s board of trustees, and she cochairs the Boston College Wall Street Business Leadership Council.</p>
</aside>
<p>Subscribe to <cite>Me, Myself, and AI</cite> on <a href="https://podcasts.apple.com/us/podcast/me-myself-and-ai/id1533115958" target="_blank" rel="noopener">Apple Podcasts</a> or <a href="https://open.spotify.com/show/7ysPBcYtOPVgI6W5an6lup" target="_blank" rel="noopener">Spotify</a>.</p>
<h4>Transcript</h4>
<p><strong>Allison Ryder:</strong> We hear a lot about using agents for workflows. One company has 80,000 active workflows and believes it’s making innovation, employee experience, and other aspects of its business better with AI. Learn more on today’s episode. </p>
<p><strong>Jacqui Canney:</strong> I’m Jacqui Canney from ServiceNow, and you’re listening to <cite>Me, Myself, and AI</cite>.</p>
<p><strong>Sam Ransbotham:</strong> Welcome to <cite>Me, Myself, and AI</cite>, a podcast from <cite>MIT Sloan Management Review</cite> exploring the future of artificial intelligence. I’m Sam Ransbotham, professor of analytics at Boston College. I’ve been researching data, analytics, and AI at <cite>MIT SMR</cite> since 2014, with research articles, annual industry reports, case studies, and now 12 seasons of podcast episodes. In each episode, corporate leaders, cutting-edge researchers, and AI policy makers join us to break down what separates AI hype from AI success.</p>
<p>Hi, listeners. Thanks again for joining us. Today I’m talking with Jacqui Canney. She’s the chief people and AI enablement officer at ServiceNow. She leads all talent strategies for the company’s rapidly growing global workforce. We’ve known each other for a few years, and I’m glad the timing finally worked out for us to talk with our microphones on. Jacqui, thanks for joining us. </p>
<p><strong>Jacqui Canney:</strong> Thank you. Thank you, Sam, for having me. I’m really excited for this conversation. </p>
<p><strong>Sam Ransbotham:</strong> [It’s] going to be fun. Let’s start with ServiceNow. It’s huge, [an] S&P 100 [company], but some listeners might not be familiar with all that the company does. Can you give us a bit of background? </p>
<p><strong>Jacqui Canney:</strong> Sure. I’ll start with what our purpose is, which is to put AI to work for people. At that core, we are the AI platform for business transformation. If you think about automated workflows, you think about the ability to drive your business results, [and] it comes down to how you direct work. Our platform is literally built on AI so that we can help companies in — I think it’s now 80 billion — workflows that we manage that produce either better service, more analytics, all the things that companies are seeking to do with their organizations. I was a customer of ServiceNow, so that brought me to be really excited about working here, too. </p>
<p><strong>Sam Ransbotham:</strong> You really led with AI right there. How did that happen? We’re just [a] relatively few years into this whole AI world. How do you have 80 billion [workflows]? I thought for a second, that seemed like a huge number. How do you have that many workflows using AI already? </p>
<p><strong>Jacqui Canney:</strong> We have a very innovative company. It’s 22 years old, I want to say, and was built on how to help people experience work better. Fred Luddy, our founder, built the first workflow for a colleague who was struggling with the swivel chair of getting work done and Excel spreadsheets, etc. So at our core, innovation has been something that we’ve always tackled. You’ve seen the movement — analog to digital, [on-premises] to cloud, cloud to mobile, now this conversation to AI — and ServiceNow has had these amazing engineers and product leaders who’ve been thinking about this for a long time, even before people talked about ChatGPT. </p>
<p><strong>Sam Ransbotham:</strong> Maybe give us an example. What is one of these 80 billion [workflows], and how is artificial intelligence involved in that? </p>
<p><strong>Jacqui Canney:</strong> I’ll take one in my area that I see a lot. When somebody gets hired to work here, there [are] lots of steps to onboard people. That can be a lot of conversations. It can be different managers, different departments. But with our onboarding platform, you say, “Hey, this is the person [who’s] starting. This is the kind of computer that they want. This is the kind of cellphone that they need. This is the training they need to have happen, the proof of identity so that they can be paid, that they get paid, that they show up, and they’re feeling productive” before they even start on that day. And then [you include] what happens post that onboarding because there [are] follow-ups, [such as] reminding a manager, “Hey, so-and-so started 10 days ago. Why don’t you check in?” Or, “So-and-so got their first kudos, a recognition, [so] why don’t you check in and see how they’re doing?” It’s an automated workflow that takes [out] the guessing and makes the manager and the employee really feel a relationship right at the gate, that’s personalized. </p>
<p><strong>Sam Ransbotham:</strong> In that process, then, where was artificial intelligence, or how does that fit into all those steps? </p>
<p><strong>Jacqui Canney:</strong> You can have an agent [that] if I say, “I want a MacBook,” it makes the order. The agents get the order done. The agents get the order shipped to your house. Agents [are] working in the background while people are able to focus on what they need to, which is welcoming this great new employee. </p>
<p><strong>Sam Ransbotham:</strong> That seems like a good separation of tasks, the classic getting rid of the dirty, dull, and dangerous parts [in favor of] the things that humans are better at. Tell me a little bit more about how you would organize a process like that. I think I would be tempted to get whatever computer or phone I wanted without oversight perhaps. How do you integrate that? </p>
<p><strong>Jacqui Canney:</strong> It’s a really great question because it does bring it down to [the] practical, like, how do you get this work done? There’s governance built into the platform. You’re creating that governance as a leader when you implement the technology. Price points, options, whatever it is that your company is governing, get embedded into the choices. But also, there’s design, which is something that maybe not everybody thinks about when you talk about platform technology. But designing the experience is equally as important so that it’s not just about, “Here’s what the CIO is trying to get done. Here’s what procurement is trying to get done. Here’s what HR is trying to get done.” But [by] putting the person at the center — the manager and the employee — and designing a process that’s really great for them — and we also have it so you could do it on your phone — at its core … the right governance [is] around it. </p>
<p>Then, if something goes wrong, because that can happen too, what’s the feedback loop if the wrong computer came, or it didn’t come in time? Or [how can we] get the signal so that we can continue to improve our process, and certainly find where a process flow might break down so that [we] can correct that in the tech? </p>
<p><strong>Sam Ransbotham:</strong> That makes sense. Let’s go back to your new-hire example. How much do people know that artificial intelligence is involved in this process? Or where is it obvious, and where is it not obvious? </p>
<p><strong>Jacqui Canney:</strong> It’s becoming less obvious, is what I would say. We’ve acquired a company called Moveworks, which is in and of itself a front-door conversational experience. </p>
<p>Earlier versions of our platform would feel potentially more like I’m interacting with technology. I’m searching. I’m getting directed to [knowledge base] articles, things that were all easier [but] not perfectly seamless. Now this conversational layer, which we’ve implemented for all our people, is like going to search. You go to it and say, “Hey, I’m meeting with Sam. What was the last meeting that we had?” It’s literally having this conversation. So I think it’s becoming less clear if you’re talking to a person or you’re talking to tech, which is making it really easy to get to the answers that you want. </p>
<p><strong>Sam Ransbotham:</strong> Actually, one of the things I think about — and maybe this is just my own personal weirdness — [is] I feel like I interact with people differently than I do with machines. For example, if I was talking to you about getting a computer, I might say, “Oh hi, Jacqui, how are you doing? It sure is snowy here. It’s really cold. I was thinking about getting a computer.” On the other hand, if I was talking to a machine, I might be a little bit more brusque and say, “Buy machine now.” Maybe the robot overlords will come back and get me for that. But it seems like there could be some efficiency in being transparent: Hey, you’re talking to a machine; you can drop the conversation about the weather, perhaps, or the social glue. </p>
<p><strong>Jacqui Canney:</strong> It’s funny. You can sort of have social conversations with the machines, too. It can recognize if you’re stressed or in a hurry [by] the tempo of our voices, and it directs to responding in that way. You also can find a way out, to talk to a person. You can click through to get to a person. That way, you can get out of whatever chain of conversation that you’re in. </p>
<p>One thing you bring up, though, that I do worry a little bit about us as humans: If we are abrupt with the machine, are we going to forget and be abrupt with each other [when] we’re talking to [another] human? I think that’s at the core of what I’ve been spending a lot of my time on; there’s a lot of technology talk. There [are] 80 billion workflows just with us. But without getting the change management of the users right, whether they’re your employees or your customers or the end users of your technology … that’s what I’ve been thinking about. </p>
<p><strong>Sam Ransbotham:</strong> I haven’t thought about the spillover the other way, but that’s a good point, that maybe I’m becoming brusquer to my humans. Well, now I’ve got a new thing to worry about. </p>
<p>How much do these employees need to know about artificial intelligence? What’s your thinking on how much awareness people need to have of these technologies in order to be successful? </p>
<p><strong>Jacqui Canney:</strong> We’ve invested quite a bit in this space. Every person who works here — we’re 30,000 people now — has had AI training, and we’ve been doing this for a couple of years. One, because the products we build, no matter what part of the company you’re in, understanding what AI is, [having] a common vocabulary about that, that was really important to our CEO and our leadership team for the company. </p>
<p>We’ve invested [in] having, from speakers to AI Day to different kinds of training, and we’ve evolved quite a bit now, where we’ve assessed the whole company on AI skills, and it’s not like one size fits all. Different roles have different expectations and different experiences, so we’ve customized the assessments and built personalized learning journeys so that people can grow their skills. And we’ve seen our organization really lean in and be excited about that. </p>
<p>We also celebrate people who use AI tools really frequently because they’re learning from each other. I want to eliminate as much fear in the workforce about what AI is and what we’re using it for, and how we can use it in the future. I think by being transparent, by offering opportunities, by giving people learning experiences, even for myself, I’ve been seeing more confidence grow. We ask our people all the time how they are feeling. They feel pretty strongly that they’re getting the tools that they need. So we’re going to keep at it. </p>
<p><strong>Sam Ransbotham:</strong> There [are] like four or five things that I wanted to follow up on there. You mentioned lots of good topics. Maybe the first one I’ll start with is: How much do people need to know? Vocabulary, I think, was one of the things you mentioned, which makes sense. We need to be able to talk about technology in ways that make sense, to communicate with each other, but what are these skills that people are trying to pick up on?</p>
<p><strong>Jacqui Canney:</strong> Prompt engineering is something we all have been talking about. It is not something we talked about that long ago, right? You have a team like in my organization, which is a human resource people team, and we have implemented, obviously, our own tech, and we were able to come to double the productivity of what my team could do. It was 1-to-400 to 1-to-900 that we were serving because of the tech. Now, I didn’t want people to be displaced because of that. But then they became better at a couple of things. One is prompt engineering so that they could help create better questions that they’re asking so that we can get better answers and then train AI so that it continues to be better answers. Over 90% of our inquiries that go to our Now Assist, which is our own tech, get answered by the tech. </p>
<p>The more we can make that smarter and better, the more people will be happier to use that. And then we also created new roles. [These are] adjacent skills that I’ve seen the team lean into. We have product engineers and product designers inside HR. We didn’t have that before. We’ve built a new role called forward-deployed engineer, which is somebody who is quite technical but has an interest and a desire, and is really great at talking about business problems and business transformation, and marrying those conversations together. </p>
<p>So you can imagine talking to an HR lead [or] a CIO somewhere out there using our tech, and they know they have this problem they want to solve or this opportunity to fix. Now we’ve built a workforce that can go meet with that team, talk about their problem, and then say, “Here’s how we suggest the technology can solve the problem,” versus saying, “Here’s the technology. Work around it, and work it into your solution.” It’s more in service of the human. </p>
<p><strong>Sam Ransbotham:</strong> Those are some interesting numbers, like the 1-to-400 to 1-to-900, and your first reaction would be “OK, yeah, that’s going to lead to reduction.” But as you point out, there [are] just a bunch of new tasks that are coming up and new roles that are coming up as quickly as maybe whack-a-mole. You’re trying to eliminate some work, and new work is getting created. </p>
<p>What’s your sense of the net? If we’re moving from reducing things that people are needing to do, by the two-to-one-ish type of number that you mentioned, but you mentioned new roles, too. It seems like a big deal if that is a one-to-one swap, a one-to-a-half swap, or a one-to-two swap. That’s big. Which direction is it right now? </p>
<p><strong>Jacqui Canney:</strong> A crystal ball would be really good on that one right now. I think every company is tackling it in their own way. I think that, at its core, some companies have gone after this with a cost-cutting lens, and I don’t think that’s the way I would start if someone asked me. I really think the opportunity, as [it] has [been historically], is technology provides capacity and creativity, hopefully, or new adjacent business lines, the things that can grow. I’ve seen it not just here at ServiceNow but even in my old job at Walmart, where you could see where you implement this powerful tech, but it does create expansion. The hard work is the work redesign that has to happen. And that’s where leaders, CEOs, chief people officers really should be spending their time, because I think whether it’s a one-to-one or you’re flat or you’re growing, you’ve got to design that future. And if you don’t design it, you’ll lose the capacity. </p>
<p><strong>Sam Ransbotham:</strong> I think I was too sort of crude to say, “Is it net plus or minus?” I’m sure in many areas it’s plus and [in] many areas it’s minus. And then we’re looking at the net of the net across a big aggregate — the crystal ball is not quite polished enough for that. </p>
<p>I think this training program you mentioned is part of the ServiceNow University. I like the idea that you mentioned the skill assessment as part of that, but at the same time, you also mentioned just a second ago that prompt engineering wasn’t something you were paying attention to a couple of years ago. </p>
<p>So we have the changing skills of people and the changing needs of people. How often are you measuring these things? How are you measuring these things? The details on this seem very difficult with 30,000 people in a rapidly changing world. </p>
<p><strong>Jacqui Canney:</strong> Well, we have jumped on this with all of ourselves. The board, our CEO, the leadership team, everybody is fully supportive of the changes that we’re making and that we’re driving inside our own company. This assessment of the 30,000 people was important. I felt like we needed an X-ray of the company to know where we were, to be able to go forward. We didn’t use it as anything scary or a negative. It was really meant to be like we’re all going to get smarter about what we know we have as skills and what we know we’re going to need. </p>
<p>Then if you take what we’re going to need, you’re able to say — and this is with the help of Pearson; they’ve been a good partner to us — “Here [are] the jobs, here [are] the skills, here [is] the new work that you’re planning, and then here [are] the gaps you need to close.” So it’s very personalized, but it’s also how we’re moving our change management through as a company. </p>
<p>I have other HR leaders [who] I really love working with, and we all talk all the time about how they’re tackling it. And I think, commonly, that’s what I’m hearing my peers talk about — how we’re sort of going after it. It’s like your X-ray, your gaps. What can you build? What’s adjacent? Who can you train? Who can you grow? Who do you have to hire? </p>
<p><strong>Sam Ransbotham:</strong> Actually, do you let outsiders take this? I’m ready to sign up because … I screw up a lot of stuff, and [it] can be so nice to know ahead of time. … I always think about this in one incremental hour. If I had one extra hour, what would I do with that hour? Lots of times, I just don’t know what the right thing to learn is or the new thing that would help the most. And I’m fascinated by the promise that these technologies could help us learn about these things. </p>
<p><strong>Jacqui Canney:</strong> ServiceNow University [has] a lot of free courses out there. You can go check it out. I’d love your feedback about it. </p>
<p><strong>Sam Ransbotham:</strong> Great. So you gave me homework. That’s no fun. </p>
<p><strong>Jacqui Canney:</strong> There you go. </p>
<p><strong>Sam Ransbotham:</strong> One of the things you’ve talked about is soft skills. … [For] the idea of a soft skill versus hard skill, first, what are your thoughts on the relative importance of those two types of skills going forward? </p>
<p><strong>Jacqui Canney:</strong> I have always believed that critical thinking, the ability to pattern recognize, those things that you learn, whether it’s through your work, your university, all the experiences that you have, are never more important than they are now. And I know lots of people are talking about that, and it’s not meant to be an easy thing. Not everybody has those skills. But people can be nurtured, I think, to better learn how to create those skills. </p>
<p>One of the things that I’ve been really thinking about is we talk a lot about leadership, and we’ve all talked about leadership for a very long time. But now, more than ever, the ability to find the people [who] have the wisdom is really important. If you’re leading a company or you’re leading a team, it’s never been harder. Everything’s really complex. People are on the road. People are hybrid. We still have some COVID stuff that we’re dealing with. Now you have this really important technology that’s kind of hit everybody’s desk. But at the same time, the world is moving faster than ever. </p>
<p>So how do you have the confidence to literally pattern recognize, have the wisdom to say, “These are the use cases I want to go after,” as opposed to, “These are just the use cases that everybody’s bringing to me”? [Those are the] … really important, nontechnical capabilities we all should be focused on growing. </p>
<p><strong>Sam Ransbotham:</strong> It was interesting. We had Taylor Stockton, who’s a former student of mine, on a previous episode. He works at the [U.S.] Department of Labor, and we were asking [about] hard skills, soft skills. He talked for a bit about soft skills and the importance of that, but then at the same time, he said [that] we also need those technical skills. So what’s your take? If I have one hour this afternoon, should I spend it on developing a soft skill or a hard skill? Or don’t pick on me. [Let’s say] one of my students wanders in here. What’s the one hour? Where do we spend it? </p>
<p><strong>Jacqui Canney:</strong> I might say 30 minutes on what they are curious about with the tech. Is it protocols? I think protocols [are] going to be the next thing [we’ll be] talking about. How do you govern the agents inside a company? That’s really important. Understanding the nature of how you build and create protocols is not something you need to be a computer science person to do. </p>
<p>And then the second is, I think, the ability to drive this critical thinking: I’m absorbing problems. I’m absorbing information. How am I able to take that and process that into an idea or a point of view? I think the world of my university, and that was a lot of how we were taught, not just to be great accountants or great finance people, but also to be great thinkers. Having that be part of what you’re thinking about if you have one hour, I think, is worth it. </p>
<p><strong>Sam Ransbotham:</strong> I have a ton of students who are about to graduate, and they’re talking about difficult job markets. I know you get asked this probably every time someone talks to you, given your role, but what should students who are close to graduation be doing? What should they be thinking about as they enter this job market? </p>
<p><strong>Jacqui Canney:</strong> I think two things are really important. One is, what are the skills that they’re taking out of their university experience? When you go to work at a company, they’re going to teach you a lot. They’re going to teach you how to work. They’re going to teach you a lot about that company, about how they work. But if you can come out of school with one great skill that you’re super proud of: It could be you’re a great writer. It could be you’re a great coder. It could be you are a great speaker. Whatever it is, but really know what that skill is and how you’re going to sell that to an employer that you’re going to work at. You’re probably more AI native than anybody else in the company because of the nature of how you’re growing up and the world that you’re in already. So that’s also on your side. </p>
<p>But the second thing is growth mindset. Demonstrate your ability to learn and change and be agile because I’ve also said, and I don’t have this written down because somebody told me, but the companies that win are not going to be the ones with the best language models; they’re going to be the ones with the most adaptive, agile workforces. So I look for those kinds of qualities, especially the early-in-career talent that I get to meet. </p>
<p><strong>Sam Ransbotham:</strong> I like that. It’s hopeful. I think your point about how well prepared students are — I love job descriptions that have something like, “needs 30 years of experience with large language models” — it’s just not possible. So the students graduating now are just as familiar with this technology as many of us are, or probably more so. … I was thinking about blind spots. You’ve [now worked] at Walmart, WPP, [and] ServiceNow. What are people getting wrong? What are leadership blind spots here when people are thinking about artificial intelligence? </p>
<p><strong>Jacqui Canney:</strong> Well, I think focusing on the tool and not the talent is one of the top things. People really get wrapped up around [questions] like, “What’s my AI strategy?” [but] it’s really your business strategy. Then, how does the business use technology, but certainly, how does it bring its people along with it? That gets missed a lot. … I talked about the cost-cutting exercise; I think people get that wrong when they lead with that. Waiting for a perfect plan is another one I think people get stuck in. I know sometimes even I do, right? It’s like you don’t have this all figured out. Like you said, 30 years of LLM experience — where’s that going to come from? It doesn’t exist yet. </p>
<p><strong>Sam Ransbotham:</strong> I feel seen with that one. </p>
<p><strong>Jacqui Canney:</strong> I think people skip the hard parts. They skip the culture. They skip the trust. They skip the people part. I feel like that’s the stuff that I’ve seen go wrong. </p>
<p><strong>Sam Ransbotham:</strong> I think there are a lot of ways to screw this up, too. I mean, there are a lot more ways to get things wrong than there are to get them right. Your idea of not having a perfect plan to start with feels wrong. I was reading something that … you had AI write a poem for [a] family trip. I was thinking about that. It struck me as funny because, just for a cringe moment, I actually had my class write a theme song for our ML (machine learning) class: What would generative AI say is a good theme song for our class? We did not all recite the class anthem afterward. But you said that surprised you as something that the tool could do. What’s surprising people about what these tools are capable of? What aha moments are people having with these tools? </p>
<p><strong>Jacqui Canney:</strong> I think it’s the ability to be better prepared for X meeting. … We have seen in our sales organization where they have access, obviously, to all the data about our customers, about the work that they’ve been doing. Now, how to prepare for those meetings in minutes and not days has been, I think, really exciting and eye-opening. People are loving that because it’s easier to get to answers quicker. </p>
<p>The other thing that I saw that people were super excited about, especially in our sales organization, [is] it went from four or five days to find out what your commission is going to be to eight seconds. So if you have a workforce that’s motivated to know that, making it easier has been a great, well-received use of what the technology can do in the day-to-day. I probably could think of a bunch more, but those two come to mind first right now.</p>
<p><strong>Sam Ransbotham:</strong> Actually, I like the quick feedback part because … earlier you were talking about assessing people’s skills, and I was thinking about how in the education world, we do a fair amount of testing. And one of the things I was thinking as you were saying that is that students actually don’t dislike tests.</p>
<p>Now, I’m sure people are freaking out right now as I’m saying that. But people like to get feedback about what they know and what they don’t know. People like quick feedback. It’s the same thing with your commission example. If you do something and you get feedback quickly, that helps us reinforce it, helps us know what to do better. HR is historically driven by the idea of the annual performance review — 364 days ago, what did I do right or wrong? I don’t learn very well from that. You were mentioning commission, but that’s an example of quicker feedback. Both of those — and I’m going to push back a little bit — feel productivity enhancing, but we said earlier that there’s a bit of a trap in getting too sucked into productivity. Faster meeting preparation and readiness are good, and faster feedback is good, but both of those feel like productivity. What would be the missing thing that we would want to add to make this about more than productivity?</p>
<p><strong>Jacqui Canney:</strong> I think it would mean the sale got better, bigger. If I had all the things I maybe wouldn’t have known before (what did they say on LinkedIn? what’s the stock price doing?), there’s an opportunity not to be incremental but to be more impactful. And maybe the sales commission one is a little bit about productivity, but I think it’s also highly motivating. That might get the salesperson to say, “If I could just sell this much more, look at what my commission could be,” and then lean into being better prepared for that. </p>
<p>I think, too, that I’ve seen us think about leadership in a different way that I’m not sure without AI we would have had the capacity to do. We have really stepped up [on] what does it mean to be a leader here? And [we have] invested in that [more] than I’ve ever seen because we know that that’s really the unlock for the organization. I think because of AI maybe creating the capacity, even for my own team, to be able to dream a little bigger about … the future of leadership and this concept of wisdom, I see that opening too. And I would say this lane of opportunity is what we still haven’t figured out yet. What are we going to build? Are we going to build a new business? Are we going to have totally different companies that are created? That’s what I think we’re on the cusp of figuring out. </p>
<p><strong>Sam Ransbotham:</strong> You’ve touched on this. You’re obviously from a human resources background, but you’re talking about a lot of stuff that feels like you’re stepping on some IT toes here. So, [what] is this relationship between these formerly quite separate parts of organizations going to be, as you’re using more of these tools? </p>
<p><strong>Jacqui Canney:</strong> I think AI is disintegrating the org chart, and not just between HR and IT. It’s sort of coming across a bunch of places because it just doesn’t see [it] that way. It doesn’t see silos, right? It sees across. Leaders are having to get comfortable with that. It doesn’t mean that the roles aren’t important. It’s just that they’re changing. </p>
<p>Here at ServiceNow, I was promoted to AI enablement officer, along with the chief people officer role just a little bit over a year ago. That was because [CEO] Bill [McDermott] felt like this is truly a human capital moment. It doesn’t make me in charge of it all. I’m the team captain. I’m not alone. But I have to sort of [keep] score of how we’re doing with that. And I think that says a lot about what he sees as a guy who’s seen across technology for decades of where change really [goes]. </p>
<p>Now our CIO, our product team, we work really closely, and we have agreed that the employee experience sits primarily with me and my team. So how technology, how processes, how policies, how all that impacts the experience, we’re kind of like the filter on it, and we work really closely together. We have a very transparent look at what use cases are in productivity across the company. Who’s driving ROI, who’s not? We have a control tower for that. I think that kind of keeps us all square because we can see very openly what’s happening. But yeah, HR roles are totally evolving. If you’re a [chief human resources officer] who’s really focused on process and policy and annual cycles, the CIO is going to come for you. </p>
<p><strong>Sam Ransbotham:</strong> We have a little segment where we ask quick questions. Just answer [what comes to] the top of your mind. What about artificial intelligence is moving faster or slower than you expected? </p>
<p><strong>Jacqui Canney:</strong> Moving faster in headlines, moving slower in, I’ll say, scalability. </p>
<p><strong>Sam Ransbotham:</strong> Getting something across an organization, I’m sure you think about that a lot. </p>
<p><strong>Jacqui Canney:</strong> Yeah. </p>
<p><strong>Sam Ransbotham:</strong> How are people using AI poorly? </p>
<p><strong>Jacqui Canney:</strong> I think they’re writing poems like I did. </p>
<p><strong>Sam Ransbotham:</strong> All right. There you go. </p>
<p>What do you wish that AI could do better? </p>
<p><strong>Jacqui Canney:</strong> I wish it could … I think it’s getting there, but [it could] be better [at] context and memory. But I think that’s maybe even more about how humans are using it. [And how can] I truly make AI be a digital twin of me? I haven’t figured that out yet. </p>
<p><strong>Sam Ransbotham:</strong> Are you finding because of AI you’re spending more time with technology or less time with technology? </p>
<p><strong>Jacqui Canney:</strong> I think it’s just in the flow of work now for me. I’m not really discerning [whether] I am in the tech or not. </p>
<p><strong>Sam Ransbotham:</strong> Well, this has been fascinating. I think one thing we’ll come back [to] is this idea that the use of artificial intelligence is eroding these org charts. I think that’s a really interesting high-level thought to come away from this. Thanks for taking the time to talk with us. </p>
<p><strong>Jacqui Canney:</strong> Thank you, Sam. This was great. </p>
<p><strong>Sam Ransbotham:</strong> Thanks for joining us today. On our next episode, I’ll talk with Peter Koerte, chief technology officer at Siemens, and we’ll talk about industrial AI. Please join us.</p>
<p><strong>Allison Ryder:</strong> Thanks for listening to <cite>Me, Myself, and AI</cite>. Our show is able to continue, in large part, due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/audio/disintegrating-the-org-chart-servicenows-jacqui-canney/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>How to Reap Compound Benefits From Generative AI</title>
				<link>https://sloanreview.mit.edu/article/how-to-reap-compound-benefits-from-generative-ai/</link>
				<comments>https://sloanreview.mit.edu/article/how-to-reap-compound-benefits-from-generative-ai/#respond</comments>
				<pubDate>Mon, 06 Apr 2026 11:00:55 +0000</pubDate>
				<dc:creator><![CDATA[David Kiron and Michael Schrage. <p>David Kiron is the editorial director, research, of <cite>MIT Sloan Management Review</cite> and program lead for its Big Ideas research initiatives. Michael Schrage is a research fellow with the MIT Sloan School of Management’s Initiative on the Digital Economy. His research, writing, and advisory work focuses on the behavioral economics of digital media, models, and metrics as strategic resources for managing innovation opportunity and risk.</p>
]]></dc:creator>

						<category><![CDATA[AI Strategy]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Business Value]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Value Creation]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Minneapolis Institute of Art In domain after domain, AI has compressed work that used to be expensive — generating drafts, code, prototypes, and analyses. The marginal cost of a first attempt has dropped sharply. What remains expensive is what happens after the output arrives: evaluating what gets generated. That involves separating [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Kiron-1290x860-1.jpg" alt="" class="wp-image-126461" /><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Minneapolis Institute of Art</p>
</figcaption></figure>
<p><span class="smr-leadin">In domain after domain</span>, AI has compressed work that used to be expensive — generating drafts, code, prototypes, and analyses. The marginal cost of a first attempt has dropped sharply. What remains expensive is what happens after the output arrives: evaluating what gets generated. That involves separating signals from noise, catching errors, capturing what was learned, and applying those lessons to the next iteration.</p>
<p>This shift changes what organizations should optimize for. The old question was “How do we produce more, faster?” The new question is “How do we systematically learn from, and with, what AI produces?”</p>
<p>Most organizations still overinvest in answering the old question. They treat artificial intelligence as a throughput accelerator: task in, output out, loop closes. This is consumption economics. A serious CFO instantly recognizes the pattern: asset depreciation.</p>
<p>The organizations pulling ahead answer the new question. They treat AI as a capability accelerator: task in, output out. But they also ask, “What worked? What failed? What should change next time?” Insights get captured, converted into shared knowledge, and applied to subsequent interactions. Each cycle makes the next more effective. This is compounding value. Serious CFOs recognize this pattern, too: asset appreciation.</p>
<p>The data bears this out. Organizations that build systematic feedback loops between humans and AI are six times more likely to derive substantial financial benefits from AI, according to research by <cite>MIT Sloan Management Review</cite> and Boston Consulting Group.<a id="reflink1" class="reflink" href="#ref1">1</a> Organizations that invest in learning with AI are 73% more likely to achieve significant financial impact.<a id="reflink2" class="reflink" href="#ref2">2</a>  Yet, as of 2024, 70% of companies had adopted AI, but only 15% were using it for organizational learning.<a id="reflink3" class="reflink" href="#ref3">3</a></p>
<p>Leaders seeking compound returns must build what most companies don’t yet understand, let alone possess: systems that verify AI outputs, evaluate what they reveal, and capture what was learned so that each interaction becomes a building block for the next. This type of ROI with GenAI — return on iteration — doesn’t happen by accident; it requires infrastructure. Let’s examine what that infrastructure looks like.</p>
<h3>Why This Moment Is Structurally Different</h3>
<p>This is not old productivity advice dressed in new rhetoric. Two complementary economic dynamics that reinforce each other in a virtuous cycle make compounding management an imperative. </p>
<p>In his 1966 book <cite>The Tacit Dimension</cite>, philosopher Michael Polanyi observed that humans know more than they can articulate. For decades, that tacit knowledge protected knowledge workers. What could not be explicitly described could not be automated. Tacit expertise was a moat.</p>
<p>AI breaches that moat — not by codifying tacit knowledge but by inferring it from behavioral traces at scale. Large language models (LLMs) absorb how experts actually work, including knowledge the experts never articulated. Legal reasoning in briefs and opinions, financial judgment in analyst reports and trading patterns, strategic thinking in board presentations: As these behavioral traces become more legible to AI models, the tacit expertise embedded in them becomes readable by machines.</p>
<p>Boris Cherny, who led the development of Claude Code, described a revealing moment: After he gave Claude the tools to interact with his file system, the <a href="https://newsletter.pragmaticengineer.com/p/how-claude-code-is-built" target="_blank">AI began exploring the system on its own</a> to find answers. “It was mind-blowing,” Cherny said. He had not programmed that capability. The model inferred how developers work from the traces they had left behind — behaviors that no one had previously formalized.</p>
<p>The second dynamic makes the economic case for compounding even more compelling. In 1865, economist William Stanley Jevons observed that when steam engines became more efficient, coal consumption increased rather than decreased. Efficiency gains made the capability cheaper, stimulating demand. As tacit expertise becomes readable by machines, the cost of sophisticated capability drops dramatically. Projects that were previously too expensive to prototype can proliferate. Iteration cycles that once took months compress to hours. More expertise becomes readable to machines, expanding what AI can access while enhancing the AI’s knowledge base and improving its capability. More capability expands what organizations attempt. The loop feeds itself.</p>
<p>The data supports this structural shift. Organizations that combine strong organizational learning with learning specific to AI are up to 80% more effective at managing uncertainty.<a id="reflink4" class="reflink" href="#ref4">4</a> The implication is direct: Becoming better learners with AI is at least as important as using AI to create efficiencies.</p>
<p>The challenge for organizations worldwide is not whether or how AI will access their people’s domain expertise — that appears computationally inevitable. The issue is developing the competence and commitment to install mechanisms that reap compounding returns on human-AI interactions before competitors do.</p>
<h3>Three Steps to Compounding Benefits</h3>
<p>What do those essential mechanisms look like? We argue that organizations must prioritize three distinct but interrelated operations. When all three of the following steps are present and connected, organizations can reap compounding benefits on AI use. When any step is missing, organizations merely consume AI outputs.</p>
<p><strong>1. Verification.</strong> The question here is “Does this output meet the standard?” This step produces a binary answer: correct or incorrect, usable or not. Verification compares output against a criterion that already exists. Unverified AI output is noise with a confident tone. But verification, used alone, catches errors without generating learning.</p>
<p><strong>2. Evaluation.</strong> For this step, the question is “What does this output reveal?” Where verification compares output against existing standards, evaluation may generate standards that did not exist before. This is why evaluation requires domain expertise in ways verification often does not. The expert as evaluator is not merely checking quality. They are discovering <em>what quality means</em> in this new context. With AI outputs, evaluation is required across three dimensions: volume, variety, and velocity. Human bandwidth to do evaluations, not AI access, becomes the binding constraint.</p>
<p><strong>3. Learning capture.</strong> The third question is “How do we ensure that this insight persists?” When evaluation is not recorded, knowledge does not compound; it evaporates after each interaction. Learning capture converts single insights into organizational knowledge, such as documented criteria, updated prompts, and shared repositories of what worked and why. Think of it as version control for organizational judgment. Without it, evaluation is a one-time event. And learning capture alone (documentation without verification or evaluation upstream) produces nothing but organized noise.</p>
<p>Those three steps dynamically reinforce one another. Better verification produces cleaner signals for evaluation. Better evaluation generates richer material for capture. Better capture improves the criteria used in the next round of verification. The cycle is the point.</p>
<p>There is yet another valuable and scalable learning dividend: Most experts cannot fully articulate what makes their judgment good. Forcing that judgment into written standards, such as the way developers write CLAUDE.md files that specify what “good” code looks like, makes the tacit explicit for colleagues and for AI alike. The gap between what an LLM delivers and what the expert wanted surfaces unspoken knowledge. </p>
<p>At Anthropic, Cherny gives the AI a way to verify its own work — a test suite, a browser check — before a human ever sees it. To evaluate the work’s quality, he concurrently runs 10 to 15 Claude instances that generate swarms of smart subagents: One checks style while another hunts bugs, then a second cohort challenges the first for false positives. Capture is key: A CLAUDE.md file gathers mistakes, corrections, and design principles inside the workflow itself — not after its completion but while it is happening. Each new session inherits what every prior session learned. For Cherny and his developers, the benefits compound.</p>
<p>There are analogous questions for leaders of other business functions: What is your equivalent of version control for organizational decisions? Of automated testing for new approaches? Of code review to make evaluation criteria explicit and shared? The “verification-evaluation-learning capture” flywheel offers both challenge and opportunity for managers and executives who want to use AI to do measurably more than simply cut costs and improve efficiencies.</p>
<p>Consider a marketing team using AI to generate campaign briefs. Verification asks whether the brief meets basic brand standards, such as consistent tone, correct product claims, and regulation-compliant disclaimers. Automation is fast and cheap. Evaluation asks what the brief reveals: Did AI surface customer insights the team hadn’t named? Did it miss the emotional register entirely? Are these insights “actionable” — meaning, can they trigger interactions and offers to cultivate relationships and/or close deals? These judgments require a senior strategist, not a checklist. </p>
<p>Learning capture asks whether that strategist’s correction — “Our brand never leads with product features; it leads with customer identity” — gets written into a shared prompt template or brief standard for the whole team to use the next time. Without that last step, the strategist’s insight dies with the session. With it, every subsequent brief starts smarter. And perhaps that brief becomes the charter for designing an intelligent marketing agent.</p>
<p>The moment a CMO and/or CFO builds dashboards around those questions and criteria, the organization has begun compounding.</p>
<h3>When Verification Masquerades as Evaluation</h3>
<p>The machinery requires a human who holds the loop open when every instinct says to close it.</p>
<p>Jaana Dogan, a principal engineer at Google responsible for developer infrastructure on the Gemini API, ran a revealing experiment. She pointed Claude Code — a rival’s tool — at a problem her team had spent many months solving. Given a short prompt with no proprietary Google data, Claude Code generated a design solution comparable to the one her team had landed on, along with a working prototype.</p>
<p>Most managers, seeing that output, would just verify: “Does this match what we built? Close enough? Adopt or reject.” Verification is fast, comfortable, and binary. It answers the question already in your head.</p>
<p>Dogan did something different. She <a href="https://x.com/rakyll/status/2007240188645581224" target="_blank" rel="noopener noreferrer">decided</a>, “It’s not perfect and I’m iterating on it.” </p>
<p>Evaluation interrogates what the output reveals — about the problem, about your assumptions, and about what you haven’t yet named. Dogan could do this because she had months of judgment to bring to the encounter. AI compressed the implementation; it could not compress the formation of expertise. Without that prior work, only two moves exist: Accept or reject. With it, a third move opens up: Stay in the encounter and learn.</p>
<p>This is the distinction most organizations miss. They treat AI outputs as verdicts to be confirmed rather than starting points to be interrogated. The result is consumption dressed up as adoption — verification mistaken for the whole job.</p>
<p>The implication: Deploy AI first in domains where your people already have deep expertise, not because AI needs hand-holding but because evaluation requires someone capable of recognizing what “not perfect” actually means and knowing what iteration may reveal. The expert as evaluator is not a transitional role.</p>
<p>But Dogan’s insight lives only in her head until infrastructure captures it. The question for any organization is not whether individual experts can hold loops open — some always will. It’s whether the machinery exists to convert their judgment into shared knowledge that persists.</p>
<p>That machinery is what most organizations lack. They have experts. Some even have experts with the right disposition. What they don’t have is the infrastructure that makes compounding automatic rather than incidental.</p>
<h3>Building the Capability</h3>
<p>Translating these practices into infrastructure for business functions beyond software is the work that remains for leaders. This requires a minimum of five moves.</p>
<p><strong>1. Preserve your company’s evaluation expertise.</strong> To reap compound returns, you depend on people who can accurately evaluate AI output. This is domain expertise repositioned: the expert as evaluator rather than the expert as producer. Organizations that let people’s deep expertise atrophy because “AI can do that now” will lose this very valuable capability.</p>
<p><strong>2. Build verification mechanisms.</strong> As noted above, the cycle cannot begin without verification of output. Software verification is cheap: Code runs or it doesn’t. Finance has moderate verification costs; models can be stress-tested against historical data, for example. Strategic planning has high verification costs: Long bets may not resolve for years. Most organizations treat high verification costs as a reason not to start some kinds of work with AI tools at all. Instead, the smart move is doing <em>minimally viable verification</em>, the cheapest credible check that an AI output is not wrong. Consider multijudge systems that surface disagreement, and consistency checks that compare outputs across different formulations of the same problem. None of these guarantees correctness, but each offers enough verification to start the cycle. </p>
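<p>To make minimally viable verification concrete, here is a minimal sketch in Python (ours, not drawn from any company described in this article): a consistency check that poses the same problem in several formulations and flags unstable answers. The <code>ask_model</code> callable and the 0.7 agreement threshold are illustrative assumptions, not a real API.</p>

```python
from collections import Counter

def consistency_check(formulations, ask_model, min_agreement=0.7):
    """Minimally viable verification (illustrative sketch).

    Poses the same problem in several different formulations and checks
    whether the answers agree. `ask_model` is any callable mapping a
    prompt string to an answer string (e.g., a wrapper around an LLM
    call); it is a hypothetical stand-in, not a real library function.
    """
    answers = [ask_model(p).strip().lower() for p in formulations]
    top_answer, count = Counter(answers).most_common(1)[0]
    agreement = count / len(answers)
    # Agreement does not guarantee correctness; it is only the cheapest
    # credible check that the output is not obviously unstable.
    return {"answer": top_answer,
            "agreement": agreement,
            "verified": agreement >= min_agreement}
```

<p>A multijudge variant of the same idea would keep one formulation but route it to several different models, surfacing their disagreement in the same way.</p>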
<p><strong>3. Institute evaluation practices.</strong> Few organizations systematically evaluate AI outputs. After every significant AI interaction, users should ask three questions: What worked? What failed? What was interestingly wrong — wrong in a way that reveals something about the problem the team has not previously articulated? That third question is where hidden value lives. An output that fails in a way the expert noticed but had not yet named becomes new organizational knowledge: It is tacit expertise becoming explicit. People must be prompted to ask these questions as part of the existing workflow. Build evaluation into workflows to pave the way for value to compound.</p>
<p><strong>4. Create capture systems.</strong> Evaluation without capture evaporates. Capture systems operate on two levels: inferential (learning from patterns in accumulated traces, the way AI learns from historical data) and explicit (recording human judgment in retrievable form). Both matter. A practical approach to both is lightweight infrastructure: decision journals that record not just what was decided but why; prompt repositories that preserve what worked and what failed instructively; and evaluation logs that make the team’s evolving standards searchable. The design principle is retrievability, not comprehensiveness. A marketing team’s capture system is a prompt library and a shared brief template. A finance team’s is an annotated model log. Every function can build its equivalent of CLAUDE.md. Discipline, not cost or creativity, is the true constraint.</p>
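<p>As one illustration of how lightweight this infrastructure can be (a sketch of ours, with hypothetical field names, not any team’s actual system), an evaluation log can be a few lines of Python whose only design requirement is retrievability:</p>

```python
import json
import time

class EvaluationLog:
    """Lightweight learning capture (illustrative sketch).

    Records what worked, what failed, and what was "interestingly
    wrong" after each significant AI interaction, and keeps the
    entries searchable. Field names are illustrative assumptions.
    """

    def __init__(self):
        self.entries = []

    def record(self, task, worked, failed, interestingly_wrong, tags=()):
        self.entries.append({
            "ts": time.time(),
            "task": task,
            "worked": worked,
            "failed": failed,
            "interestingly_wrong": interestingly_wrong,
            "tags": list(tags),
        })

    def search(self, keyword):
        # Retrievability, not comprehensiveness: a plain substring
        # search over the serialized entries is enough to start with.
        kw = keyword.lower()
        return [e for e in self.entries if kw in json.dumps(e).lower()]
```

<p>The marketing example above would fit this shape directly: the strategist’s correction is recorded once, tagged, and found again the next time anyone on the team searches for it.</p>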
<p><strong>5. Measure the cycle, not just the output.</strong> Most organizations judge an AI deployment’s success using measures like tools adopted, hours saved, or tasks completed. These are consumption metrics. Organizations trying to reap compound returns measure the cycle: How many interactions were verified? How many were evaluated? How much learning was captured? How quickly did captured learning change subsequent practice? Did your team leaders learn things from AI interactions last week that changed how they worked this week? If not, the cycle is not running.</p>
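<p>Cycle metrics of this kind can be computed from a simple interaction record. The sketch below is ours, with assumed field names (<code>verified</code>, <code>evaluated</code>, <code>captured</code>, <code>applied_after_days</code>); any real dashboard would define its own schema.</p>

```python
def cycle_metrics(interactions):
    """Measure the cycle, not just the output (illustrative sketch).

    Each interaction is a dict with boolean `verified`, `evaluated`,
    and `captured` flags, plus an optional `applied_after_days` count
    of how long captured learning took to change practice. The field
    names are assumptions for illustration.
    """
    n = len(interactions)
    if n == 0:
        return {}

    def rate(key):
        return sum(1 for i in interactions if i.get(key)) / n

    applied = sorted(i["applied_after_days"] for i in interactions
                     if i.get("applied_after_days") is not None)
    return {
        "verified_rate": rate("verified"),
        "evaluated_rate": rate("evaluated"),
        "captured_rate": rate("captured"),
        # Median lag between capturing a lesson and applying it.
        "median_days_to_apply": applied[len(applied) // 2] if applied else None,
    }
```

<p>If the captured rate is near zero, or the median days-to-apply never shrinks, the cycle is not running — the organization is consuming, not compounding.</p>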
<h3>The Deeper Transformation</h3>
<p>Leaders want to consume AI. They ask, “How do we produce faster, better, cheaper with AI?” The new question is “How do we systematically learn from what AI produces, and at speed?”</p>
<p>Productivity in an era of generative AI is not output per unit of input. It is also determined by measurable learning per unit of interaction. Organizations that build the machinery to run the cycle — verify, evaluate, capture, apply — will compound that capability over time. Those that do not will consume AI without converting it into knowledge. They’ll be busy, perhaps, but not learning and not reaping compound benefits.</p>
<p>Dogan’s eight words embody this shift: “It’s not perfect and I’m iterating on it.” She verified that the output was usable. She evaluated what it revealed. </p>
<p>She is iterating; her learning is being applied to the next interaction. The compounding cycle is running. It is available to any organization willing to build the machinery that makes it possible.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/how-to-reap-compound-benefits-from-generative-ai/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Job Pivots in the Age of AI: Lessons From Mike Mulligan and His Steam Shovel</title>
				<link>https://sloanreview.mit.edu/article/job-pivots-in-the-age-of-ai-lessons-from-mike-mulligan-and-his-steam-shovel/</link>
				<comments>https://sloanreview.mit.edu/article/job-pivots-in-the-age-of-ai-lessons-from-mike-mulligan-and-his-steam-shovel/#comments</comments>
				<pubDate>Thu, 02 Apr 2026 11:00:54 +0000</pubDate>
				<dc:creator><![CDATA[Scott F. Latham and Beth K. Humberd. <p>Scott F. Latham, Ph.D., is a professor in strategy at the Manning School of Business at the University of Massachusetts Lowell. Beth K. Humberd, Ph.D., is an associate professor of management at the Manning School of Business. </p>
]]></dc:creator>

						<category><![CDATA[Adaptation]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Career Change]]></category>
		<category><![CDATA[Employee Psychology]]></category>
		<category><![CDATA[Employment]]></category>
		<category><![CDATA[Resilience]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Disruption]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Managing Your Career]]></category>
		<category><![CDATA[Skills & Learning]]></category>

				<description><![CDATA[Matt Harrison Clough As organizations like Amazon, PwC, and Microsoft have announced AI-fueled layoffs, it’s no surprise that half of Americans have expressed concern about AI’s larger potential impact on their jobs. Of course, companies can attribute layoffs to AI efficiencies while trimming workforces for various reasons. Yet there is no question that artificial intelligence [&#8230;]]]></description>
				<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/Latham_Pivot-1290x860-1.jpg" alt="" class="wp-image-126336"/><figcaption>
<p class="attribution">Matt Harrison Clough</p>
</figcaption></figure>
<p><span class="smr-leadin">As organizations</span> like Amazon, PwC, and Microsoft have announced AI-fueled layoffs, it’s no surprise that <a href="https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/" target="_blank" rel="noopener noreferrer">half of Americans</a> have expressed concern about AI’s larger <a href="https://doi.org/10.1038/s41598-024-75113-w" target="_blank" rel="noopener noreferrer">potential impact on their jobs</a>. Of course, companies can <em>attribute</em> layoffs to AI efficiencies while trimming workforces for various reasons. Yet there is no question that artificial intelligence is causing disruption in the job market, making entry-level jobs and roles in functions such as HR and project management harder to find. Workers and leaders are currently faced with an overwhelming amount of advice for navigating this period of uncertainty. As we move through a historic period of AI-driven labor disruption, why not turn to a place of comfort and simplicity in the pages of a well-known children’s book? </p>
<p>Our ongoing research, focused on the future of work, recently took us to the Virginia Lee Burton archives at the Cape Ann Museum in Gloucester, Massachusetts. Burton is well known for her children’s stories, including <cite>The Little House</cite>, <cite>Life Story</cite>, <cite>Katy and the Big Snow</cite>, and <cite>Mike Mulligan and His Steam Shovel</cite>. Through archival research, we learned that the story of Mike Mulligan offers powerful historic lessons on labor disruption and job adaptation that may provide comfort and guidance for workers and leaders in today’s AI age.</p>
<h3>The Story of Mike Mulligan and His Steam Shovel</h3>
<p>One of Burton’s most enduring stories is <cite><a href="https://www.youtube.com/watch?v=NQjHJKNyoUE" target="_blank" rel="noopener noreferrer">Mike Mulligan and His Steam Shovel</a></cite>, published in 1939, about steam shovel operator Mike and his steam shovel, named Mary Anne. (Befitting a children’s book, Mary Anne is an anthropomorphized earth-moving machine.) The story is set against a future of work that unfolded a hundred years ago. After the Great Depression, the U.S. economy experienced wide-scale mechanization, standardization, and mass production intended to lift the economy. As a team, Mike and Mary Anne play a significant role in the boom; they lay the foundations for buildings, open waterways for ships, level the ground for highways, cut tunnels for railroads, and smooth the earth for airfields. </p>
<p>However, their success is somewhat short-lived, as technological advancement brings superior machinery into play. At its core, <cite>Mike Mulligan and His Steam Shovel</cite> is a story of disruption, change, and adaptation. Mike and Mary Anne lose their jobs when new innovations arrive; steam shovels like Mary Anne and steam shovel operators like Mike Mulligan are no longer needed. </p>
<p>Burton writes, “Then along came the new gasoline shovels, and the new electric shovels, and the new diesel motor shovels, and took all the jobs away from the steam shovels.” As the image below conveys, Mike ends up sitting dejectedly on a log while Mary Anne cries oil tears — both of them out of a job at the hands of disruptive innovation. “No steam shovels wanted” is boldly painted on the fence in the background.</p>
<p><img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/Latham_fig1.jpg" alt="A sketchbook illustration for Mike Mulligan and His Steam Shovel showing Mary Anne the steam shovel standing idle beside a fence with &quot;No Steam Shovels Wanted&quot; painted on it, while Mike Mulligan sits slumped on a log in the foreground. Text above reads &quot;Mike Mulligan and Mary Anne were VERY SAD.&quot;" class="mt20 mb8" width="100%"></p>
<p class="attribution" style="margin-bottom: 40px;">Cape Ann Museum Library and Archives</p>
<p>While at first things seem hopeless, the book shifts to a story of adaptation and ends with a successful occupational pivot. After digging a hole for the construction of a new town hall (their last job as a steam shovel and operator), Mary Anne becomes the steam furnace in the basement of the building. Mike becomes the building’s custodian, responsible for caring for the new furnace. </p>
<p>But arriving at that point was complex: Mike had to take a series of risks professionally, trust in his ability to adapt, and persevere in the face of disruption to reinvent himself in an occupational sense.</p>
<h3>Three Modern Lessons From Mike and Mary Anne’s Successful Pivots</h3>
<p>While doing our larger body of research on the future of work, we saw how this classic children’s story captures the critical underpinnings of a successful occupational pivot in the face of a dramatic, exogenous shift. It offers three key lessons for today’s workers facing a similar technology-driven disruption from AI tools.</p>
<h4>1. Embrace technology to realize a new occupational identity.</h4>
<p>The book foreshadows a dynamic that is central in today’s economy: The future of work will involve a high degree of human and technological collaboration. Not too long ago, the prospect of AI in our day-to-day work lives felt more like science fiction than reality; and yet, in the very near future, the vast majority of jobs will require employees to <a href="https://www.weforum.org/stories/2026/01/ai-agentic-workplace-human-resources/" target="_blank" rel="noopener noreferrer">work with artificial intelligence</a> to some degree. In some roles, AI has already <a href="https://www.ednc.org/how-much-could-ai-change-jobs-indeed-report-sheds-light-on-changing-labor-force-needs/#:~:text=The%20jobs%20most%20highly%20exposed,position%20fell%20into%20minimal%20transformation." target="_blank" rel="noopener noreferrer">changed the nature of the job</a> altogether. Yet workers across many professions continue to <a href="https://www.hrdive.com/news/employers-employees-resistant-hostile-to-AI/749730/" target="_blank" rel="noopener noreferrer">resist and combat</a> the inevitable rise of AI tools.</p>
<p>The first essential lesson to be drawn from <cite>Mike Mulligan and His Steam Shovel</cite> is the need to reconsider our working relationship with technology: Rather than seeing it as a disruption, we can embrace technology as a means of discovering new opportunities, and perhaps even a new professional identity.</p>
<p>When faced with the prospect of being a custodian, Mike could have politely declined the opportunity: “No, thank you. I am a steam shovel operator.” Doing so would have meant ignoring the larger disruption underway (steam shovels being replaced by superior technologies). Instead, as the story illustrates, when faced with an occupational pivot, Mulligan said, “Why not?” </p>
<p>Workers today can learn a lot from this. It can be anxiety-provoking to consider an occupational pivot, especially when your identity is tied to your work (“I am a steam shovel operator. It’s who I am!”). But Mike Mulligan leans into the disruption.</p>
<p>In the context of AI, we hear a lot about human-AI collaboration and even cobots, but are workers today truly embracing the interdependence? Rather than seeing AI simply as a technological tool, they can consider how the technology might provide a renewed sense of purpose in their careers, just as it did for Mike Mulligan. </p>
<p>As the technology evolved, Mike evolved in his career and his sense of self. Today’s accountants might be toiling away on Excel spreadsheets that soon will migrate to AI platforms (if they haven’t already). They could already be working with AI agents or soon will. They can push back (“I’m an accountant, not a programmer!”), or they can learn from Mike Mulligan and say, “Why not?”</p>
<h4>2. Understand shifts in how value is delivered.</h4>
<p>Back in 2018, we wrote an article on <a href="https://sloanreview.mit.edu/article/four-ways-jobs-will-respond-to-automation/">the four ways in which jobs will respond to automation</a>. The central premise of our framework was a focus on value: We argued that every jobholder uses a set of core skills to deliver value in some form to a recipient, and thus the key to understanding job evolution is to consider adapting value provision based on emerging technologies. Ironically, Mike and Mary Anne seemed to understand this same premise better than some workers do today. </p>
<p>How did Mike and Mary Anne shift from steam shovel team to furnace team? In Burton’s world, the transition was predicated on the use of their respective skills to provide value in a new context. The last image in the book shows Mary Anne as a furnace connected to the heating ducts, applying her “skills” to deliver new value: providing heat. </p>
<p>Mike is shown sitting in a rocking chair next to Mary Anne, ensuring that her operation supports the building for many winters to come. The team once provided value through digging holes; they shifted to providing value by delivering heat to the town hall and maintaining the building.</p>
<p><img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/Latham_fig2.jpg" alt="An oval-shaped sketchbook illustration showing the basement of the Popperville town hall, where the steam shovel has been converted into a steam furnace connected to heating ducts. Mike Mulligan sits in a rocking chair reading a newspaper beside the furnace, while townspeople descend the stairs." class="mt20 mb8" width="100%"></p>
<p class="attribution" style="margin-bottom: 40px;">Cape Ann Museum Library and Archives</p>
<p>The lesson? While they were sad when their earlier jobs were taken over by superior engines, they were creative in finding a way to use their skills to provide value in a new context.</p>
<p>Several years ago, we worked on a project through a U.S. Department of Labor grant, using our job evolution framework to assist workers who were impacted by the closing of a nuclear power plant. These educated professionals, including nuclear engineers, scientists, and project managers, had expected to work their entire careers at the plant but now had to pivot to use their skills in new contexts. (The nuclear power plant job market was not booming at the time.) One of the biggest challenges we witnessed was that individuals tended to box themselves in relative to their job as prescribed; they struggled to think about how their skills could deliver value in a new context. </p>
<p>Our framework, which focuses on separately assessing skill threats and forms of value delivery, helped those workers reframe the application of their skills outside of the nuclear industry. This effort landed some of the workers in IT, data science, or even environmental consulting roles. But doing so wasn’t an easy fix: It required personal reflection, analysis, and a willingness to make creative moves. Ultimately, by focusing on value creation, those professionals landed in places they never thought they’d be, much like Mike and Mary Anne.  </p>
<h4>3. Leaders must not lose sight of organizational purpose.</h4>
<p>Our last lesson is for leaders: Don’t fall prey to the siren call of AI at all costs. AI is an enabling technology meant to help organizations create new efficiencies and sources of value. A leader’s role is to consider the company’s higher identity and purpose — and then to help employees, customers, and key stakeholders understand how AI can serve and even strengthen that sense of purpose. </p>
<p>Though not specifically referenced in the book, the historical backdrop of Burton’s story is that Mike and Mary Anne were part of the <a href="https://doi.org/10.4324/9781315743219" target="_blank" rel="noopener noreferrer">Works Progress Administration</a> — a Roosevelt-era federal jobs program that was instrumental in getting people back to work during the Depression. Yet many historians have noted that in addition to job creation, the WPA’s primary purpose was to <a href="https://www.npr.org/2020/04/04/826909516/in-the-1930s-works-program-spelled-hope-for-millions-of-jobless-americans" target="_blank" rel="noopener noreferrer">instill hope in a down-and-out country</a>.</p>
<p>In the book, Mike and Mary Anne’s greater purpose and value was also providing hope — to the town, through the new town hall where they worked as a team. It’s a lesson that organizational leaders need to consider. What <a href="https://sloanreview.mit.edu/article/unlock-the-power-of-purpose/">organizational purpose</a> is AI strengthening? Also, what aspects of organizational identity do your company’s AI plans reflect to workers and other stakeholders?</p>
<p>For example, Lyft’s leaders have described <a href="https://www.adweek.com/brand-marketing/purpose-driven-how-lyft-balances-tech-trust-and-human-connection/" target="_blank" rel="noopener noreferrer">the company’s AI integration work</a> as grounded in its long-standing purpose “to serve and connect.” Rather than shaping the company’s AI narrative around the tools, leaders are keeping the company’s purpose front and center. </p>
<p>Think about the underlying reason your organization exists. AI strategies should ultimately reflect who your company is (organizational identity) and its reason for being (organizational purpose).  </p>
<h3>Resilience in the Face of Disruption</h3>
<p>Collectively, these three lessons fall under a broader theme from Mike and Mary Anne’s story: resilience. On the back of Mary Anne, a sign proudly proclaims “Mike Mulligan — Dig Anything, Any Time, Any Place.” The message captures the pair’s confidence in their abilities and their willingness to work; indeed, their work ethic and perseverance are the basis of their pivot. When they are displaced by innovation, they scour the country for new jobs and believe enough in themselves to take on the challenge of building a town hall in Popperville — as a team. (Burton explicitly states that Mike couldn’t abandon Mary Anne.) They embrace resilience in the face of disruption.</p>
<p><img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/Latham_fig3.jpg" alt="A sketchbook illustration for the book's title page showing Mary Anne the steam shovel bursting dramatically through the page, her bucket raised and treads visible, with radiating lines conveying energy and motion." class="mt20 mb8" width="100%"></p>
<p class="attribution" style="margin-bottom: 40px;">Cape Ann Museum Library and Archives</p>
<p>While the current period of AI disruption feels new to many of us, the experience of labor disruption is truly timeless. In the Cape Ann Museum’s archives, we found a letter from a fan to Burton dated Dec. 5, 1942. The reader, Mrs. Helen Baurd, shares that her father was a steam shovel operator who, along with his colleagues, held the Mike Mulligan story near and dear to his heart and, in fact, passed the book around: “‘Mike Mulligan’ traveled all over. ... The men loved it,” she writes. Imagine a first edition of <cite>Mike Mulligan and His Steam Shovel</cite>, covered in grease and shared among operators on lunch breaks, providing inspiration for those men to continue working. The fan’s letter concludes powerfully, “I thot you would be interested to know you are not only giving pleasure to children but to many grown-ups as well.”</p>
<p>Whether it be pleasure or inspiration you take from Mike and Mary Anne’s story, it captures the real-world challenges of individuals dealing firsthand with job disruption. The letter’s closing sentiment is the basis for this article. While <cite>Mike Mulligan and His Steam Shovel</cite> is a children’s story, we believe that it offers a powerful parallel for individuals who want to write their own ending in this age of AI. One hundred years ago, the hero was a steam shovel operator; today, it might be a programmer or nuclear engineer. Whatever our role may be, we can all learn about career pivots and resilience from Mike Mulligan and Mary Anne. </p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/job-pivots-in-the-age-of-ai-lessons-from-mike-mulligan-and-his-steam-shovel/feed/</wfw:commentRss>
				<slash:comments>1</slash:comments>
							</item>
					<item>
				<title>The Best Customers to Study When Scaling Into a New Market</title>
				<link>https://sloanreview.mit.edu/article/the-best-customers-to-study-when-scaling-into-a-new-market/</link>
				<comments>https://sloanreview.mit.edu/article/the-best-customers-to-study-when-scaling-into-a-new-market/#respond</comments>
				<pubDate>Wed, 01 Apr 2026 11:00:41 +0000</pubDate>
				<dc:creator><![CDATA[Nataliya Langburd Wright. <p>Nataliya Langburd Wright is an assistant professor and a Chazen Senior Scholar at Columbia Business School. </p>
]]></dc:creator>

						<category><![CDATA[Cultural Differences]]></category>
		<category><![CDATA[Foreign Markets]]></category>
		<category><![CDATA[Global Markets & Marketing]]></category>
		<category><![CDATA[Market Strategy]]></category>
		<category><![CDATA[Strategic Innovation]]></category>
		<category><![CDATA[Technology Startups]]></category>
		<category><![CDATA[Customers]]></category>
		<category><![CDATA[Innovation]]></category>
		<category><![CDATA[Innovation Strategy]]></category>
		<category><![CDATA[New Product Development]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Getty Images For tech companies worldwide, expanding into a new market is both a rite of passage and a moment of truth. It represents the transition from early promise to meaningful scale — an opportunity to increase revenue, signal growth potential to investors, and unlock powerful sources of differentiation, such as [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/Wright-1290x860e.jpg" alt="" class="wp-image-126267"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Getty Images</p>
</figcaption></figure>
<p><span class="smr-leadin">For tech companies worldwide</span>, expanding into a new market is both a rite of passage and a moment of truth. It represents the transition from early promise to meaningful scale — an opportunity to increase revenue, signal growth potential to investors, and unlock powerful sources of differentiation, such as network effects and economies of scale. </p>
<p>But for every company that expands successfully, many more struggle. Some push into new geographies or industry segments only to stall; others retreat quietly, having learned — too late — that the customers they thought they understood weren’t the customers who would ultimately drive growth.</p>
<p>Expansion is hard, not only because new markets differ but because the assumptions guiding early choices often travel poorly across borders, industries, or segments. A product that resonates with users in one environment may fall flat in another, even when the differences seem minor on the surface. Many companies do not have the luxury of long research cycles or deep field teams. Instead, they learn fast, often improvisationally and typically through their earliest customers, their beachhead market. </p>
<p>These first users shape a company’s understanding of demand in the target market. But despite these users’ centrality, executives often overlook the strategic choice of <em>which</em> early adopters to prioritize. Many leaders assume that early users should come from familiar markets, such as the company’s headquarters country. Others assume that successful expansion requires an immediate leap into the target market, however unfamiliar. Both approaches can work — and both can fail. The real challenge is choosing the right one.</p>
<p>My <a href="https://doi.org/10.1287/orsc.2023.17983" target="_blank" rel="noopener noreferrer">research</a>, which drew on global data on more than 1,000 technology startups, complemented with exploratory interviews and experiments, shows that the decision of which early adopters to target is far more strategic than is commonly appreciated. Early adopters offer information, but that information varies dramatically in its clarity and its relevance. Executives must decide whether to learn from users who are <em>familiar</em> — people whose preferences, norms, and communication patterns can be intuitively understood — or from <em>target-market</em> users, whose preferences match the broader audience the company ultimately wants to reach.</p>
<p>This choice matters because the two groups of people offer different advantages. Familiar users provide <em>clearer</em> signals. Executives understand these users better and can interpret their feedback more effectively. Target-market users, in contrast, provide <em>more transferable</em> signals: Their preferences closely align with those of the market the company hopes to serve in the long term.</p>
<p>Understanding when to prioritize clarity and when to prioritize transferability is crucial. And the answer depends on two factors that vary widely across companies: how similar customer preferences are across markets and how homogeneous or heterogeneous the company’s familiar market is. </p>
<p>When executives get this choice right, expansion is faster, more efficient, and more successful. When they get it wrong, even strong products can fail.</p>
<p>Let’s take a deeper look at the research and the trade-offs involved in choosing familiar users versus target-market users, along with real-world examples of how a company’s early-adopter choices shaped its trajectory. While the research focused on companies entering a new geography, the advice here applies more broadly to those entering new industry segments as well.</p>
<p>The findings offer a simple but powerful principle for tech leaders: Initial users are not just the first audience for your product; they are the lens through which you learn how to scale.</p>
<h3>Clear Sentiment Versus Transferable Lessons: A Strategic Dilemma</h3>
<p>My research began with a question that many tech executives launching new products rarely stop to ask: Why do some companies expand successfully into new markets while others struggle despite having comparable products, resources, and ambition? The research focused on a part of the expansion process often overshadowed by discussions of go-to-market strategy: the identity of the company’s beachhead market. These early customers shape what executives learn, how they refine the product, and how they gauge demand. </p>
<p>But my research found that the match between early users and the target market is anything but straightforward. I determined that early-user selection is governed by two opposing forces: clarity and transferability.</p>
<p><em>Clarity</em> refers to how easily a company can interpret the feedback it receives. With familiar users — those in an executive’s home country, region, or environment — communication is smooth. Shared cultural norms, language, and expectations reduce noise. A complaint, a compliment, or a hesitation is easier to decode. Executives can infer meaning more confidently because they intuitively understand what these users are reacting to.</p>
<p><em>Transferability</em>, meanwhile, concerns how closely those users’ preferences match the preferences of the broader target market. Local users may provide very clear signals that simply aren’t relevant to the intended market. Target-market users, on the other hand, offer feedback that directly reflects the needs of the customers the company wants to serve. But these signals can be much harder to interpret. Differences in language, norms, communication styles, or expectations sometimes obscure signals’ meaning, making it difficult for executives to discern whether feedback is driven by real demand differences or by misunderstanding.</p>
<p>The tension between clarity and transferability creates a strategic dilemma: Should companies learn from users they intuitively understand, even if those customers’ preferences aren’t fully representative? Or should they learn from users whose preferences will ultimately matter the most, even if doing so is more confusing?</p>
<h3>Examining Customer Preferences Across Borders and Markets</h3>
<p>The answer to that question depends critically on two factors: how similar or different customer preferences are across markets, and the composition of the company’s familiar market itself.</p>
<p>To quantify cross-market preference similarity, I built a measure of local fragmentation for different product categories. Categories with <em>low fragmentation</em> — such as software-as-a-service (SaaS), productivity software, and web tools — tend to have global user bases whose preferences converge regardless of geographic location. A productivity app that works well in Melbourne is likely to work well in Munich. </p>
<p>By contrast, categories with <em>high fragmentation</em> — such as language learning, food and beverages, and industrial automation — exhibit sharp preference differences across markets. What works in one cultural context may not translate to another. </p>
<p>This difference is fundamental. In globally standardized categories, feedback from familiar users carries a double advantage: It is both clear <em>and</em> representative. Companies in these sectors benefit from starting local because local feedback mirrors target-market demand without sacrificing interpretability.</p>
<p>In locally fragmented categories, however, familiar users may offer misleading signals because their preferences are likely fundamentally different from those of people in the target market. Here, companies benefit from beginning with users from the target market, even if interpreting their feedback is harder, because the mismatch in preferences outweighs the clarity gained locally.</p>
<p>The similarity between the familiar and target markets also matters. Some markets are simply closer to one another along linguistic, cultural, and historical dimensions. France and the United Kingdom, for example, resemble each other far more than France and Japan do, which means product preferences between the former pair are more likely to overlap.</p>
<p>The second factor is the homogeneity of the company’s familiar market. Some home markets are culturally and linguistically cohesive, making familiar users highly interpretable. In a market like France, for example, the population predominantly speaks French, so a company based in that country is likely to speak the same language as other users there. Other markets are diverse, with multiple languages and subcultures. In those cases, “familiar” users aren’t truly familiar. In India, where more than a dozen languages are spoken, a venture is less likely to speak the same language as a significant number of users there. Thus, even local feedback is harder to interpret, reducing the clarity advantage and strengthening the case for starting with target-market users.</p>
<p>Together, these two dimensions — cross-market preference similarity and familiar-market homogeneity — reveal the trade-off at the heart of early-adopter strategy. They also explain why companies that might appear to be similar from the outside (such as two global software startups) often diverge dramatically in their early-user choices. </p>
<p>Some companies start close to home; others leap abroad immediately. Both choices can be right, but only if matched to the structure of the business’s product category and the nature of its local environment. Here are two ways to think about these considerations:</p>
<ul>
<li>When you are operating in a global product category, start with familiar local users to benefit from their clear feedback.</li>
<li>When you are operating in a locally fragmented product category, on the other hand, start with your target global users right away so you can learn their preferences.</li>
</ul>
<h3>Real-World Lessons on Choosing Wisely</h3>
<p>For managers and founders preparing to expand into new markets, the question isn’t whether early adopters matter. They always do. The question is, which early adopters will help you learn the right things fastest? My findings offer two takeaways, each illustrated by companies whose choices reveal these dynamics in action.</p>
<h4>1. Why Companies Scaling in Global Product Categories Should Start With Familiar Users</h4>
<p>When user preferences are relatively similar across markets, companies can safely rely on familiar users as their initial audience. These users provide clear signals, and because their preferences align globally, the consumer feedback still maps onto the target market. This means that it makes sense to start locally when you believe you are developing a truly “global” product. </p>
<p>Indeed, in exploratory interviews, one technology company based in Israel discussed the value of starting with local initial users: “It’s easier: same time zone, same language, same mentality. … With cultural differences, until we translate American to Israelis, it takes us time to get the actual essence of the feedback. And with Israel, it is just a lot easier because that’s the culture that we are used to.” The venture was developing a SaaS solution that had fairly standardized preferences across a product category with low fragmentation, which meant that any feedback from local initial users was relevant for its target U.S. market. </p>
<p>Canva, founded in Australia, is another example of a company operating in a category (productivity tools) with low fragmentation. Designers, educators, and small businesses across countries share remarkably similar needs: easy-to-use templates, intuitive interfaces, and tools that remove friction from creative work. This similarity meant that early feedback from Australian users, Canva’s familiar market, was highly transferable to global audiences — the company’s target market. Just as important, Australia’s relative linguistic homogeneity gave Canva unusually clear early signals. Feedback was easy to interpret and act upon. By the time Canva expanded globally, it had already refined its value proposition through an initial group of users that offered clarity without sacrificing representativeness.</p>
<h4>2. Why Companies Competing in Locally Fragmented Categories Should Start With Target-Market Users</h4>
<p>When consumer preferences vary widely across countries, familiar-user feedback may be clear but misleading. Companies in these categories benefit from beginning with users in the target market: You can learn the right preferences early, even if interpreting that feedback is more challenging. This means starting with your target global users when you believe you are developing a “local” product.</p>
<p>To this end, another technology company based in Australia expressed concern about relying on local users to learn about demand in its target foreign markets. So it sought target-user feedback right away: “If we take too long [to go abroad] … then we can potentially box ourselves in as a product and then just find it harder and harder to … adapt the product for an international market.” The company was creating a technology solution for the construction industry — a locally fragmented product category exhibiting large differences in user needs across countries. Initial users in Australia could have fundamentally different preferences than international target-market customers. </p>
<p>Grammarly, launched in 2009 as an English writing assistant, serves as another example of a company operating in a more locally fragmented product category. Grammarly was built in Ukraine, but its target audience wasn’t local Ukrainians — it was international students, professionals, and English language learners in English-speaking markets, particularly at universities. Writing norms, educational expectations, and communication styles differ considerably across countries, making this a high-fragmentation product category. Had Grammarly relied primarily on local Ukrainian users, whose English proficiency patterns, contexts, and expectations differ from those of people outside of Ukraine, it would have received clear but poorly transferable signals. Instead, the company went directly to its target users in English-speaking countries. This allowed Grammarly to quickly understand the challenges that actual global users faced, shaping the product road map and accelerating the tool’s adoption in the markets that mattered most.</p>
<h3>Four Decision-Making Steps to Follow</h3>
<p>Use this four-step process to apply the lessons I described above to your company’s situation as you clarify which early adopters to target.</p>
<p><strong>Step 1: Define your target market with precision.</strong> Many companies expand too broadly or vaguely, targeting “global customers” without identifying the specific segment or geography that matters most. Begin by answering the question “Who exactly are we trying to reach?” Is the target market your home region? Another country? A specific language group? A niche industry? This clarity should anchor all subsequent decisions.</p>
<p><strong>Step 2: Assess cross-market preference similarity.</strong> Ask, “How similar are customer preferences in my familiar market to those in the target market?”</p>
<p>Indicators of high similarity include these characteristics:</p>
<ul>
<li>The product category is widely globalized (such as productivity software).</li>
<li>Existing competitors serve multiple markets effectively.</li>
<li>Customers across markets cite the same pain points.</li>
</ul>
<p>Indicators of low similarity include these factors:</p>
<ul>
<li>Strong local brands dominate multiple regions.</li>
<li>Cultural norms shape product usage (such as communication styles or food preferences).</li>
<li>Customer needs differ meaningfully across countries.</li>
</ul>
<p>If similarity is high, familiar users may be enough. If similarity is low, target-market users are essential.</p>
<p><strong>Step 3: Evaluate how homogeneous your familiar market really is.</strong> Even in globally standardized categories, the familiar market must offer clear signals. Ask:</p>
<ul>
<li>Is my local market culturally cohesive?</li>
<li>Do local users share language, expectations, and norms, or is the market diverse, segmented, or multilingual?</li>
</ul>
<p>If your familiar market is heterogeneous, clarity drops — and target-market users may become the better choice.</p>
<p><strong>Step 4: Combine the two dimensions to identify your beachhead market.</strong> Follow these simple rules:</p>
<ul>
<li>High similarity + homogeneous familiar market → Start with familiar users</li>
<li>Low similarity + heterogeneous familiar market → Start with target-market users</li>
</ul>
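<p>The two rules above can be sketched as a tiny decision aid. This is an illustrative sketch only: The function name and labels are invented for this example, and the handling of the two mixed cases is my own reading of the framework, not a rule from the research.</p>

```python
def beachhead_choice(preference_similarity_high: bool,
                     familiar_market_homogeneous: bool) -> str:
    """Illustrative decision aid for Step 4.

    Combines the two dimensions from Steps 2 and 3:
    - preference_similarity_high: are customer preferences in the
      familiar and target markets similar (Step 2)?
    - familiar_market_homogeneous: is the familiar market culturally
      cohesive (Step 3)?
    """
    if preference_similarity_high and familiar_market_homogeneous:
        return "start with familiar users"
    if not preference_similarity_high and not familiar_market_homogeneous:
        return "start with target-market users"
    # Mixed cases are not covered by the two simple rules: weigh how
    # misleading familiar-user signals would be against how costly
    # target-market access is.
    return "mixed case: weigh clarity against transferability"
```

<p>In the two mixed cases, judgment is required: High similarity pulls toward familiar users, while a heterogeneous home market pulls toward target-market users.</p>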
<p>This is the core decision. And, once you commit, ensure that early-user engagement is deliberate: Get specific feedback, iterate quickly, and avoid the trap of overgeneralizing from a misaligned early group.</p>
<h3>Learn From the Right People at the Right Time</h3>
<p>Market expansion is one of the most defining and difficult stages of business growth. While executives often focus on marketing strategy, product localization, or competitive positioning, my research shows that the selection of initial users is crucial. This choice is especially pivotal for companies in smaller markets: They may face pressure to rush into larger ones and, in doing so, skip the local early users who would have given them clearer signals to build better products.</p>
<p>The companies that expand successfully aren’t simply those with the best products. They’re the ones that learn from the right people at the right time. By understanding the clarity-transferability trade-off, executives can make more strategic early-adopter choices — and scale with far greater confidence.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/the-best-customers-to-study-when-scaling-into-a-new-market/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Level Up Your Crisis Management Skills</title>
				<link>https://sloanreview.mit.edu/article/level-up-your-crisis-management-skills/</link>
				<comments>https://sloanreview.mit.edu/article/level-up-your-crisis-management-skills/#respond</comments>
				<pubDate>Tue, 31 Mar 2026 11:00:22 +0000</pubDate>
				<dc:creator><![CDATA[Rick Aalbers, Killian McCarthy, and Arjan Groen. <p>Rick Aalbers is a full professor of corporate restructuring and innovation at Radboud University. Killian McCarthy is an associate professor of strategy at Radboud University and a full professor of practice at the Kyiv School of Economics. Arjan Groen is an interim executive specializing in strategic change and a visiting lecturer at Radboud University and Vrije University Amsterdam.</p>
]]></dc:creator>

						<category><![CDATA[Business Risk]]></category>
		<category><![CDATA[Disaster Preparedness]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[Resilience]]></category>
		<category><![CDATA[Strategic Leadership]]></category>
		<category><![CDATA[Crisis Management]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leadership Skills]]></category>

				<description><![CDATA[Michael Austin/theispot.com The Research The authors conducted in-depth interviews with senior leaders with direct experience guiding large, complex systems through unexpected shocks. Their sample included a former prime minister, CEOs, board chairs and directors of multinational corporations, a central bank governor, a national chief of defense, and a national fire marshal. Participants represented a diversity [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/2026SUMMER_Aalbers-1290x860-1.jpg" alt="" class="wp-image-126223"/><figcaption>
<p class="attribution">Michael Austin/theispot.com</p>
</figcaption></figure>
<aside class="callout-info">
<h4>The Research</h4>
<ul>
<li>The authors conducted in-depth interviews with senior leaders with direct experience guiding large, complex systems through unexpected shocks.</li>
<li>Their sample included a former prime minister, CEOs, board chairs and directors of multinational corporations, a central bank governor, a national chief of defense, and a national fire marshal.</li>
<li>Participants represented a diversity of sectors, organizational types, and sizes, and different kinds of crises, allowing the authors to capture generalizable principles of effective crisis leadership.</li>
</ul>
</aside>
<p><span class="smr-leadin">Every leader wants to believe</span> that their company is prepared to handle a crisis, but when one occurs, it often reveals the weaknesses at the heart of an organization.</p>
<p>Consider Southwest Airlines. When a winter storm hit in December 2022, the company suffered a meltdown. Seventeen thousand flights had to be canceled, 2 million passengers were left stranded, and the company lost an estimated $800 million. What lay at the heart of Southwest’s troubles wasn’t bad weather but an aging, neglected IT infrastructure that led to the collapse of its scheduling systems. Communication broke down, and front-line teams found themselves improvising in isolation.</p>
<p>Contrast this with Microsoft. A major outage in March 2021 left millions of Teams, Outlook, and Microsoft 365 users without access to the cloud-based systems. Almost immediately, however, a fully integrated crisis response kicked in. The outage was contained, and services were restored that same day. Importantly, Microsoft followed up with a detailed root-cause analysis and, in the months that followed, accelerated its investments in system redundancy and incident transparency. In other words, Microsoft not only survived the crisis but also used it to become an even more resilient organization afterward.</p>
<p>Such incidents lead to an obvious question: Why do some organizations freeze in the face of a crisis while others spring into action and skillfully minimize the damage?</p>
<p>We posed this question to leaders who have faced high-stakes disruption firsthand. Our interviews included individuals who have served in high-ranking political, military, and government roles, as well as senior executives at major global companies. (See “The Research.”)</p>
<p>Based on their insights, we identified seven capabilities that any organization must develop to withstand a crisis. Because most organizations have these capabilities, if only in a partial or uneven form, we also define what each capability looks like in terms of its level of maturity. The result is what we’ve termed the 7C’s Model. (See “The 7C’s of Effective Crisis Management.”) Here, we’ll introduce the model and illustrate how it can be used as an analytical lens to review and assess an organization’s strengths and weaknesses in crisis management.</p>
<h3>The Seven Core Capabilities of Crisis Management</h3>
<p>Our interviewees saw the ability to execute the following organizational practices as vital in any crisis.</p>
<p><strong>1. Contingency. </strong>Preparing for what might happen and defining roles upfront so that people know what to do when a crisis hits is critical to survival. For example, the CEO of a leading electronics and health tech company told us that the organization was able to build muscle memory by running repeated supply chain stress tests. As a result, when the COVID-19 pandemic hit, employees knew exactly what to do, how to do it, and how to organize their response.</p>
<p><strong>2. Clarity. </strong>Transparent and clear communication is essential in a crisis. This does not mean spinning disaster into a polished press release; it means communicating early and honestly. The CEO of a major professional services organization advised leaders to communicate only what they know to be true, warning, “You can’t take back what you’ve already said.” As a former prime minister of a Western European country put it, “People can handle bad news; what they can’t handle is confusion.”</p>
<p><strong>3. Coordination. </strong>Effective leaders connect silos before a crisis. They build trust and a rhythm of collaboration so that when things go wrong, everyone can act as one and as needed. Capital One’s response to a data breach in 2019 is a good example of putting this principle into action: When a cybercriminal stole personal data on about 106 million North American customers, the company’s predefined cross-functional teams jumped into action to contain the damage and coordinated with law enforcement to enable the prompt arrest of the perpetrator.</p>
<p><strong>4. Compassion. </strong>Showing genuine empathy for all who may be affected by the crisis, both inside and outside the organization, builds credibility and confidence and, critically, buys time to deal with the situation. “Showing you care for your people and their safety builds unshakable trust,” a national chief of defense told us. “Leadership is not about being in charge. It is about taking care of those in your charge.”</p>
<p><strong>5. Confrontation. </strong>Leaders must face the hard truths early. Johnson &amp; Johnson’s 1982 Tylenol crisis is a classic case in point: Instead of denying that capsules of its pain medication had been contaminated with cyanide or delaying making a statement, the company immediately confronted the threat, recalled 31 million bottles of medicine, and began to rebuild public trust through its decisive transparency.</p>
<p><strong>6. Control. </strong>Good leaders maintain order in a crisis. That doesn’t mean centralizing every decision; it means defining decision rights in advance, clarifying who decides what, and empowering those closest to a situation to act when needed. As the former president of a European country’s central bank put it, “Control is not about micromanaging; it’s about keeping the system stable so others can act with confidence.” For example, during Hurricane Harvey in 2017, Walmart empowered local managers to direct trucks and reopen Houston-area stores as they saw fit, speeding recovery and earning customers’ trust that they could count on the retailer.</p>
<p><strong>7. Continuity. </strong>The best crisis managers don’t just move on once a crisis has faded. They run postmortems, capturing lessons that can help them build resilience and make the organization better prepared for the next crisis. Shell uses systematic debriefs after every incident to help ensure that every disruption strengthens the oil and gas company’s reflexes and makes it even more resilient.</p>
<div class="callout-highlight callout--expand">
<aside class="l-content-wrap">
<article>
<h4>The 7C’s of Effective Crisis Management</h4>
<p class="caption">Each of the seven core crisis management capabilities can be seen as developing over five stages of maturity, from reactive to strategic. The table lists typical characteristics of each stage and can be used by managers to assess their current maturity level for the seven practice areas.</p>
<table id="Chart1" class="chart-grouped-rows no-mobile">
<thead>
<tr>
<th><strong>Maturity Stage</strong></th>
<th>Contingency</th>
<th>Clarity</th>
<th>Coordination</th>
<th>Control</th>
<th>Compassion</th>
<th>Confrontation</th>
<th>Continuity</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<strong>Level I</strong><br /><strong><span>Reactive (Ad Hoc Response)</span></strong>
</td>
<td>
<p>There has been little or no preparation. Responses are improvised.</p>
</td>
<td>
<p>Messages conflict or shift unpredictably. Communication is inconsistent.</p>
</td>
<td>
<p>Silos dominate, with minimal coordination, leading to a fragmented response.</p>
</td>
<td>
<p>Control is either overly centralized or entirely fragmented, lacking clarity and consistency.</p>
</td>
<td>
<p>Fear or detachment shapes interactions. The emotional environment is unsettled.</p>
</td>
<td>
<p>Issues are denied or deflected. Challenges are not directly faced.</p>
</td>
<td>
<p>After crises, previous habits resume, with no meaningful learning captured.</p>
</td>
</tr>
<tr>
<td>
<strong>Level II</strong><br /><strong><span>Aware (Basic Planning)</span></strong>
</td>
<td>
<p>Existing playbooks guide responses but are used inconsistently.</p>
</td>
<td>
<p>Basic communication protocols exist for crisis updates.</p>
</td>
<td>
<p>Crisis teams are designated but not fully trained or practiced.</p>
</td>
<td>
<p>Decision-making is primarily centralized, reducing speed and flexibility.</p>
</td>
<td>
<p>Responses are procedural, with limited emotional awareness.</p>
</td>
<td>
<p>Problems are recognized but lack deeper engagement.</p>
</td>
<td>
<p>Debriefs occur sporadically, with limited follow-through.</p>
</td>
</tr>
<tr>
<td>
<strong>Level III</strong><br /><strong><span>Defined (Structured Execution)</span></strong>
</td>
<td>
<p>Rehearsals, scenario testing, and playbook reviews occur at set intervals.</p>
</td>
<td>
<p>Leadership messaging is aligned and timely, and sets expectations clearly.</p>
</td>
<td>
<p>Roles, responsibilities, and escalation paths are clearly defined from the top down and well understood.</p>
</td>
<td>
<p>Decision guardrails have been established. Authority has been delegated for execution.</p>
</td>
<td>
<p>Compassion is demonstrated, though caring behaviors may be inconsistent.</p>
</td>
<td>
<p>Leaders take responsibility for decisions but may not engage deeply with systemic root causes.</p>
</td>
<td>
<p>Reviews follow major events and guide process improvements. Learning occurs but is sporadic.</p>
</td>
</tr>
<tr>
<td>
<strong>Level IV</strong><br /><strong><span>Integrated (Aligned and Embedded)</span></strong>
</td>
<td>
<p>Scenario simulations are routine and integrated across teams. Preparedness is practiced regularly.</p>
</td>
<td>
<p>Messaging is coherent across channels and functions, with minimal ambiguity. There are many voices but one narrative.</p>
</td>
<td>
<p>Cross-functional orchestration ensures unified and timely action.</p>
</td>
<td>
<p>Adaptive autonomy balances local flexibility with strategic constraints. There is flexibility within clear boundaries.</p>
</td>
<td>
<p>Leadership blends empathetic considerations with decisive action.</p>
</td>
<td>
<p>Difficult issues are surfaced early and are addressed transparently.</p>
</td>
<td>
<p>Lessons learned systematically inform planning and long-range strategy. Insights are consistently captured.</p>
</td>
</tr>
<tr>
<td>
<strong>Level V</strong><br /><strong><span>Strategic (Continuous Resilience)</span></strong>
</td>
<td>
<p>Sensing of emerging risks and opportunities is continuous, and agile adaptation is embedded in planning cycles.</p>
</td>
<td>
<p>Communication is fully transparent, trust-based, and in real time across all channels.</p>
</td>
<td>
<p>There is dynamic alignment across units, with fluid adjustment to changing conditions. Unified actions occur in real time.</p>
</td>
<td>
<p>Rapid, data-driven decision-making occurs at appropriate levels, with strong staff empowerment.</p>
</td>
<td>
<p>Proactive and visible care reinforces trust and psychological safety. Care is embedded in leadership practice.</p>
</td>
<td>
<p>Principled and decisive actions strengthen credibility and long-term reputation.</p>
</td>
<td>
<p>Learning is codified and supports continuous innovation and improved resilience.</p>
</td>
</tr>
</tbody>
</table>
<p><!--IMAGE FALLBACK FOR MOBILE BELOW --><br />
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/SU26_FE_Aalbers_table_REV.png" alt="The 7C’s of Effective Crisis Management" class="no-desktop">
</p>
</article>
</aside>
</div>
<p>It’s important not to view this simple framework as a checklist. All of the capabilities must be in place to some degree because they reinforce one another and together make a system more resilient. Contingency without clarity breeds confusion; compassion without confrontation leads to inaction; control without coordination creates bottlenecks. Each element constrains or amplifies the others. Organizations that survive crises, our interviewees told us, have these seven capabilities in place, supported by strong organizational routines and culture.</p>
<h3>Maturity Matters, but Progress Can Be Uneven</h3>
<p>All organizations, to a degree, have the 7C’s. As the chair of a leading technology company told us, however, some capabilities will have been battle-tested while others are still developing.</p>
<p>In other words, crisis capabilities can be understood as maturing across five stages that we characterize as reactive, aware, defined, integrated, and strategic. In the reactive stage, organizations rely on improvisation. There are no formal plans or roles, and responses depend on individual initiative — what one CEO called “firefighting with no water.” In the aware stage, the organization recognizes the need for structure. Playbooks may already exist, but they are rarely used or tested, and planning remains largely symbolic. The defined stage marks the first step toward institutional discipline: Standardized processes, rehearsals, and scenario testing transform plans from paper exercises into operational routines. At the integrated stage, capabilities become embedded across functions; cross-departmental simulations are routine, decision rights are clear, and early warning systems link detection to coordinated action. Finally, in the strategic stage, preparedness becomes anticipatory rather than reactive. The organization develops proactive sensing mechanisms, embeds agility into its governance system, and treats resilience as a source of strategic advantage to be nurtured and protected.</p>
<p>The real power of our framework is in using it to diagnose imbalances and immaturity across a system. We’ve broken down each capability by maturity stage (see “The 7C’s of Effective Crisis Management”) so that it can be used as both an assessment and a map to guide capability-building. The goal of the exercise, however, is not to create symmetry but to build coherence. Not all organizations need to be equally mature in all of the 7C’s, but to avoid buckling under the pressure of a crisis, they need sufficient maturity across the practices. As a leader in professional services put it, “A company can survive weakness in one area, but only if the rest of the system holds together.”</p>
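<p>One way to operationalize such a review is a simple scoring pass over the seven capabilities. The sketch below is hypothetical: The 1-to-5 scale mirrors the five maturity stages in the table, but the “maturity floor” of 3 (the defined stage) is an assumed threshold for illustration, not one drawn from the research.</p>

```python
# Illustrative self-assessment sketch for a 7C's maturity review.
# Capability names come from the framework; the scoring scale
# (1 = reactive through 5 = strategic) mirrors the five maturity stages.
CAPABILITIES = ["contingency", "clarity", "coordination", "control",
                "compassion", "confrontation", "continuity"]

def diagnose(scores: dict[str, int], floor: int = 3) -> list[str]:
    """Return capabilities scored below the maturity floor -- candidates
    for capability-building. A floor of 3 ('defined') is an assumption
    made for this sketch, not a threshold from the article."""
    missing = [c for c in CAPABILITIES if c not in scores]
    if missing:
        raise ValueError(f"unscored capabilities: {missing}")
    return [c for c in CAPABILITIES if scores[c] < floor]
```

<p>The point of such an exercise is coherence, not symmetry: A single flagged capability matters most when the surrounding capabilities cannot compensate for it.</p>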
<p>As they review the framework, then, leaders should begin to assess which capabilities are underdeveloped and most likely to be a source of vulnerability in a crisis. They should also look for where capabilities are out of balance. For example, are they overcentralizing control at the cost of coordination? The following case shows the need to balance the 7C’s at different maturity levels so that the system holds under pressure.</p>
<h3>How JPMorgan Chase Rode Out the Turmoil of 2023</h3>
<p>In March 2023, when the collapses of Silicon Valley Bank, Signature Bank, and, later, First Republic triggered the most significant banking turmoil since 2008, the industry faced over $500 billion in outflows within weeks and a 20% drop in the KBW Nasdaq Regional Bank Index. Amid the panic, JPMorgan Chase — which held over $3.7 trillion in assets and $1.9 trillion in deposits — acted as a systemic stabilizer. Within hours, it activated its crisis-response architecture: a 24-hour “war room” that brought together treasury, risk, communications, and legal teams; live scenario modeling of liquidity positions; and hourly updates shared with the U.S. Treasury, Federal Reserve, and Federal Deposit Insurance Corp.</p>
<p>Seen through a 7C’s lens, the company’s crisis management maturity was evident. Coordination across divisions and regulators was immediate and disciplined, supported by preset escalation protocols and shared data dashboards. Control was strong yet flexible; regional and product teams were empowered to make lending and liquidity decisions within clear limits. Clarity came from transparent, consistent messaging. CEO Jamie Dimon publicly framed the situation as “containable but serious,” signaling confidence without denial, while daily internal briefings aligned 290,000 employees worldwide. JPMorgan Chase took action that could be seen as compassionate as it extended temporary credit facilities to smaller regional banks and prioritized retail deposit access to stabilize confidence. The crisis was confronted explicitly: Dimon and CFO Jeremy Barnum publicly acknowledged the fragility of midtier balance sheets and the need for tighter liquidity oversight. Most importantly, continuity and contingency were fully institutionalized, thanks to weekly liquidity stress tests and twice-annual simulations with the board that it had been conducting since the 2008 global financial crisis. As a result, when regulators orchestrated the sale of First Republic to JPMorgan Chase, the latter absorbed $173 billion in loans and $92 billion in deposits over a single weekend without market disruption.</p>
<p>At JPMorgan Chase, continuity was a weak spot, though it was identified and addressed early. Earlier repeated stress tests and simulations had exposed vulnerabilities in digital resilience and liquidity concentration, prompting preemptive strengthening. The main tension — between control and coordination — was managed intentionally: Global teams operated autonomously by following clear escalation ladders, avoiding both chaos and command paralysis. The result was a system that flexed under pressure and took a hit but did not break.</p>
<p>Under the 7C’s Model, maturity gives leaders the structure and confidence needed to make good decisions under pressure. Our interviewees also emphasized one overarching lesson: Practice matters. This requires an additional reinforcing factor: culture — that is, the daily reality of “how we do things around here.” Cultivating a culture that supports the 7C’s is critical. And Microsoft’s response to its 2021 outage is again a case in point. Its postmortem avoided blame, focused on learning, and used the crisis as a stepping stone to improvement. That’s important because without that type of open, learning culture, organizations are doomed to make the same mistakes the next time around. As a former CEO of a global energy company told us, “You can’t control the storm, but you can control how your people behave in it. That comes down to culture.”</p>
<p>The capabilities framework that emerged from our interviews with leaders about crisis management provides managers with a common language to assess their readiness, explore weaknesses, and prioritize the development of their capabilities. The model also highlights that crisis management is a systemic competence, not a single skill set held by a gifted leader. It reminds us that resilience depends on a balance of complementary factors, and it offers a road map to maturity, helping managers move from awareness to mastery. </p>
<p>These takeaways from our research underscore the importance of recognizing that resilience grows from rehearsal, reflection, experience, and the routines that stabilize a system when pressure mounts. As a seasoned board member at various international companies reminded us, “Resilience isn’t what you do in the crisis; it’s what you’ve built before it.”</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/level-up-your-crisis-management-skills/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>When Not to Use AI</title>
				<link>https://sloanreview.mit.edu/article/when-not-to-use-ai/</link>
				<comments>https://sloanreview.mit.edu/article/when-not-to-use-ai/#comments</comments>
				<pubDate>Mon, 30 Mar 2026 11:00:23 +0000</pubDate>
				<dc:creator><![CDATA[Benjamin Laker. <p><a href="https://www.linkedin.com/in/benlaker/" target="_blank" rel="noopener noreferrer">Benjamin Laker</a> is a professor of leadership at Henley Business School at the University of Reading.</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Communication]]></category>
		<category><![CDATA[Decision-Making]]></category>
		<category><![CDATA[Employee Psychology]]></category>
		<category><![CDATA[Management Strategy]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Talent Management]]></category>
		<category><![CDATA[Technology Implementation]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Getty Images AI promises to make managers more productive and give them access to more information more quickly. It can draft plans, summarize reports, and even coach you on how to deliver feedback. Yet the same technology that accelerates decision-making can also erode your judgment, if you let it. Rely on [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/Laker-1290x860-1.jpg" alt="" class="wp-image-126264"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Getty Images</p>
</figcaption></figure>
<p><span class="smr-leadin">AI promises to make managers</span> more productive and give them access to more information more quickly. It can draft plans, summarize reports, and even coach you on how to deliver feedback. Yet the same technology that accelerates decision-making can also erode your judgment, if you let it. Rely on artificial intelligence too little, and you miss its advantages. Rely on it too much, and you risk delegating your thinking instead of sharpening it.</p>
<p>Leading well in the age of AI is about balance. You need to know when to let algorithms lighten the load — and when to carry the weight on your own shoulders so your judgment stays strong.</p>
<h3>Where AI Helps You Think Faster</h3>
<p>AI excels at compressing time. It can scan vast quantities of information, synthesize key points, and produce first drafts of documents or presentations in seconds. Used wisely, AI accelerates the slowest parts of managerial work: gathering data, preparing materials, and finding patterns.</p>
<p>When time is tight, use AI to handle the groundwork so that you can focus on sensemaking. Let it outline a report so that you can spend your energy on the real managerial work: deciding what findings matter, what signals to prioritize, and what the implications are for strategy or next steps. Have it summarize team feedback so that you can concentrate on what action to take. Use it to prepare talking points for a performance review, and then spend your time planning your tone and delivery. This keeps you in the driver’s seat of decisions rather than buried in prep work.</p>
<p>The key is to treat AI’s output as raw material, not finished work. Skim it, shape it, and then make it yours. If you publish or present it exactly as generated, you are not accelerating your thinking — you are bypassing it. The goal is speed with discernment, not speed alone.</p>
<h3>Where AI Can Quiet Your Judgment</h3>
<p>The danger comes when speed begins to replace scrutiny. AI <a href="https://sloanreview.mit.edu/article/validating-llm-output-prepare-to-be-persuasion-bombed/">makes suggestions confidently</a>, even when they are shallow or wrong. It can lull you into skipping the second look you would normally take, which will dull your judgment over time.</p>
<p>This risk is highest when you are making decisions that depend on values, nuance, or relationships — precisely the work that defines good management. AI cannot sense the emotional weight of a change announcement, the politics around a promotion, or the fragility of a struggling employee’s confidence. It will give you an answer with no sense of the human context.</p>
<p>In hiring, for example, AI can short-list resumes in seconds, but it cannot gauge a candidate’s resilience based on how they talk about a setback during an interview. When it comes to strategy development, AI can surface competitive trends, but it cannot sense how your team will emotionally react to a bold new direction. In these moments, your presence matters more than your productivity.</p>
<p>If you notice yourself accepting AI’s outputs without editing them, slow down. Ask yourself: Would I stand by this recommendation if my name were on it alone? Would I say it out loud to someone I respect? Those questions reinsert accountability — and accountability sharpens judgment.</p>
<h3>Putting AI in Its Place</h3>
<p>You have the opportunity to make deliberate choices about how and when artificial intelligence can best serve you and your team. Here are three ways to make the most of AI — and your own skills.</p>
<h4>Automate Tasks, Not Trust</h4>
<p>A practical way to stay balanced is to divide your work into tasks and trust. Tasks are the repeatable processes that benefit from speed. Trust is the human currency of management — the beliefs, emotions, and loyalties that bind a team together.</p>
<p>Use AI on tasks. Let it draft timelines, crunch numbers, or generate slides. Do not use it where trust is paramount. Deliver feedback yourself. Write the opening paragraph of a promotion announcement in your own voice. Decide when to change a goal or approve a hire with your own mind engaged, not on autopilot.</p>
<p>This distinction keeps AI working as your tool, not your proxy. It does the mechanical work while you do the meaningful work.</p>
<p>Consider your weekly team meeting. AI can help you build the agenda, surface metrics, and compile questions from your team’s project boards. But the tone of that meeting — whether people feel heard, valued, and motivated — is yours alone to create. No algorithm can do that for you. When trust is at stake, resist the urge to outsource.</p>
<h4>Use AI to Widen Perspective, Not Narrow It</h4>
<p>Another trap is using AI only to confirm what you already believe. Because these tools are designed to be agreeable, they will happily produce arguments that support your instincts. This can make you feel more decisive while actually limiting the options you consider.</p>
<p>To avoid getting stuck in your own ideas, occasionally instruct AI to argue against your preferred option. If you are leaning toward reorganizing a team, ask for reasons not to. If you are ready to approve a budget, ask for the strongest case to reject it. This will force you to confront counterarguments before you commit — and it protects you from becoming overly certain about a decision simply because a machine echoed your view.</p>
<p>The best managers use AI to challenge their thinking, not to cushion it. They treat it as a sparring partner, not a cheerleader.</p>
<h4>Build a Personal Guardrail</h4>
<p>Even experienced managers can slip from using AI wisely to leaning on it too heavily. The shift is subtle — and it often feels like efficiency. To prevent that, build a simple guardrail: Track how much of your day involves thinking that you could not delegate. Ask yourself: Did I use AI to enhance my thinking or replace it? Did I exercise my judgment critically, or did I accept recommendations more automatically? These questions force you to notice the slope before you slide.</p>
<p>Some leaders set time blocks for “AI-free thinking” each week — no prompts, no tools, just unstructured reflection. Others limit AI use to specific tasks and keep a manual list of decisions where they want to feel the full weight of responsibility. Whatever method you choose, the point is to keep drawing on your own judgment and critical thinking.</p>
<p>Thriving in the AI era does not mean adopting it fastest but remaining unmistakably human while using it. AI can accelerate your work, but it cannot care. It can generate options, but it cannot hold responsibility. That is your job — and the more AI can do for you, the more deliberate you must be about what you still do yourself. Let the machine do the lifting, not the leading.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/the-human-side-of-ai-adoption-lessons-from-the-field/feed/</wfw:commentRss>
				<slash:comments>2</slash:comments>
							</item>
					<item>
				<title>How Morningstar’s CEO Drives Relentless Execution</title>
				<link>https://sloanreview.mit.edu/article/how-morningstars-ceo-drives-relentless-execution/</link>
				<comments>https://sloanreview.mit.edu/article/how-morningstars-ceo-drives-relentless-execution/#respond</comments>
				<pubDate>Thu, 26 Mar 2026 11:00:11 +0000</pubDate>
				<dc:creator><![CDATA[Donald Sull and Charles Sull. <p><a href="https://www.linkedin.com/in/donald-sull-1077444/" target="_blank">Donald Sull</a> (<a href="https://x.com/culturexinsight" target="_blank">@culturexinsight</a>) is a professor of the practice at the MIT Sloan School of Management and a cofounder of CultureX. <a href="https://www.linkedin.com/in/charles-sull/" target="_blank">Charles Sull</a> is a cofounder of CultureX.</p>
]]></dc:creator>

						<category><![CDATA[Corporate Values]]></category>
		<category><![CDATA[Employee Engagement]]></category>
		<category><![CDATA[Human Resources]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[Organizational Culture]]></category>
		<category><![CDATA[Organizational Learning]]></category>
		<category><![CDATA[Culture]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leading Change]]></category>
		<category><![CDATA[Skills & Learning]]></category>
		<category><![CDATA[Talent Management]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>

				<description><![CDATA[Aleksandar Savic Many investors rely on Morningstar for independent financial analysis and insights, but few people are familiar with the company behind the ratings. From Morningstar’s origins rating mutual funds, the company has expanded its product line, customer base, and global footprint and realized a tenfold increase in revenues and profits between 2005 and 2025. [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/CultureChamps_KunalKapoor-1290x860-1.jpg" alt="" class="wp-image-126096"/><figcaption>
<p class="attribution">Aleksandar Savic</p>
</figcaption></figure>
<p><span class="smr-leadin">Many investors rely on Morningstar</span> for independent financial analysis and insights, but few people are familiar with the company behind the ratings. From Morningstar’s origins rating mutual funds, the company has expanded its product line, customer base, and global footprint and realized a tenfold increase in revenues and profits between 2005 and 2025.</p>
<p>Morningstar’s culture, CEO Kunal Kapoor said, “has been one of the ingredients that has allowed us to be successful, and I think will continue to allow us to be successful.” We analyzed how 32,000 employees at 15 financial data companies described their employers in Glassdoor reviews. Morningstar employees spoke about corporate culture more frequently and more positively than their counterparts at peer companies and were particularly positive about how the organization lives its core values.</p>
<p>“Execution is everything” is one of those values. In a recent podcast, Kapoor shared three principles that help Morningstar maintain its focus on relentless execution.</p>
<h3>1. Decentralize decision-making to instill a sense of ownership.</h3>
<p>As companies grow, employees often lose the sense of ownership that drove early success. Morningstar fights this tendency by pushing accountability down to business units. “We’re a very decentralized organization,” Kapoor explained. “And part of the inspiration for being decentralized is to make sure that people own the outcomes that they’re charged with driving.” </p>
<p>Morningstar structures its compensation and organizational design to reinforce that ownership at the business unit level. “We allow a large percentage of how bonus plans are set to be driven by the performance of our business units ... even though there’s a companywide factor as well,” Kapoor noted. “We’ve also really tried to push a lot of our central services into the business units. ... We keep trying to push whatever we can into the business units so that the accountability is at that level.”</p>
<h3>2. Use transparent objectives and key results to enable difficult discussions.</h3>
<p>Many organizations struggle to surface and discuss problems early enough to nip them in the bud. “People’s natural inclination is to rush to you with good news and try not to talk about the bad news,” Kapoor observed. “OKRs are really helpful in this context. Because there’s transparency into what’s happening across the organization, it becomes a very tangible way to see why some things are not hitting in the way that they might. … We use them to look at some of our up-and-coming initiatives and see if they’re tracking or not.”</p>
<p>This transparency creates the space to have conversations about what’s not working early enough to course-correct — and is especially valuable for experiments with new initiatives. “Within any organization, you always want to be planting a few seeds,” Kapoor said. “But some of them are going to grow into weeds, and it’s fine to pull them out very quickly because you only want to be watering a few onto the next stage. … If there’s not a system of candor, a system of key results to evaluate initiatives, it becomes problematic. Being less subjective is important and allows you to kind of make those types of decisions.”</p>
<p>Transparent OKRs also provide a framework for disciplined and productive feedback discussions. “Nobody likes tough feedback,” Kapoor said, but “they will come back and thank you at some point, as long as the feedback was actionable and fair.” He structures check-ins with his team around their OKRs: “Here’s what you signed up for. Let’s go through and see where you are tracking to them, and let’s talk about what you want to do in the second half of the year.” The key, Kapoor said, is being “very disciplined around just repeating ourselves and following through on OKRs and, as a leader, talking about them publicly.”</p>
<h3>3. Instill urgency and set ambitious goals.</h3>
<p>Successful execution requires the urgency and speed to seize fleeting opportunities faster than competitors; complacency is the mortal enemy of urgency. “I don’t walk into any meeting today where I’m not pushing people to get things done faster than they think they’re going to get them done,” Kapoor said. “Otherwise, they start to think that they have infinite timelines and infinite resources just because we’re bigger. ... The shorter the timeline, the more motivated people are to go get after things.” As leaders, Kapoor said, “our mandate is to challenge and get the business units moving and to ensure that they are not being complacent.” </p>
<p>Setting ambitious goals is a powerful way to fight complacency, but many organizations struggle to set goals that are ambitious yet achievable. Kapoor explained Morningstar’s approach: “You want to have ambition in your long-term plans, and you want to set those in a way that feels difficult. ... Every quarter is a step toward achieving that three-year plan. You need to be realistic as to what needs to get done in a quarter, and you need to be super ambitious in the three-year plan.” </p>
<p>Kapoor cautioned against punishing people for missing ambitious targets, because it could discourage people from setting stretch goals in the future. “It’s important not to penalize people when they are overly ambitious and they don’t get to an outcome,” he said. “The key thing then is to have an honest reflection on why something wasn’t achieved.”</p>
<p><em>Want to hear more advice from Kapoor? Watch this conversation and the entire series on the <a href="https://www.youtube.com/@culturexculturexculturex" target="_blank">CultureX YouTube channel</a>, on <a href="https://open.spotify.com/show/6oSF9YHbZGhj8UHrFE6mCf?si=8bb3324edb1f4e44" target="_blank">Spotify</a>, or on <a href="https://podcasts.apple.com/us/podcast/culture-champions-by-culturex/id1774969910" target="_blank">Apple Podcasts</a>.</em></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/how-morningstars-ceo-drives-relentless-execution/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>An AI Reckoning for HR: Transform or Fade Away</title>
				<link>https://sloanreview.mit.edu/article/an-ai-reckoning-for-hr-transform-or-fade-away/</link>
				<comments>https://sloanreview.mit.edu/article/an-ai-reckoning-for-hr-transform-or-fade-away/#respond</comments>
				<pubDate>Wed, 25 Mar 2026 11:00:53 +0000</pubDate>
				<dc:creator><![CDATA[Brian Elliott. <p><a href="https://www.linkedin.com/in/belliott/" target="_blank" rel="noopener noreferrer">Brian Elliott</a> is an executive adviser and speaker. He’s the CEO of <a href="https://www.workforward.com/" target="_blank" rel="noopener noreferrer">Work Forward</a> and author of the Work Forward newsletter.</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Digital Transformation]]></category>
		<category><![CDATA[Employee Performance]]></category>
		<category><![CDATA[Human Resources]]></category>
		<category><![CDATA[Human-Machine Collaboration]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Automation]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leading Change]]></category>
		<category><![CDATA[Talent Management]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Getty Images For decades, human resource leaders have talked about the need to shift their focus from having responsibility for compliance to acting as architects of talent strategy. And for decades, the pattern of HR being stuck in age-old roles has persisted. But there is new pressure to redefine the role. [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/Elliot-1290x860-1.jpg" alt="" class="wp-image-126195"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Getty Images</p>
</figcaption></figure>
<p><span class="smr-leadin">For decades, human resource leaders</span> have talked about the need to shift their focus from having responsibility for compliance to acting as architects of talent strategy. And for decades, the pattern of HR being stuck in age-old roles has persisted. </p>
<p>But there is new pressure to redefine the role. Thanks to artificial intelligence, the gap between what HR currently is and what it could be has never been wider. AI hasn’t created this divergence, but it’s making it impossible to ignore. </p>
<p>Recent conversations I’ve had with dozens of chief HR officers (CHROs) indicate that they are facing a clear fork in the road. One path leads to a weakening of the HR role, with more functions automated (like onboarding and learning) or taken on by other business leaders armed with new tools (like skill-based screening during recruitment). The other path leads to the kind of evolution that elevates the HR role, with HR not just taking the lead in driving organizational transformation and engagement but taking ownership of the ways in which incoming and current employees interact with AI.</p>
<p>The path organizations end up on will depend on whether HR can prove its strategic value before business leaders decide they don’t need it.</p>
<h3>Technology’s Mixed Role in HR’s Status</h3>
<p>The HR function has become the place where organizations send the people problems that other leaders prefer not to own. Kit Krugman, the senior vice president of people and culture at Foursquare, noted that “the genesis of HR was the genesis of management simultaneously,” born in the post-industrial era, with the view of humans as capital to be optimized — hence the poorly aged term “human resources.” Over time, HR’s mandate expanded to learning, engagement, and culture, but the core programs persisted. That explains the unusual bundle of responsibilities HR teams have accumulated: compliance enforcement and culture building, policy sharing and employee advocacy, benefits management and hiring processes, termination supervision and engagement assessment. </p>
<p>Each wave of HR technology over the past 25 years promised to help tame part of the job: HR information systems automated recordkeeping, applicant-tracking systems digitized hiring workflows, and learning management systems scaled training delivery.</p>
<p>Having to keep up with all of that made it hard for HR leaders to get ahead and focus on bigger-picture tasks. Eric Severson described walking into a room filled with dozens of binders of performance reviews, meticulously completed, filed, and tracked, when he was head of HR at The Gap more than 10 years ago. The HR team was proud of reaching 98% compliance. But the reviews didn’t answer questions like whether the company was <a href="https://sloanreview.mit.edu/article/how-to-get-real-about-measuring-to-outcomes/">reducing unwanted attrition</a> or whether employees were developing new skills. The metric was completion, not impact, and the entire apparatus of performance management — the forms, the ratings, the calibration sessions, the documentation — seemed to have become its own purpose. </p>
<p>Because HR has historically been treated as a cost center rather than a strategic partner, it’s especially in the crosshairs of AI technologies looking to automate away costly humans and labor-heavy tasks. Artificial intelligence introduces something different from previous technology waves: It automates content creation and analysis, not just transactions. AI systems can now draft job descriptions, screen job applications, analyze compensation data, answer policy questions, and facilitate coaching conversations.</p>
<p>The <a href="https://cms.vistapointadvisors.com/system/uploads/fae/file/asset/727/HR_Tech_Quarterly_Report_Q4_2024.pdf" target="_blank" rel="noopener noreferrer">HR technology market</a> is projected to grow from $40 billion in 2024 to over $82 billion by 2032. Much of that growth will come from tools that can take on work HR professionals currently perform. The question is whether HR will lead its own transformation or have change imposed on it.</p>
<h3>How HR Leaders Must Respond</h3>
<p>The risk of <a href="https://sloanreview.mit.edu/article/ai-unlocks-new-power-for-employees-are-hr-leaders-ready/">AI displacing HR</a> is one that people have predicted for years. Now it is no longer hypothetical: Business partners are using AI tools for work HR has traditionally owned. </p>
<p>As HR leaders consider how to lead through this shift, it’s important to note the technology’s limitations. AI cannot determine why high performers are quietly job hunting, why innovation has stalled in a particular team, or how to rebuild trust after a failed reorganization. Addressing such challenges requires an understanding of human motivation and the ability to codesign solutions — capabilities that remain distinctly human. AI can identify patterns of discord in a company faster than any analyst, but redesigning the systems that produce those patterns requires different capabilities.</p>
<p>With that context in mind, here’s how HR professionals can realize the potential to become strategic partners in their organizations as they face the pressure of AI coming for their jobs.</p>
<p><strong>Train for strategic thinking.</strong> Specialist HR career paths that reward depth in narrow domains like recruiting, compensation, or compliance often provide limited exposure to strategic thinking until those jobholders attain senior leadership roles. By then, HR practitioners have spent years reinforcing transactional approaches. “What this job requires is the ability to understand organization systems and group dynamics,” Foursquare’s Krugman said. HR hiring has historically prioritized interpersonal warmth over analytical ability, but “getting along with everyone might actually be a challenge in this role,” Krugman said.</p>
<p><strong>Lean in to the right metrics and data.</strong> In their 2007 book <cite>Beyond HR: The New Science of Human Capital</cite>, John W. Boudreau and Peter M. Ramstad argue that HR needs to develop a decision science comparable to what finance built around ROI and marketing built around customer value. The authors documented implementations at Disney, Corning, and other organizations that had connected talent investments to strategic outcomes. Nearly two decades later, Severson, now an executive coach, uses their book to educate future CHROs and told me that such examples remain exceptions rather than standard practice.</p>
<p>As former Levi Strauss & Co. CHRO Tracy Layney put it to me, HR leaders should be held accountable for people outcomes with the same rigor “as you would around financial rigor, around customer rigor, around all your marketing metrics.”</p>
<p><strong>Jettison low-value work.</strong> “HR never met a program it didn’t like,” Layney noted. But are all those programs necessary? Samantha Gadd, founder of employee experience consultancy Humankind, told me that she recommends an exercise for HR teams: “Choose a wall and put all the initiatives that everyone’s working on up there, and then ask, ‘If we stopped doing some of these, what would employees actually notice?’” Gadd said that HR practitioners should look to eliminate “activity without outcomes,” such as engagement surveys that generate reports but not action. </p>
<p><strong>Incorporate AI where it makes sense.</strong> AI may be able to answer routine inquiries and perform initial screenings of job candidates faster and more consistently than HR professionals. Wharton’s <a href="https://www.youtube.com/watch?v=O0wSHdgMbMM" target="_blank" rel="noopener noreferrer">Ethan Mollick has noted</a> that “people are turning to AI all the time as a coach, for help with work,” and he’s written that <a href="https://www.oneusefulthing.org/p/latent-expertise-everyone-is-in-r" target="_blank" rel="noopener noreferrer">everyone is in R&D</a> when it comes to the technology. “The source of any real advantage in AI will come from the expertise of their employees, which is needed to unlock the expertise latent in AI,” Mollick said. Could HR leadership spearhead experimentation with generative AI within their companies? Too many HR teams have seen their own ranks cut and are struggling to keep up with their workloads, leaving little capacity to acquire the capabilities needed to take that on.</p>
<p><strong>Retain and elevate people-centric tasks.</strong> HR leaders should focus on core experiences like learning, from onboarding through C-level leadership, in skills-development programs cocreated with employees. They should build systems of outcomes-based accountability instead of policy processes, and they should position themselves as the leaders who are able to address problems that algorithms cannot resolve, and to create measurable business value.</p>
<h3>Two Paths Forward</h3>
<p>The HR function, as we’ve known it, will be forced to choose one of two ways forward. </p>
<p>The first path leads to marginalization. In this scenario, AI handles more and more transactions, and line managers are supplemented by AI tools to address routine people questions. HR contracts, becoming a compliance function that handles the emergencies that continually arise. The strategic space HR never fully claimed gets allocated elsewhere.</p>
<p>This is essentially the status quo, turbocharged. It won’t be a surprise to see HR teams in many companies go this route: <a href="https://www.shrm.org/enterprise-solutions/insights/hr-maturity-weighs-on-business-outcomes" target="_blank" rel="noopener noreferrer">SHRM research</a> found that only 1 in 8 HR teams operates at a high maturity level, which includes the ability to apply data well and to hold on to the right people, among other criteria. The average score across HR organizations was just 3.85 out of 6.00 on the maturity assessment.</p>
<p>The second path leads to what Krugman called an “internal organizational effectiveness engine,” meaning a function staffed with designers, strategists, and systems thinkers who operate as internal consultants. Their job is to scope out problems, establish useful metrics, run experiments, and iterate based on results. This HR function also uses AI to automate transactional work, but it focuses on aligning directly to business objectives the way that innovation teams do, so that humans can focus on system design.</p>
<p>Humankind’s Gadd framed the choice as a shift from expertise to facilitation — from being “the answer people” to recognizing that “the solutions you seek lie in the population you serve.” It means asking better questions, through direct employee conversations rather than survey instruments. It means designing with employees rather than for them.</p>
<p>The structural traps that have shaped HR — like absorbing all of an organization’s people problems and following career paths that reward specialization over systems thinking — have produced a role that HR leaders need to extract themselves from to evolve.</p>
<p>The proactive stance holds more promise: Lean in to the potential for AI to take on the task of answering policy questions, shaping job descriptions, and compiling insights for performance and career conversations. That should free up more time for the coaching, human-centered design, and organizational-insight work that’s long been neglected as the urgent has crowded out the important. </p>
<p>Change also requires a shift in other leadership roles. Yes, there’s a critical need for HR leaders to learn about and adopt business metrics and to tie design to outcomes. But the same is true in reverse: Functional leaders need to take more responsibility for their own teams’ people strategies, performance, and outcomes.</p>
<p>AI will change the HR function regardless of whether HR professionals lead that change. The question is whether these leaders will experience AI as something that happens to them or harness AI to drive their own transformation.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/an-ai-reckoning-for-hr-transform-or-fade-away/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Shifting AI From Fear to Optimism: U.S. Department of Labor’s Taylor Stockton</title>
				<link>https://sloanreview.mit.edu/audio/shifting-ai-from-fear-to-optimism-u-s-department-of-labors-taylor-stockton/</link>
				<comments>https://sloanreview.mit.edu/audio/shifting-ai-from-fear-to-optimism-u-s-department-of-labors-taylor-stockton/#respond</comments>
				<pubDate>Tue, 24 Mar 2026 11:00:52 +0000</pubDate>
				<dc:creator><![CDATA[Sam Ransbotham. <p><cite>Me, Myself, and AI</cite> is a podcast produced by <cite>MIT Sloan Management Review</cite> and hosted by Sam Ransbotham. It is engineered by David Lishansky and produced by Allison Ryder.</p>
<p><a href="https://sloanreview.mit.edu/sam-ransbotham/">Sam Ransbotham</a> is a professor in the information systems department at the Carroll School of Management at Boston College, as well as guest editor for <cite>MIT Sloan Management Review</cite>’s Artificial Intelligence and Business Strategy Big Ideas initiative.</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Employee Performance]]></category>
		<category><![CDATA[Employee Productivity]]></category>
		<category><![CDATA[Labor]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Skills & Learning]]></category>
		<category><![CDATA[Talent Management]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>

				<description><![CDATA[In this episode of the Me, Myself, and AI podcast, host Sam Ransbotham speaks with Taylor Stockton, chief innovation officer at the U.S. Department of Labor, about how artificial intelligence is reshaping the workforce. Taylor emphasizes that AI is having an economywide impact, transforming tasks within nearly every job rather than affecting only certain industries [&#8230;]]]></description>
								<content:encoded><![CDATA[
<p>In this episode of the <cite>Me, Myself, and AI</cite> podcast, host Sam Ransbotham speaks with Taylor Stockton, chief innovation officer at the U.S. Department of Labor, about how artificial intelligence is reshaping the workforce. Taylor emphasizes that AI is having an economywide impact, transforming tasks within nearly every job rather than affecting only certain industries or specific roles. He stresses the importance of helping workers and businesses adapt. </p>
<p>He also argues that AI literacy is becoming a foundational skill and should be prioritized alongside soft skills like relationship building, which will remain essential for differentiation in an AI-driven economy. Taylor calls for shifting the public narrative from fear to optimism, toward highlighting the ways that AI expands opportunity, mobility, and meaningful work, instead of deepening uncertainty.</p>
<aside class="callout-info">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/02/MMAI-S13-E2-Stockton-US-DOL-headshot-600.jpg" alt="Taylor Stockton"/>
<h4>Taylor Stockton, U.S. Department of Labor</h4>
<p>As the chief innovation officer of the U.S. Department of Labor, Taylor Stockton leads an exploration into how artificial intelligence and emerging technologies impact the labor market and American workers, as well as what new innovations can support workers in achieving the American dream. Stockton cofounded venture capital firm Pathway Ventures, which focuses on the future of work, and was the chief operating officer of an AI-powered workforce development company. He received his bachelor’s degree in management from Boston College and his MBA from Harvard Business School.</p>
</aside>
<p>Subscribe to <cite>Me, Myself, and AI</cite> on <a href="https://podcasts.apple.com/us/podcast/me-myself-and-ai/id1533115958" target="_blank" rel="noopener">Apple Podcasts</a> or <a href="https://open.spotify.com/show/7ysPBcYtOPVgI6W5an6lup" target="_blank" rel="noopener">Spotify</a>.</p>
<h4>Transcript</h4>
<p><strong>Allison Ryder:</strong> In the summer of 2025, the U.S. federal government released its “AI Action Plan.” Today, we talk to one of the executives behind it, from the U.S. Department of Labor, to understand how the agency is thinking about labor trends nationwide in the age of AI.</p>
<p><strong>Taylor Stockton:</strong> I’m Taylor Stockton from the U.S. Department of Labor, and you’re listening to <cite>Me, Myself, and AI</cite>. </p>
<p><strong>Sam Ransbotham:</strong> Welcome to <cite>Me, Myself, and AI</cite>, a podcast from <cite>MIT Sloan Management Review</cite> exploring the future of artificial intelligence. I’m Sam Ransbotham, professor of analytics at Boston College. I’ve been researching data, analytics, and AI at <cite>MIT SMR</cite> since 2014, with research articles, annual industry reports, case studies, and now 13 seasons of podcast episodes. In each episode, corporate leaders, cutting-edge researchers, and AI policy makers join us to break down what separates AI hype from AI success.</p>
<p>Hey, listeners. Thanks again to everyone for joining us. I’m excited to be talking with Taylor Stockton, chief innovation officer at the U.S. Department of Labor. He leads the exploration of how AI and emerging technologies are affecting the labor market. He’s got [an] interesting background, and I hope we can get into it, but most importantly, he and I met when he was an undergraduate student at an introduction to IS course I taught. Because of this, I feel completely justified in taking full credit for any of Taylor’s successes. And I hope our listeners realize that’s completely ridiculous. Taylor, great to talk to you again after so long. </p>
<p><strong>Taylor Stockton:</strong> Sam, it’s great to see you. I feel like I should still say, “Professor Ransbotham.” But it’s great to reconnect so many years later. </p>
<p><strong>Sam Ransbotham:</strong> After this much time, I think we can go with “Sam.” Let’s start with your current role. You’ve got a great overview of the economy right now. What’s artificial intelligence doing to the economy? </p>
<p><strong>Taylor Stockton:</strong> It’s such an exciting time to be at the Department of Labor because I think, in many ways, technology, especially because of AI, is reshaping what labor is more than any inflection point in American history. I think what we’re seeing in the economy is that AI’s impact is not specific to a certain sector or a certain occupation. It is truly economywide. And even if there are jobs that won’t increase or decrease dramatically because of AI, every job is being transformed. So I think our role at the Department of Labor is to say, “How do we have to evolve ourselves in terms of programs and policies to make sure that businesses and workers can really benefit from AI’s benefits and navigate its challenges as well?”</p>
<p><strong>Sam Ransbotham:</strong> Well, that’s the crux of the whole problem, though — getting the good but not getting the bad there. Let’s get a little detail: What kinds of changes are happening? What kinds of things are you seeing? </p>
<p><strong>Taylor Stockton:</strong> We’re working with a lot of different business and industry associations across the economy, and we really believe that AI can shape [the] benefits of productivity and job growth. So we want businesses to be able to have the tools and resources to adopt this technology and integrate it within their companies. </p>
<p>Part of the challenge that we are hearing from a lot of these businesses in terms of adopting AI is the change management. The barrier itself is not actually the capabilities of the technology. In many cases, it is traditional change management processes of how do you get a workforce to buy in to the benefits that this technology can bring to them, not just to the enterprise overall? But then how do you translate those benefits through to the different workflows, through to the different job descriptions and org charts? </p>
<p>I remember from my days in management consulting right out of Boston College, some of these systems for a large enterprise to incorporate might take multiple years, just for one system. And as you’re looking at AI, needing to reshape all of the systems within an enterprise, all of the workflows, all of the job descriptions, these are, in many cases, things that are going to take multiple years. But our role as the Department of Labor is to make sure that we provide the resources and the funding and the guidance that can hopefully help accelerate that a little bit, such that workers and businesses can see these benefits even earlier. </p>
<p>I think a lot of what we are seeing is that there’s a lot of industries where AI’s capabilities increasingly can do different tasks that are core to certain roles. You see these knowledge workers such as accountants and legal professionals and management consultants, and suddenly, the new AI models are really good at reviewing long documents and summarizing documents and making small edits. And they can do these things in a much more rapid way than humans can. So I think the big question on people’s minds is “What does that mean? Does that mean the jobs are going away?” </p>
<p>I think what we’ve actually seen, though, is that the change … being experienced in the economy is that roles are shifting, and the tasks within each role and within each occupation are shifting. AI and these AI applications can increasingly take on some of these aspects of work and actually shift the roles that humans take on toward — our hope is — more meaningful and more fulfilling work that only humans can do. </p>
<p><strong>Sam Ransbotham:</strong> I think that’s the hope here. Now the tough part’s always in the details. I’ve got kids that are in high school right now. We teach people in college. What should we be telling people to do now that’s different than what it was back a decade ago, when you and I met?</p>
<p><strong>Taylor Stockton:</strong> I think the first thing that comes to mind for me, as someone who’s been in the startup world, is I think entrepreneurship and small business ownership are more possible and more feasible than any time before. I think the reason is a lot of these AI applications allow for some of the automation of back-office functions where, maybe in the past, entrepreneurs would have had to raise capital or take a lot longer to really build the infrastructure to launch a business. Suddenly, you can create a web page in 15 minutes. You can file the forms in a much more streamlined way. So I think I’m personally excited about some of that entrepreneurship and encouraging young people to see how feasible some of those paths are. </p>
<p>But what I would also say is that even though AI is transforming some of these roles massively, there’s a lot of tasks that I think AI can’t do that are more in this category of soft skills that people haven’t maybe focused on as much in the past, which is to say relationship building, trust building, all of these skills that are across a lot of industries and roles. AI can’t replace that. And I think it’s going to be only more important as AI automates other parts of the job. So I would encourage young people to think about, regardless of the industry that you’re working in, how do you make sure you develop those relationship-building skills and other soft skills that may be even more important in the age of AI?</p>
<p><strong>Sam Ransbotham:</strong> Yeah, I buy that. At the same time, part of me also wonders about technical skills. When some new AI something or other comes out, I feel like people [who] have deeper technical backgrounds are going to be better able to assimilate the new thing going on. And that’s almost a complete counter to the focus on soft skills. How do we reconcile that? I don’t know what the right answer is on that. </p>
<p><strong>Taylor Stockton:</strong> I think you’re right, and I don’t think it necessarily has to be reconciled. I think both of those things can be true at once. Briefly, looking back to the class we took together 16 years ago, part of what we looked at was the basics of Microsoft Excel, the basics of different software in a business context, as students thought about different paths in the business world. </p>
<p>I think a lot of those same types of views are relevant here to say, “What are the core AI literacy skills? What are the core AI skills individuals need to have in all areas of the economy, in a health care context, in a manufacturing context, in an accounting context? What [do] AI literacy and AI skills development look like to make sure that [people are very comfortable with] the tools and workflows that are increasingly common in the age of AI?” In my mind, it is both the soft skills as well as the ability to manage a lot of the AI tools that will increasingly be prevalent across the economy. </p>
<p><strong>Sam Ransbotham:</strong> That’s a tough answer to everybody listening, because then it’s not A or B. What you’re saying is A and B. Naively, it’d be great if I knew everything, but you’ve got to make choices about where you spend [your time]. Let’s say you have one hour this afternoon to spend on something. Should you spend it on a soft skill, or should you spend it on a technical skill? Where does that incremental marginal hour go? </p>
<p><strong>Taylor Stockton:</strong> You’re pushing me to prioritize here. I will say I know I started with soft skills, but I think the Department of Labor believes that AI literacy and foundational AI skills truly are going to be the gateway to opportunity in the AI economy. So if you force me to prioritize, I’ll go with the AI literacy skills, because we’re seeing so many new jobs [being] created, new forms of productivity [being] unlocked across the economy, but I think we recognize that a lot of that is only going to be possible for workers if they have those foundational skills. So it’s been a massive push for us to say, “How do we make those core AI literacy skills as accessible as possible across the economy?” </p>
<p><strong>Sam Ransbotham:</strong> One thing that feels like a nightmare to me is all this stuff changes so quickly. I’m sure that everyone’s having trouble with that, but what kinds of things are you doing within the Department of Labor to try to keep your finger on that pulse? </p>
<p><strong>Taylor Stockton:</strong> It’s a terrific point, Sam, because I think a lot of the conversations that we have [are] around the challenges of AI. And there’s a lot of headlines about some of this doomerism about mass job loss, which we’re not seeing in the data, and we don’t think it’s going to be the case. But to your question, a lot of times we note that the biggest challenge in our mind around AI and work is going to be the speed of change, because a lot of the cycles of education systems and workforce systems and enterprise transformation are so much longer than the speed of change of AI.</p>
<p>Some of these cycles in an enterprise — [for looking at] strategy or enterprise transformation or systems — change maybe once a year, maybe once every few years, but there [are] new AI models and new AI applications every six weeks. So I think the core capability that we are both encouraging businesses to think about — but also the capability that we are trying to think about ourselves — is agility. </p>
<p>To your question of specific projects and initiatives on our side, the big initiative that we’re about to launch is called the AI Workforce Hub, which is really going to be a little bit of an R&D lab around how we support workers in the age of AI with a core capability … being how we collect the right data around how AI is impacting the labor market, what we’re seeing from an AI adoption standpoint, [and] what we’re seeing from a productivity and time-saving standpoint. To your point, a lot of these metrics have not been available in the past, certainly not at the speed needed to truly measure in the age of AI. So we’re super, super excited to launch that initiative and make sure that we’re able to support businesses and workers in this faster-moving economy. </p>
<p><strong>Sam Ransbotham:</strong> Talk more about that initiative. When is that happening? How long does that process take? Give us some details. </p>
<p><strong>Taylor Stockton:</strong> It’s been in the works for a while. It was originally announced in the White House’s “AI Action Plan” that came out in the summer of 2025, and then, now early 2026, we’re looking to launch as soon as possible. I think the overall vision is to have that type of agility that can not only take in more information in real time about how AI is impacting work [and] not just have it be a passive research exercise [but to] really be something that’s a research and innovation exercise to say, “Let’s translate the data that we’re seeing into new policies, into new types of resources, and into funded innovation pilots to say, for example, if there are challenges that we’re seeing in the data around entry-level workers and the types of skills that perhaps entry-level workers need in an AI-driven economy, let’s also have a set of funded innovation pilots that further explores new models to support those individuals.” </p>
<p>So I think that type of research and data collection is one part of it. But that muscle of action, whether it’s through policy, through guidance, or through experimenting new models, is something that we’re super, super excited about. </p>
<p><strong>Sam Ransbotham:</strong> It does seem like something that would be very nice to do at a national level, given that we don’t want everyone out there making these idiosyncratic, duplicative efforts, and that’s exactly the sort of role that we would hope [serves] the common good.</p>
<p><strong>Taylor Stockton:</strong> What I would also say is that part of the vision is trying to address a challenge that we see right now, which is that the narrative around AI and work feels very fragmented. It feels very speculative. Everyone has their own thought leadership that they’d like to share on LinkedIn of what they think [will happen with] the AI workforce in five to 10 years. And that’s OK. I don’t want to discourage that or look down upon that. But it is also something I think businesses and state workforce agencies struggle with sometimes, to say, “How should we make decisions when there’s so much noise around the possible outcomes for workers and for businesses?” I think our goal is, to your point, how do we use our role as the U.S. Department of Labor to really be the signal through the noise and really be a central source of truth that businesses, that workers, and that state and local governments can come to really understand what’s happening and the possible levers to support workers?</p>
<p><strong>Sam Ransbotham:</strong> I like that because at the core of this, there’s a lot [of] new [things] going on, and we’ve focused a little bit here on the speed at which it’s happening, which I think is important, but there’s also just the fundamental problem of: It’s new. One of the analogies that I think about is, we’re very good at measuring stuff like how many things we make. When we’re talking about how many cars we make, how many X we make, we’ve got great metrics in our world about counting how many of those we make, and we know how much we sold a car for. But when a company makes an open-source AI model that it provides to people out there that literally billions of people use, we don’t have good metrics around how much value that’s creating. If we fail to measure these things, then it’s going to be hard for the Department of Labor to figure out if an initiative is working or not.</p>
<p>I’m going to switch back to your comments about entrepreneurship, because I think those are fascinating. When these tools are available to everybody, how does one entrepreneur differentiate themselves from everyone else? </p>
<p><strong>Taylor Stockton:</strong> I think it’s a great point. Two things are true in the age of entrepreneurship and the age of AI: It’s easier to get started and perhaps easier to get off the ground. Perhaps that’s more commoditized now, being able to get going, build a web page, launch a product, get initial feedback from the market. But to your point, others will be in the same position, so there will still be difficulty and challenges in scaling up and further differentiating. </p>
<p>Perhaps, counterintuitively in a certain way, in an age where technology is so abundant, and AI-generated content and products and services are so abundant, it may come back to humanity and relationship building. In many products and services, consumers and enterprises may still prefer the solution that, yes, has the great AI-generated content but also has someone you can go to, a human you trust and have actually built a relationship and a rapport with. So I’m hopeful [that] among the different aspects of differentiation, [there] will still be the human element. </p>
<p><strong>Sam Ransbotham:</strong> I was reading about your registered apprenticeship programs and some of these things. Like you say, there’s a change management process both in industry and in society. I don’t know how all these things are going to play out. How long does that take? How is that going to happen? </p>
<p><strong>Taylor Stockton:</strong> We put out a big report as the Department of Labor alongside the Department of Education and the Department of Commerce called “America’s Talent Strategy.” Part of what we outlined in that strategy is that the traditional idea of different pathways to economic opportunity is broken. And we need radical change to really make sure that individuals are able to see pathways into the workforce. We think that this notion of the “college for all” movement that was true for a while didn’t work. Higher education and four-year degrees will still be a great path for many people, but there’s also a lot of other paths that may make sense depending on their interests, depending on their context. I think part of the value of registered apprenticeships is that you’re learning on the job, but you’re also getting paid from the very beginning. </p>
<p>You’re not only not taking on debt, you’re getting paid. And there’s also not the risk of this mismatch that I think we so often see, whether it’s higher education or a boot camp or training program. Sometimes, individuals get to the other side, and they figure out the hard way that the skills that they’ve built, unfortunately, still aren’t fully connected to the skills that employers may be looking for. So the beauty of work-based learning models like registered apprenticeships is that you’re learning on the job, and you’re learning those skills that are so deeply intertwined with where the workforce is moving. That’s one of the reasons that we’re really investing in that model, to make sure more workers and more businesses can benefit from it. </p>
<p><strong>Sam Ransbotham:</strong> You’ve alluded a couple of times to some of your past consulting, and [you] phrased one of your jobs as “Hey, that’s before AI took over for that.” Take us [through] a little bit of history of what happened since Boston College. What have you been doing? How did you end up at the U.S. Department of Labor from an intro to IS class? </p>
<p><strong>Taylor Stockton:</strong> Well, again, I credit any future success back to Professor Ransbotham. </p>
<p><strong>Sam Ransbotham:</strong> Got that on the record. </p>
<p><strong>Taylor Stockton:</strong> I fell in love with the idea that education is one of the greatest levers to unlock economic opportunity and this idea of the American dream. As I looked more into education and the concept of the American dream, I looked at this dynamic of technology reshaping the workforce in such a profound way that really requires us to totally change and transform the way that we approach education workforce development. So coming out of Boston College, I did start in consulting but with a specific focus on education and workforce projects. </p>
<p>From there, I actually moved to South Africa for a couple years at an education startup that was thinking about what is the future of K-12 education, and how do we make sure that we’re embedding technology in the way that we prepare students for the future workforce? From there, I did get my MBA at Harvard Business School, but while I was getting that MBA, [I] started the Future of Work Club [and] started a future of work <a href="https://robotsarecoming.org/" target="_blank" rel="noopener">blog</a> that may or may not be somewhere still on the internet. </p>
<p>I helped start a workforce tech company that partnered with government agencies at the local and state level to address some of these issues and [say,] “Hey, how can we actually use technology to better match job seekers [who] are looking for jobs, looking for retraining, and businesses that are hiring in the economy?” So I was thinking about a lot of these issues, [which are] outside of government in the private sector, but because we were partnering with government agencies, I began to see the tremendous role and influence that government agencies have in really shaping the type of innovation that’s possible to support businesses and workers. So I was really grateful for the opportunity, especially from the deputy secretary of labor, who’s leading on a lot of these AI issues, and [I was] able to have conversations with him about leading a portfolio here, to really double down on the areas that the Department of Labor has focused on [in] the past and really make this a core part of how we think about the agency’s future. </p>
<p><strong>Sam Ransbotham:</strong> One of the phrases I liked from your startup, too, was a “GPS for your career.” I like that, because earlier I was kind of pushing you on “If I have one incremental hour, where do I spend it?” I worry about that a lot, and I think about that a lot. It’s uncertain for me: What’s the best next thing for me to do? There’s one thing that I could spend an hour [with] this afternoon that would really help me, but what is that? If I could have that GPS, it feels like that’s a great mission. </p>
<p><strong>Taylor Stockton:</strong> One of the pillars that we spoke about in our big workforce report, <a href="https://www.dol.gov/sites/dolgov/files/OPA/newsreleases/2025/08/Americas-Talent-Strategy-Building-the-Workforce-for-the-Golden-Age.pdf" target="_blank" rel="noopener">“America’s Talent Strategy,”</a> is worker mobility. I think part of what we observed and reflected on is that individuals need to be able to move through the economy, perhaps so much more than they did in the past. A lot of people [are] examples of how much more they’ve moved in their careers compared with their parents. My mom worked for a real estate company for 42 years. I’ve already had more jobs than her, still feeling relatively early in my career. </p>
<p>So I think because of that, the need is navigation. There’s a need to say, “How am I able to see the different possible pathways that I might be able to take?” But, also more critically, not just what those pathways are, but how to get there, and how do I further equip myself with the right skills and right resources to better set myself up for those future options that I’m really interested in? What our startup did — and many other startups are doing — is say, “How do we use technology and AI to personalize that navigation experience to really support people in those future endeavors?” </p>
<p><strong>Sam Ransbotham:</strong> That personalization level seems critical. Let me transition a bit here. We have a segment where we ask you some short questions, rapid-fire. What’s the first thing that comes off the top of your head? What’s moving faster or slower about artificial intelligence than you expected? </p>
<p><strong>Taylor Stockton:</strong> I think part of what’s moving faster about artificial intelligence is the underlying model itself, the capabilities itself. It feels like there’s a new model [or] there’s a new feature every week, every two weeks, especially with the competition. But the challenge or the slower speed is the infusion within the enterprise. Again, I think there’s this hope by technologists that enterprises will just automatically adopt every new feature, whereas in reality, there’s a lot more cycles to getting to full usage. </p>
<p><strong>Sam Ransbotham:</strong> What about artificial intelligence frustrates you the most? </p>
<p><strong>Taylor Stockton:</strong> I think I am a perfect example of someone who constantly gets in arguments with my [large language models]. I am very pro-AI. I think there’s going to be a lot of benefits, but they are certainly not perfect yet. They certainly still make mistakes, and I may have even raised my voice a few times when interacting with them. </p>
<p><strong>Sam Ransbotham:</strong> I think we’ve all been there. How are people approaching artificial intelligence wrong?  </p>
<p><strong>Taylor Stockton:</strong> I wouldn’t necessarily say “wrong,” but I worry. One of the things that I worry about is that there are still too many people and too many businesses, especially small businesses, that are sitting on the sidelines and that are waiting, doing a wait-and-see approach to say, “Let’s see how this evolves before I jump in.” This is a wave that is not going to slow down. It’s only going to accelerate. So I would encourage all individuals and all businesses to, even if you’re busy, even if there are other priorities, [ask] how you [will] make time to really make sure you’re building the skills to make sure you’re not left behind in a lot of the benefits that we’re going to see. </p>
<p><strong>Sam Ransbotham:</strong> That’s great because we’re all busy. We all fill every day. It’s really hard to take that incremental hour to do anything new and different. I find that true. What do you wish that AI could do better? </p>
<p><strong>Taylor Stockton:</strong> I think a lot of the areas of AI that I’m most excited about are in scientific and medical research. I think there is so much promise to really address issues that affect a lot of people’s lives, and I think it’s something that we’re seeing some investment by AI companies, but I wish the emphasis and the prioritization would be even more. There [are] sometimes new features or new product offerings that they launch, and you say, “Is this really the greatest impact option for humanity?” I won’t name any, but perhaps your listeners can think of a few. [On the] medical research side, again, there [are] so many people [who] suffer from long-term diseases that I think AI, if used the right way, can save lives. And I’m really hopeful about that. </p>
<p><strong>Sam Ransbotham:</strong> I agree with that. That makes sense to me, because if you think back a hundred years ago, we were still sort of barely doing surgery. We were barely using anesthesia. We’ve had massive advances over the last hundred years. It’s kind of exciting to think what the next hundred could bring. </p>
<p>What should I have asked you? Is there anything you wanted to cover that we didn’t get in? </p>
<p><strong>Taylor Stockton:</strong> I would just say that one of the biggest things on my mind right now is how we shift the societal narrative around AI and work. Even as the job data and productivity data looks positive, I think the reality is that there is a public sentiment question here that we have to talk about, that people are fearful and people are skeptical and people are uncertain. </p>
<p>A lot of what the Department of Labor is trying to do is say, “How do we shift the narrative from fear to optimism?” and make sure that everyone is able to benefit from the jobs, from the productivity, and from the meaningful nature of some of the shifts that AI can bring. I think it’s going to be a long journey. And I think the Department of Labor is up for that journey, but we’re looking for private sector partners and others to partner with, to make sure we are telling the story in the right way to benefit American workers across the country. </p>
<p><strong>Sam Ransbotham:</strong> That’s great. In the big picture, it makes sense, but in the small picture, it’s really important, too. </p>
<p>This has been great. It’s been fun catching up. I hope maybe in 15, 16 years we’ll catch up again, and we’ll see what other wonderful things you’ve done. But thanks for taking the time to talk with us today. </p>
<p><strong>Taylor Stockton:</strong> Thanks again, Sam. Really important conversation and appreciate you having me on. </p>
<p><strong>Sam Ransbotham:</strong> Thanks for tuning in today. On our next episode, I’ll speak with Jacqui Canney, chief people and AI enablement officer at ServiceNow. Please join us.</p>
<p><strong>Allison Ryder:</strong> Thanks for listening to <cite>Me, Myself, and AI</cite>. Our show is able to continue, in large part, due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/audio/shifting-ai-from-fear-to-optimism-u-s-department-of-labors-taylor-stockton/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Why Leaders Lose the Room in High-Stakes Meetings</title>
				<link>https://sloanreview.mit.edu/article/why-leaders-lose-the-room-in-high-stakes-meetings/</link>
				<comments>https://sloanreview.mit.edu/article/why-leaders-lose-the-room-in-high-stakes-meetings/#comments</comments>
				<pubDate>Mon, 23 Mar 2026 11:00:16 +0000</pubDate>
				<dc:creator><![CDATA[Nancy Duarte. <p><a href="https://www.linkedin.com/in/nancyduarte/" target="_blank" rel="noopener noreferrer">Nancy Duarte</a> is CEO of <a href="https://www.duarte.com/" target="_blank" rel="noopener noreferrer">Duarte Inc.</a>, a communication company in the Silicon Valley. She’s the author of six books, including <cite>DataStory: Explain Data and Inspire Action Through Story</cite> (Ideapress Publishing, 2019). </p>
]]></dc:creator>

						<category><![CDATA[Communication]]></category>
		<category><![CDATA[Decision-Making]]></category>
		<category><![CDATA[Information Sharing]]></category>
		<category><![CDATA[Management Strategy]]></category>
		<category><![CDATA[Problem-Solving]]></category>
		<category><![CDATA[Strategic Communication]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Talent Management]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Getty Images Most advice about leadership communication focuses on presentation skills: Be concise, be clear, tell better stories. But the most consequential leadership communication happens in meetings where tough issues are being discussed and real decisions are being made. Even some of the most skilled leaders find themselves in moments where [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/Duarte-1290x860-1.jpg" alt="" class="wp-image-126160"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Getty Images</p>
</figcaption></figure>
<p><span class="smr-leadin">Most advice about leadership</span> communication focuses on presentation skills: Be concise, be clear, tell better stories. But the most consequential leadership communication happens in meetings where tough issues are being discussed and real decisions are being made.</p>
<p>Even some of the most skilled leaders find themselves in moments where communication breaks down. The potential rewards are high, your preparation is solid, and you’re pretty sure the thinking is sound. And yet, after you’ve made your case, the room goes quiet, alignment fractures just when it’s needed most, and the decision stalls.</p>
<p>When this happens, leaders usually look for the flaw in execution; maybe the framing wasn’t quite right, or the slides weren’t clear enough, or the audience was distracted. What they rarely examine is how their own presentation process changed under pressure and how that shift inadvertently increased the effort required of the audience to process and respond in real time.</p>
<p>After decades of working with executives in high-stakes decision meetings, board discussions, strategy offsites, and pivotal moments where real choices must be made, I’ve seen a clear pattern emerge. These are some of the strongest communicators in the organizations we serve, but pressure exposes each leader’s particular way of making sense of complexity and the signals they’re inadvertently sending to the other people in the room in high-stress moments. </p>
<p>Here’s how to self-diagnose your own patterns and understand why you might be losing the room when you’re most impassioned.</p>
<h3>Leaders Have Their Own Thinking Processes — and Expect Others to Keep Up</h3>
<p>Some leaders think best through preparation. They work through ideas in advance, refining language and logic until it feels precise and defensible, and then bring those ideas to the table. Others think best in the moment by presenting an issue and deciding on a direction out loud, adjusting in real time to move people forward. Other leaders distribute thinking across teams by laying out the issue and then relying on others to analyze and shape viable options. And still others discover insight through exploration, testing ideas through conversation as they go.</p>
<p>These are not personality traits. They are thinking processes. And in most cases, they are the reason those leaders advanced. Each process is a strength. </p>
<p>The problem emerges under pressure. When the stakes rise, leaders tend to lean harder on the process they know best. And under pressure, what usually serves them well in a meeting becomes more pronounced, a little harsher, less forgiving, a little more chaotic. That overreliance changes how the message is experienced by the audience. (See “How Pressure Amplifies How Leaders Think.”)</p>
<div class="callout-highlight">
<aside class="l-content-wrap">
<article>
<h4>How Pressure Amplifies How Leaders Think</h4>
<p class="caption">When leaders feel stressed, their decision-making style can become more intense and can distort the messages employees hear.</p>
<table id="Chart1" class="chart-grouped-rows no-mobile">
<thead>
<tr>
<th><strong>When pressure rises, leaders tend to &hellip;</strong></th>
<th><strong>Positive strength it reinforces</strong></th>
<th><strong>Negative signals others might receive</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>
					<strong>Tighten preparation</strong>
				</td>
<td>
					Precision and certainty
				</td>
<td>
					&ldquo;Step back. Your input isn&rsquo;t welcome right now.&rdquo;
				</td>
</tr>
<tr>
<td>
<strong>Take control</strong>
</td>
<td>
Speed and forward motion
</td>
<td>
&ldquo;The decision has already been made.&rdquo;
</td>
</tr>
<tr>
<td>
<strong>Hand off the thinking</strong>
</td>
<td>
Shared responsibility
</td>
<td>
&ldquo;Tell me what I&rsquo;m supposed to do after you&rsquo;ve figured out what matters.&rdquo;
</td>
</tr>
<tr>
<td>
<strong>Explore ideas in real time</strong>
</td>
<td>
Creativity and discovery
</td>
<td>
&ldquo;There&rsquo;s still a lot of uncertainty about how to move forward.&rdquo;
</td>
</tr>
</tbody>
</table>
<p>Source: Duarte Inc.</p>
<p><!--IMAGE FALLBACK FOR MOBILE BELOW --><br />
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/Duarte_Blind_Fig_REV.png" alt="How Pressure Amplifies How Leaders Think" class="no-desktop">
</p>
</article>
</aside>
</div>
<p>Under pressure, thinking processes that are usually strengths can become weaknesses. I’ve worked with leaders who felt fully prepared while their teams felt constrained and unsure whether input was welcome. People stopped contributing. Some leaders believed they were being decisive and failed to recognize that their colleagues were disengaged, sensing that the outcome had already been determined; these team members didn’t believe that their input would matter. I’ve seen other leaders pride themselves on efficiency while audiences quietly struggled to understand what mattered most. Delegation can leave teams unsure what actually needs to be decided. Exploration can head in wild directions, leaving people struggling to track what is real. </p>
<p>These kinds of breakdowns are rarely visible to leaders because pressure changes how they perceive the room. As the weight of making the right decision grows, attention narrows toward certainty and forward motion, and familiar strengths feel safe. In that state, it becomes harder to notice when participation is shrinking or when what feels clear to the leader is creating work for everyone else.</p>
<p>I’ve done this myself. In moments when the risks were high and time was tight, I explored ideas out loud in real time, assuming that the room would follow my reasoning. The exploration energized me, but it created ambiguity for my executive team. In another instance, I pushed a companywide decision forward quickly as a mandate to drive momentum. Resistance emerged because not enough people were invited to shape the outcome.</p>
<p>When a leader leans too hard into one style of thinking, it shifts work onto the audience, asking them to wait, comply, infer intent, or tolerate uncertainty longer than they should have to. And when the stakes are high, that extra work shows up as stalled decisions, quiet resistance, and weakened trust, even when the idea itself is strong.</p>
<p>What leaders often miss is this: You judge your communication by intent, whereas audience members judge it based on what they think you’re asking of them. Leaders ask: Was the thinking rigorous? Was the recommendation correct? Was the message accurate? Audiences ask: How hard is this to follow? What am I supposed to do? Where do I place my confidence? Under pressure, that gap widens.</p>
<h3>How to Self-Adjust Under Pressure</h3>
<p>After leaders notice these moments of disconnect, they often attempt to change their style by trying to become more spontaneous, more structured, or more flexible. But that rarely works. The leaders who communicate best under pressure don’t try to become someone else. Instead, they learn to recognize how their thinking process changes the experience for the audience, and they make adjustments. These are the most effective:</p>
<p><strong>Anticipate challenges.</strong> The most effective leaders I’ve worked with anticipate how pressure will distort their strengths and then design safeguards that reduce confusion and protect shared decision-making in their meetings. This demands that they reflect on past failures and on the patterns that underlie their thinking style.</p>
<p><strong>Confirm what people know.</strong> Leaders who rely on preparation build in explicit moments to test understanding, not just accuracy. </p>
<p><strong>Force a pause.</strong> Leaders who default to control create pauses before decisions lock, signaling that real input is still welcome. </p>
<p><strong>Clarify process.</strong> Leaders who delegate make sure they clearly state who will shape the final recommendation, rather than leaving others unsure about whether they are advising or deciding. </p>
<p><strong>State your openness to new options.</strong> Leaders who explore ideas in real time make it clear when they are thinking aloud. This helps everyone in the room understand what parts of the ideas are still forming and what parts are firm. </p>
<p>I have had to learn that last one myself. When I start brainstorming with my team in real time, I make sure that I say, “I’m thinking out loud right now. Please help me bring this to clarity.” That small signal changes the energy. It gives people more direct permission to shape the thinking with me instead of trying to determine whether I’ve already made up my mind.</p>
<p>Making adjustments doesn’t come easily, because it initially feels inefficient, especially to leaders whose success has been built on speed or precision. But over time, these small adjustments can become part of how you lead, allowing you to stay grounded in your strengths without overburdening the audience.</p>
<p>When leaders understand how their thinking shows up under pressure, they reduce confusion. They build trust by making it easier for people to move together when it counts.</p>
<p>Organizational success happens in moments when people understand what matters, what is changing, and what they are being asked to do. Instead of working toward being more engaging or concise, leaders would benefit from asking themselves, “When pressure rises in my meetings, how does my thinking process show up, and what gap is it requiring others to fill?”</p>
<p>Leaders who can answer that question honestly begin to see their communication the way others experience it. They recognize where their strengths create friction and then adjust to maintain their team’s goodwill and drive decision-making.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/why-leaders-lose-the-room-in-high-stakes-meetings/feed/</wfw:commentRss>
				<slash:comments>1</slash:comments>
							</item>
					<item>
				<title>How Goldman Sachs Stays Agile: HR Leader Jacqueline Arthur</title>
				<link>https://sloanreview.mit.edu/article/how-goldman-sachs-stays-agile-hr-leader-jacqueline-arthur/</link>
				<comments>https://sloanreview.mit.edu/article/how-goldman-sachs-stays-agile-hr-leader-jacqueline-arthur/#respond</comments>
				<pubDate>Thu, 19 Mar 2026 11:00:02 +0000</pubDate>
				<dc:creator><![CDATA[Donald Sull and Charles Sull. <p><a href="https://www.linkedin.com/in/donald-sull-1077444/" target="_blank">Donald Sull</a> (<a href="https://x.com/culturexinsight" target="_blank">@culturexinsight</a>) is a professor of the practice at the MIT Sloan School of Management and a cofounder of CultureX. <a href="https://www.linkedin.com/in/charles-sull/" target="_blank">Charles Sull</a> is a cofounder of CultureX.</p>
]]></dc:creator>

						<category><![CDATA[Corporate Values]]></category>
		<category><![CDATA[Employee Engagement]]></category>
		<category><![CDATA[Human Resources]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[Organizational Culture]]></category>
		<category><![CDATA[Organizational Learning]]></category>
		<category><![CDATA[Culture]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leading Change]]></category>
		<category><![CDATA[Skills & Learning]]></category>
		<category><![CDATA[Talent Management]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>

				<description><![CDATA[Aleksandar Savic After World War II, Goldman Sachs ranked 10th among the top 30 U.S. investment banks. Twenty-seven of those once-mighty Wall Street rivals, including Salomon, Lehman, and First Boston, have been relegated to the annals of business history. Goldman, in contrast, is a global powerhouse, employing more than 46,000 people, operating in more than [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/CultureChamps_JaquelineArthur-1290x860-1.jpg" alt="" class="wp-image-126099"/><figcaption>
<p class="attribution">Aleksandar Savic</p>
</figcaption></figure>
<p><span class="smr-leadin">After World War II</span>, Goldman Sachs ranked 10th among the top 30 U.S. investment banks. Twenty-seven of those once-mighty Wall Street rivals, including Salomon, Lehman, and First Boston, have been relegated to the annals of business history. Goldman, in contrast, is a global powerhouse, employing more than 46,000 people, operating in more than 40 countries, and supervising $3 trillion in total assets.</p>
<p>Goldman owes much of its survival and success not to financial acumen, which its rivals shared, but rather to organizational agility — sustained not for quarters or years but for decades. The firm has repeatedly evolved to thrive through market upheavals, geopolitical turbulence, technological revolutions, and unrelenting waves of product and service innovation.</p>
<p>CultureX recently analyzed feedback from more than 250,000 employees across 50 diversified financial services and asset management institutions. We benchmarked Goldman Sachs against its peers in terms of how frequently and positively the firm’s employees discussed more than 125 topics in their online job reviews. Goldman ranked first among its peers in terms of agility, scoring nearly two standard deviations above the industry average.</p>
<p>A critical engine of Goldman’s sustained agility is its ability to attract, develop, and retain ambitious and talented employees. In an industry that attracts high-caliber talent across the board, Goldman Sachs stands out. Employees speak about the firm’s talent density — colleagues who are impressive, intelligent, hardworking, and ambitious, with the latter trait being mentioned more than three standard deviations above the peer group average.</p>
<p>In a recent podcast, Jacqueline Arthur, global head of human capital management at Goldman Sachs, shared the approaches to hiring, internal mobility, performance management, and development that have helped the firm maintain its agility over decades.</p>
<h3>1. To fight complacency, hire ambitious people.</h3>
<p>Sustained success breeds complacency in most organizations. Goldman fights this by hiring ambitious people who constantly challenge the status quo. “Our people are ambitious, motivated, hardworking, and resilient, which means open to feedback and open to challenging discussions,” Arthur said. “When you have talent that’s looking to grow and an environment that champions that mindset, it becomes the bedrock of an organization that’s constantly challenging itself to do things better. It feels like it’s part of our DNA.”</p>
<p>That restless drive to improve extends well beyond financial products and services. “Cultivating a culture that encourages and supports innovation and experimentation — it’s not just about what new products can we offer,” Arthur said. “It’s also about thinking about every dimension of our business, the markets, and even how we do things internally. Are there better or smarter ways to evolve how we manage day-to-day tasks?”</p>
<h3>2. Reduce bureaucratic obstacles to action.</h3>
<p>Agility requires speed, and speed works best when employees aren’t slowed down by unnecessary red tape. “In our day-to-day businesses, we are really encouraging our employees to act like owners,” Arthur said. “And really, what does that mean? Intentionally reducing bureaucratic layers and empowering our teams with significant autonomy so that, ultimately, decision-making is swift [and] accountability is clear, even as we operate at a global scale and at a very fast pace.”</p>
<p>“We’re very data-oriented,” Arthur continued. “In our annual sentiment surveys, [we ask,] do our employees feel empowered to come up with new and better ways of doing things? And we score extraordinarily high on that question, really, across all levels of the organization.”</p>
<p>One risk: Ownership without context can lead to well-intentioned decisions that don’t align with the firm’s strategic direction. Goldman addresses this by providing employees at all levels with the strategic context they need to make good decisions. “A few years ago, our earnings town halls were open to our managing directors and partners,” Arthur said. “One of the things we did was open up these town halls to our broader employee base, because it’s imperative that they understand the firm strategy and [that] they feel aligned to it, even if they’re responsible for [only] a discrete piece of that in their day-to-day role.” </p>
<h3>3. Encourage internal movement to retain ambitious employees.</h3>
<p>The ambitious employees Goldman hires need room to grow — and will leave if they don’t find it. Internal mobility is Goldman’s answer to this challenge. “We focus on helping our managers understand having an employee-first focus on talent development,” Arthur said, “because sometimes that best opportunity for someone doesn’t necessarily exist in your team. It exists someplace else in the firm.</p>
<p>“A core part of our culture and value proposition has been this opportunity for mobility to facilitate a long-term career at Goldman Sachs,” Arthur explained. “When we actively encourage and facilitate this internal movement, essentially we’re rehiring our best talent and recontracting with them in terms of the value proposition of a career at Goldman Sachs, which keeps them inspired and motivated.” Arthur exemplifies this mobility: “I started in our revenue businesses; I was in the executive office; I’m now in human capital management. I’m a recovering lawyer. I did not start in human capital or in the HR department.”</p>
<p>Mobility enhances agility for the firm beyond retaining ambitious employees. “It creates incredibly well-rounded professionals who have a much broader exposure,” Arthur said. “Understanding how various parts of the firm connect, understanding our culture better from that perspective, provides significant advantages in terms of judgment and clarity, which are really critical for agile decision-making.”</p>
<p><em>Want to hear more advice from Arthur? Watch this conversation and the entire series on the <a href="https://www.youtube.com/@culturexculturexculturex" target="_blank">CultureX YouTube channel</a>, on <a href="https://open.spotify.com/show/6oSF9YHbZGhj8UHrFE6mCf?si=8bb3324edb1f4e44" target="_blank">Spotify</a>, or on <a href="https://podcasts.apple.com/us/podcast/culture-champions-by-culturex/id1774969910" target="_blank">Apple Podcasts</a>.</em></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/how-goldman-sachs-stays-agile-hr-leader-jacqueline-arthur/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Retro-Innovation: How Smart Companies Profit From the Past</title>
				<link>https://sloanreview.mit.edu/video/retro-innovation-how-smart-companies-profit-from-the-past/</link>
				<comments>https://sloanreview.mit.edu/video/retro-innovation-how-smart-companies-profit-from-the-past/#respond</comments>
				<pubDate>Wed, 18 Mar 2026 11:00:17 +0000</pubDate>
				<dc:creator><![CDATA[MIT Sloan Management Review. ]]></dc:creator>

						<category><![CDATA[Customer Behavior]]></category>
		<category><![CDATA[Product Design]]></category>
		<category><![CDATA[Product Development]]></category>
		<category><![CDATA[Product Strategy]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[Webinars & Videos]]></category>
		<category><![CDATA[Customers]]></category>
		<category><![CDATA[Innovation]]></category>
		<category><![CDATA[Innovation Strategy]]></category>
		<category><![CDATA[New Product Development]]></category>

				<description><![CDATA[AI may be today’s hot topic, but there’s a robust market for old-fashioned products. Board games, vinyl records, and even 1990s-style video game consoles are making a comeback, especially with Generation Z. What does this mean for teams building modern products? In this video, MIT Sloan Management Review senior features editor Kaushik Viswanath explains “retro-innovation” [&#8230;]]]></description>
								<content:encoded><![CDATA[<p>AI may be today’s hot topic, but there’s a robust market for old-fashioned products. Board games, vinyl records, and even 1990s-style video game consoles are making a comeback, especially with Generation Z. What does this mean for teams building modern products?</p>
<p>In this video, <cite>MIT Sloan Management Review</cite> senior features editor Kaushik Viswanath explains “retro-innovation” — how smart companies are finding inspiration in older technologies to build products for today’s consumers. You’ll learn what’s driving this shift, along with three strategic questions every product leader should be asking.</p>
<p>For a deeper dive into how companies can differentiate through simplicity, extend product life cycles, and tap into new markets by mining old ones, read the full article that inspired this video, “<a href="https://sloanreview.mit.edu/article/how-to-profit-from-retro-innovation/">How to Profit From Retro-Innovation</a>,” by Vijay Govindarajan, Tojin T. Eapen, and Gautham Vadakkepatt.</p>
<h5>Video Credits</h5>
<p><strong>Kaushik Viswanath</strong> is the senior features editor at <cite>MIT Sloan Management Review</cite>.</p>
<p><strong>M. Shawn Read</strong> is the multimedia editor at <cite>MIT Sloan Management Review</cite>.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/video/retro-innovation-how-smart-companies-profit-from-the-past/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Bridge the Intergenerational Leadership Gap</title>
				<link>https://sloanreview.mit.edu/article/bridge-the-intergenerational-leadership-gap/</link>
				<comments>https://sloanreview.mit.edu/article/bridge-the-intergenerational-leadership-gap/#respond</comments>
				<pubDate>Tue, 17 Mar 2026 11:00:34 +0000</pubDate>
				<dc:creator><![CDATA[Felix Rüdiger, Kaspar Köchli, Matthew Hunter, and Nolita Mvunelo. <p>Felix Rüdiger is a doctoral researcher at the University of St. Gallen and formerly served as head of content and research for the St. Gallen Symposium. Kaspar Köchli is head of research and of the Singapore office at the St. Gallen Symposium, and a doctoral researcher at the University of St. Gallen. Matthew Hunter is the partnerships lead at the United Nations Youth Office. Nolita Mvunelo is a principal at The Club of Rome.</p>
]]></dc:creator>

						<category><![CDATA[Decision-Making]]></category>
		<category><![CDATA[Demographics]]></category>
		<category><![CDATA[Diversity]]></category>
		<category><![CDATA[Employee Development]]></category>
		<category><![CDATA[Employee Recruitment and Retention]]></category>
		<category><![CDATA[Leadership Development]]></category>
		<category><![CDATA[Boards & Corporate Governance]]></category>
		<category><![CDATA[Collaboration]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Talent Management]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Getty Images Today’s workforce spans five generations, with millennials and Generation Z together accounting for over 60% of workers globally — a share projected to reach 74% by 2030. Yet there’s a widening intergenerational gap in business leadership. While age diversity in the workplace is growing, decision-making power increasingly rests with [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/Rudiger-1290x860-1.jpg" alt="" class="wp-image-126117"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Getty Images</p>
</figcaption></figure>
<p><span class="smr-leadin">Today’s workforce spans</span> five generations, with millennials and Generation Z together accounting for over 60% of workers globally — a share projected to reach 74% by 2030. Yet there’s a widening intergenerational gap in business leadership. While age diversity in the workplace is growing, decision-making power increasingly rests with more senior generations.</p>
<p>The average age of CEOs at S&P 1500-listed companies has risen significantly over the past several years, from 54 in 2008 <a href="https://www.businessinsider.com/joe-biden-corporate-gerontocracy-aging-ceos-retirement-succession-replacement-2024-8" target="_blank" rel="noopener noreferrer">to nearly 59 in 2023</a>. Only 5% of directors on S&P 500 boards are under 50. Similar dynamics can be observed worldwide. The average age of board members across major markets such as Brazil, the European Union, and India ranges from 58 to 64 years old — around 20 years older than the median age (about 39) of the global workforce.</p>
<h3>Why Age-Diverse Leadership Drives Better Decision-Making</h3>
<p>While experience is undoubtedly important for effective leadership, it also comes with the risk of relying on the same mental models that have underpinned past successes. When the context of business changes rapidly, maintaining the same strategy can hinder adaptability exactly when new thinking is required. </p>
<p>Enter younger leaders. More age-diverse leadership teams have been found to excel at <em>ambidextrous learning</em>: They’re better at communicating important tacit know-how from one generation to the next. This helps organizations retain critical expertise over time. Simultaneously, younger leaders help counterbalance experience with curiosity and a willingness to question the status quo, which supports a continuous update of organizational knowledge.</p>
<p>Such ambidextrous learning and a diversity of ideas can also unlock innovation. Age diversity has been found to accelerate product innovation and foster creative problem-solving, particularly in times of crisis, such as international conflict or the COVID-19 pandemic. Research has found that intergenerational leadership teams perform particularly strongly in the realms of sustainable business model innovation and eco-innovation. </p>
<p>Importantly, this does not in any way suggest that older managers are less capable or willing to innovate; rather, analyses emphasize the potential of a greater <em>diversity</em> of generational perspectives. Recent research highlights the positive effects that “grey entrepreneurs” on age-diverse teams of founders can have on measures of innovation performance and business growth.<a id="reflink1" class="reflink" href="#ref1">1</a></p>
<h3>Three Ways to Advance Intergenerational Leadership</h3>
<p>The case for intergenerational leadership is increasingly clear. A more systematic, bold approach to involving younger leaders promises to drive progress on key strategic goals, such as innovation, talent recruitment, and sustainability. For a <a href="https://symposium.org/intergenerational-leadership/" target="_blank" rel="noopener noreferrer">recent report</a>, the four of us conducted a structured evidence review and interviews, which led us to identify three main approaches to increasing the influence of younger leaders: consultation, decision rights, and an intergenerational leadership pipeline.</p>
<h4>Consultation</h4>
<p>When taking a consultation approach, senior leadership actively seeks opportunities to learn from younger generations. Typically, the focus is on employees from within the organization, but businesses can also find ways to involve external emerging talent. Consultation programs that have recently gained prominence include <em>reverse mentoring</em> — where younger employees mentor senior leaders on selected issues — and <em>shadow boards</em>, in which teams of younger experts act as sparring partners for the executive committee of the board. </p>
<p>Gucci offers an effective example of the potential of shadow boards as “<a href="https://cmr.berkeley.edu/2024/03/a-board-of-disruptors-for-hyper-transformation/" target="_blank" rel="noopener noreferrer">boards of disrupters</a>.” In 2015, the Italian fashion giant assembled its first shadow board, which, according to then-CEO Marco Bizzarri, served as a “wake-up call for the executives.” The shadow board helped rejuvenate the brand, embrace digital marketing channels, and advance sustainability, such as by reducing unnecessary leather waste. Its role as a sparring partner for the C-suite was credited with an increase in sales in the years that followed. </p>
<p>In consultative approaches, the role of younger generations is usually confined to providing senior executives with new insights or fresh perspectives. But with no guarantee of any meaningful follow-up, consultation risks encouraging talk without corresponding action. </p>
<h4>Decision Rights</h4>
<p>For real change, it may be necessary to go beyond consultation and integrate younger perspectives into executive-level venues.</p>
<p>This is the core idea behind shared decision rights: Younger leaders are included in key leadership structures and empowered with formal roles on executive teams, project teams, and boards. This shared leadership combines the range of perspectives and strengths across people of different ages in everyday and strategic direction-setting.</p>
<p>Telstra, a leading Australian telecommunications company, has prioritized intergenerational leadership on its board. In 2020, amid a major digital transformation, the company appointed Bridget Loudon, then 32 and founder of Australia’s largest skilled-talent platform, as a nonexecutive director. Telstra chairman John Mullen <a href="https://symposium.org/intergenerational-leadership/" target="_blank" rel="noopener noreferrer">noted that Loudon</a> “is a leader in how organizations transform themselves to capture the opportunities presented by developments in technology.” Beyond providing strategic input, she has driven concrete policy innovations, such as advising on flexible working models in the wake of COVID-19 and helping Telstra establish the first parental leave policy for board directors at any $20 billion-plus listed company worldwide.</p>
<p>At Ford Motor Co., Alexandra Ford English joined the board in 2021 at age 32 — continuing the Ford family legacy while representing a new generation’s perspective on technology and transformation. With experience leading Ford’s autonomous vehicle deployment, she now serves on the Sustainability, Innovation, and Policy Committee, helping to bridge the gap between the company’s legacy leadership and its digital future.</p>
<p>Formally involving younger generations in key decision-making structures can add value for organizations, but the practice can also meet significant resistance from more entrenched leadership. Senior leaders, such as board chairs or CEOs, must advocate strongly and persistently to rally and sustain support for change. Again, context matters: Smaller and privately held companies may have more latitude than publicly listed companies that face strong shareholder pressure.</p>
<p>Finding pragmatic solutions that are fit for purpose in an organization’s specific context may require some adjustment. For example, if extending full board membership to a young leader is unrealistic, a business may be able to add “permanent guests” or strategic advisers to the board or a subcommittee. Even if these members lack voting rights, their valuable input can inform deliberations and decision-making. </p>
<h4>An Intergenerational Leadership Pipeline</h4>
<p>Most consultation and co-leadership practices remain confined to episodic engagements with specific segments of the organization’s workforce. A more holistic approach is to embed age-diverse leadership as a guiding principle across organizational structure, culture, and leadership development strategy. Organizations should seek to build and sustain an ongoing intergenerational leadership pipeline by deliberately recruiting younger talent into leadership tracks, accelerating their advancement, and integrating their perspectives into decision-making at every level. Organizations that pursue this enterprisewide approach earlier and more decisively than competitors can turn intergenerational leadership into a lasting source of strategic advantage.</p>
<p>Companies considered some of the <a href="https://time.com/7333715/best-companies-future-leaders-2026/" target="_blank" rel="noopener noreferrer">best for future leaders</a> — including IBM and Procter & Gamble in the U.S. — illustrate how multiple approaches to intergenerational leadership can be combined across career stages. <a href="https://us.pg.com/blogs/secrets-to-the-art-of-leadership/" target="_blank">Procter & Gamble</a>, for instance, exemplifies this model through a “build from within” philosophy, ensuring that 99% of senior leaders are developed internally and that every top role has three ready successors. Its leadership academy provides structured pathways from entry-level roles to the C-suite, cultivating leadership potential at every level. Common strategies across all leading companies include accelerating younger leaders’ learning and growth through structured rotational programs that expose them to different roles across the organization, as well as succession-focused initiatives designed to ensure that high-potential talent is systematically identified, supported, and prepared to step into top decision-making roles.</p>
<p>Embedding intergenerational leadership requires a cultural shift in work routines and inclusive behaviors: Moving away from traditional hierarchies to more participatory, decentralized structures can empower younger employees to take on greater responsibility. Recent research found that <a href="https://www.protiviti.com/sites/default/files/2024-10/lse-generational-survey-report-2024-global-sm.pdf" target="_blank" rel="noopener noreferrer">generationally inclusive meetings</a> are a proven approach to normalizing shared leadership in operational and strategic decisions. The research also found that companies with balanced generational meeting participation report tangible gains: For example, 82% of executives at such companies said they outperformed competitors, and 60% of employees in those inclusive-meeting cultures were unlikely to leave their job within a year (versus just 36% in less-inclusive environments).</p>
<p>But most organizations still have work to do: Three-quarters of executive meetings include no Gen Z representatives. For effective age-diverse meetings, inclusive participation and facilitation are essential. Leaders should ensure that all contributions are valued, consider all participants’ insights to avoid groupthink, and remain open to new ideas.</p>
<p>These three approaches to embedding age-diverse perspectives — consultation, decision rights, and an intergenerational leadership pipeline — can encourage innovation, enhance corporate resilience, and cultivate a more inclusive and forward-thinking culture. Businesses that embrace this intergenerational shift — still a relatively new and largely untapped opportunity — will be best positioned to navigate future complexities while reaping tangible benefits.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/bridge-the-intergenerational-leadership-gap/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>How Schneider Electric Scales AI in Both Products and Processes</title>
				<link>https://sloanreview.mit.edu/article/how-schneider-electric-scales-ai-in-both-products-and-processes/</link>
				<comments>https://sloanreview.mit.edu/article/how-schneider-electric-scales-ai-in-both-products-and-processes/#respond</comments>
				<pubDate>Mon, 16 Mar 2026 11:00:12 +0000</pubDate>
				<dc:creator><![CDATA[Thomas H. Davenport and Randy Bean. <p><a href="https://www.linkedin.com/in/davenporttom/" target="_blank" rel="noopener">Thomas H. Davenport</a> is the President’s Distinguished Professor of Information Technology and Management and faculty director of the Metropoulos Institute for Technology and Entrepreneurship at Babson College, as well as a fellow of the MIT Initiative on the Digital Economy. His latest book is <cite>The New Science of Customer Relationships: Delivering the One-to-One Promise With AI</cite> (Wiley, 2025). <a href="https://www.linkedin.com/in/randy-bean-6903882/" target="_blank" rel="noopener">Randy Bean</a> has been an adviser to Fortune 1000 organizations on data and AI leadership for over four decades. He is the author of <cite>Fail Fast, Learn Faster: Lessons in Data-Driven Leadership in an Age of Disruption, Big Data, and AI</cite> (Wiley, 2021).</p>
]]></dc:creator>

						<category><![CDATA[Analytics & Organizational Culture]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Energy Industry]]></category>
		<category><![CDATA[Environmental Sustainability]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Analytics & Business Intelligence]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Innovation]]></category>
		<category><![CDATA[New Product Development]]></category>
		<category><![CDATA[Sustainability]]></category>

				<description><![CDATA[Matt Harrison Clough/Ikon Images At the World Economic Forum Annual Meeting in Davos, Switzerland, in January 2026, Schneider Electric CEO Olivier Blum accepted awards recognizing the company’s AI solutions as part of the WEF’s MINDS (Meaningful, Intelligent, Novel, Deployable Solutions) program — for the second time. The distinction highlighted two of the company’s AI-enabled applications: [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/03/DavenportBean-Schnieder-1290x860-1.jpg" alt="" class="size-full wp-image-125999" /><figcaption>
<p class="attribution">Matt Harrison Clough/Ikon Images</p>
</figcaption></figure>
<p><span class="smr-leadin">At the World Economic Forum Annual Meeting</span> in Davos, Switzerland, in January 2026, Schneider Electric CEO Olivier Blum accepted awards recognizing the company’s AI solutions as part of the WEF’s MINDS (Meaningful, Intelligent, Novel, Deployable Solutions) program — for the second time. The distinction highlighted two of the company’s AI-enabled applications, EcoStruxure Microgrid Advisor and SpaceLogic Touchscreen Room Controller, for delivering measurable impact in energy management. Schneider Electric is the only company to be recognized twice by the program.</p>
<p>“It is clear we have entered a new era where AI and energy are inseparable, and together they will reshape every business,” Blum declared at Davos. “AI requires compute, and compute requires energy. That is why the world needs greater energy intelligence.”</p>
<p>This interdependency between artificial intelligence and energy puts Schneider Electric — a global leader in energy management technology — at the center of one of business’s most critical challenges: powering the AI revolution while advancing sustainability goals. To meet both its customers’ needs and its internal process objectives, Schneider Electric has built an organizational model that deploys AI at scale, deliberately skipping the pilot phase that consumes resources without delivering business impact at so many companies. (More on this later.)</p>
<p>Philippe Rambach, Schneider Electric’s chief AI officer since 2021, is leading this effort. With nearly 100 AI use cases now running in production — split roughly evenly between customer-facing solutions and internal operations — Schneider Electric has demonstrated that AI can deliver value across every dimension of enterprise operations, including manufacturing floors, customer care centers, and complex energy-optimization systems. The company’s recognitions extend to manufacturing as well: In January 2026, the WEF’s Global Lighthouse Network awarded Schneider Electric’s Wuhan factory in China the Lighthouse designation — the company’s ninth — this time in a newly introduced category honoring talent development and people-centric workforce models.</p>
<p>Rambach described a strategy grounded in business value rather than technological experimentation. “We always start from the business and customer needs, pain points of employees, where AI can help,” he told us. Every initiative must demonstrate clear business value and plan for deployment at scale from its inception. Rambach and his senior management colleagues are also concerned about AI governance and ethics, but, as he noted in a <a href="https://sloanreview.mit.edu/projects/winning-with-intelligent-choice-architectures/">2025 report</a> produced by <cite>MIT Sloan Management Review</cite> and Tata Consultancy Services, “Explainability matters — but in the boardroom, consequence matters more.”</p>
<h3>Balancing Two AI Portfolios: Internal and Customer-Facing</h3>
<p>Schneider Electric pursues AI opportunities across two distinct fronts, each with different strategic imperatives, approaches to measuring success, and timelines for realizing value.</p>
<p>Internal AI applications deliver more immediate financial returns, helping employees work faster and better while providing enhanced support for customers. </p>
<p>Customer-facing AI, meanwhile, represents a longer-term strategic play focused on capturing market share in emerging and evolving markets. “For customers, we want to be first to market and take strong positions, even in markets where AI-assisted energy management isn’t fully developed,” Rambach said. For instance, each country in which Schneider Electric operates shows different rates of renewable energy penetration and different challenges as large new electrical loads come onto power grids. This requires the company to adapt its AI solutions to diverse market conditions. </p>
<p>While much of the business world has rushed to embrace generative AI, Schneider Electric maintains a balanced portfolio of AI technologies. <a href="https://hbr.org/2024/12/how-gen-ai-and-analytical-ai-differ-and-when-to-use-each" target="_blank" rel="noopener noreferrer">Analytical AI</a> — traditional machine learning applied to structured data — still accounts for roughly 60% of the company’s overall AI work, particularly in customer solutions. “Analytical AI is very important and provides a lot of value,” Rambach emphasized. “We are not giving up on that.”</p>
<p>Generative AI represents about 40% of customer-facing applications and roughly 70% of internal, employee-focused tools. The technology excels at making systems easier to use and providing support capabilities, and at generating code, though Rambach stressed that significant human involvement in system development remains essential. Schneider Electric has also incorporated generative AI into its smart-grid solutions and is exploring the application of foundational transformer models to analyze internet-of-things and time-series data, and to create multitask models.</p>
<p>One of Schneider Electric’s most important applications of generative AI addresses a challenge common to large enterprises: making organizational knowledge accessible and usable. The company needed systems with robust security, clear information provenance, and the ability to cite sources. This required building vertical knowledge bases tailored to specific functions rather than deploying a one-size-fits-all solution.</p>
<p>The curation of unstructured data for these use cases proved instructive. “Asking people to clean their own data for data quality’s sake doesn’t work,” Rambach said. People are naturally resistant to what can feel like make-work. “But if you show them what you can do with it in an AI context, they are much more amenable,” he noted. When employees could see the direct impact of better data, they willingly performed curation work.</p>
<p>This insight reflects a broader principle at the company: Employees must be integrated into the AI development process. “People at the front lines are doing the work — they are at the core of Schneider’s approach to AI,” Rambach said. “We start from the business domain and bring in anybody else who is needed. Central experts don’t have the domain knowledge.”</p>
<h3>Embedding AI, Not Building Stand-Alone Products</h3>
<p>Schneider Electric deliberately avoids creating separate AI products for internal users or customers. Instead, the company embeds AI capabilities into existing systems and processes, such as energy management applications, field service tools, customer care platforms, and sales aids. A prime example is an AI-powered tool built for the company’s sales force, which must navigate an extremely complex product catalog. Rather than launching a stand-alone application, the company built AI recommendation capabilities into Sales Copilot. The company applies product management discipline to AI use cases, overseeing AI-powered processes and products from conception through deployment to eventual retirement.</p>
<p>This integration strategy extends to emerging capabilities like agentic AI, where Schneider Electric is already seeing practical value today despite the technology’s relative immaturity. The company has built an agentic system for processing requests for quotations that extracts key information, reformulates it, and summarizes it for salespeople. The system isn’t perfect, but it significantly improves sales productivity. “In many situations in companies, 80% to 90% accuracy is enough when there is human review,” Rambach noted. The key is educating users to review and improve the AI’s output rather than accepting it blindly. Schneider Electric is progressively moving toward more agentic process automation, shifting away from traditional robotic process automation while using AI as an adviser and recommender rather than a fully autonomous decision maker.</p>
<p>Schneider Electric takes employee understanding and behavior change seriously, implementing a tiered training approach that recognizes different needs across the organization. The company has made AI training mandatory for everyone but tailors the curriculum to four distinct groups. First, all employees, including those on production lines, receive foundational AI training. Second, management gets specialized training on leading AI initiatives and managing AI-enabled teams. Third, AI experts on Rambach’s team receive deep technical training. Finally, and most unusually, product managers, process owners, and IT owners receive training focused on how AI can enable transformation of their domains.</p>
<h3>An AI Organization Designed for Scale — Without Pilots</h3>
<p>Perhaps Schneider Electric’s most distinctive feature is its organizational model, which is explicitly designed to achieve impact across the organization quickly rather than generate pilots and experiments. “Our goal is not to have pilots and experiments: Use cases are deployed at scale,” Rambach emphasized.</p>
<p>This model rests on three components: a team of more than 350 people dedicated to AI; a comprehensive technical platform incorporating Microsoft Azure, Amazon Web Services, Databricks, large language model operations with retrieval-augmented generation, LangChain, and various APIs; and, perhaps most important, a structured process that guides initiatives from vision through ideation and incubation to deployment at scale.</p>
<p>At each gate in this process, the company confirms the business plan and business case. Success requires merging domain knowledge with AI expertise, bringing together product owners, IT professionals, data specialists, trainers, and marketing personnel. Some other organizations employ this <a href="https://hbr.org/2026/01/manage-your-ai-investments-like-a-portfolio" target="_blank" rel="noopener noreferrer">stage-gate approach to AI initiatives</a>, and we believe it is a useful way to increase the likelihood of a valuable outcome. It’s much more common, however, in new product development processes that don’t necessarily involve AI.</p>
<h3>Measuring Value Without Waiting for Certainty</h3>
<p>Measuring AI’s economic value presents challenges, particularly for customer-facing products, where the significance of advances is tricky to isolate. “It can be difficult in the customer product space to show value from tech improvements,” Rambach said. “What is the ROI from moving from desktops to laptops? What is the value of adding a communications protocol to a product?” The company does, however, track both usage rates and outcomes with AI-enabled products, such as energy savings achieved by customers.</p>
<p>For internal applications, Schneider Electric starts with a clear business value proposition and tracks two KPIs: an adoption target and a performance metric. The performance metric varies by use case; it might be accuracy, customer satisfaction scores, or a reduction in credit defaults. “One KPI in each,” Rambach said. “We follow whether it performs.”</p>
<p>Business stakeholders own the value proposition and help develop appropriate KPIs for their use cases. The company calculates total annual AI value and reports estimates to the board, projecting the technology’s impact over a four-year horizon, but it keeps these figures confidential.</p>
<p>Rambach cautioned against waiting for the perfect measurement approach before acting. “If you wait for clear measurement of value, you will miss a lot of opportunity,” he warned. This willingness to move forward with reasonable confidence rather than absolute certainty has enabled Schneider Electric to scale AI applications while competitors remain stuck in pilot purgatory.</p>
<h3>AI Management Lessons for Other Enterprises</h3>
<p>Schneider Electric’s approach to AI offers several lessons for companies seeking to scale beyond experimentation:</p>
<p><strong>Start with business value, not technology.</strong> Every AI initiative at Schneider Electric begins with business needs and customer pain points, not with questions about what’s possible with the latest AI models.</p>
<p><strong>Engage front-line employees in development.</strong> The people doing the work have essential domain knowledge that central AI experts lack. Effective AI requires that these perspectives be merged from the start.</p>
<p><strong>Embed AI in existing workflows.</strong> Rather than asking customers or employees to adopt new stand-alone tools, Schneider Electric builds AI capabilities into the systems people are already using.</p>
<p><strong>Design for scale from the beginning.</strong> Schneider Electric’s organizational model, technical infrastructure, and governance processes are all built to deploy production systems, not to create pilots.</p>
<p><strong>Invest in differentiated training.</strong> Different roles require different levels and types of AI literacy. A one-size-fits-all training program won’t cultivate the capabilities needed across the organization.</p>
<p><strong>Balance analytical and generative AI.</strong> Despite the current excitement around generative AI, traditional machine learning on structured data continues to deliver substantial value in many contexts.</p>
<p>As AI capabilities continue to evolve rapidly, Schneider Electric’s disciplined, business-driven approach provides a model for enterprises seeking to move beyond experimentation to genuine operational impact. By designing for scale, engaging front-line workers, and maintaining focus on measurable business value, the company has built an AI program that meets the objectives of both customers and employees. </p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/how-schneider-electric-scales-ai-in-both-products-and-processes/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Leaders at All Levels: Kraft Heinz’s 5X Speed Secret</title>
				<link>https://sloanreview.mit.edu/video/leaders-at-all-levels-kraft-heinzs-5x-speed-secret/</link>
				<comments>https://sloanreview.mit.edu/video/leaders-at-all-levels-kraft-heinzs-5x-speed-secret/#respond</comments>
				<pubDate>Thu, 12 Mar 2026 11:00:01 +0000</pubDate>
				<dc:creator><![CDATA[MIT Sloan Management Review. ]]></dc:creator>

						<category><![CDATA[Corporate Culture]]></category>
		<category><![CDATA[Leadership Vision]]></category>
		<category><![CDATA[Manufacturing]]></category>
		<category><![CDATA[Product Development]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[Webinars & Videos]]></category>
		<category><![CDATA[Culture]]></category>
		<category><![CDATA[Innovation Strategy]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Organizational Structure]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>

				<description><![CDATA[Is 36 months too long for a new-product cycle? It was for Kraft Heinz. So, starting with a pilot project, it was able to cut time to market to just six months by redesigning how people worked. Today, units throughout the company are applying that model’s step-by-step approach to change and are seeing measurable improvements [&#8230;]]]></description>
								<content:encoded><![CDATA[<p>Is 36 months too long for a new-product cycle? It was for Kraft Heinz. So, starting with a pilot project, the company cut time to market to just six months by redesigning how people worked. Today, units throughout the company are applying that model’s step-by-step approach to change and are seeing measurable improvements in both performance and employee satisfaction.</p>
<p>In this episode of <cite>Leaders at All Levels</cite>, Carolina Wosiack, Kraft Heinz’s global head of agile transformation, shares the playbook behind Kraft Heinz’s five-times-faster product launches — and the mindset shift it required.</p>
<h3>The Kraft Heinz Playbook: Borrow These Ideas</h3>
<ul>
<li><strong>Limit the pipeline.</strong> Teams were juggling as many as 20 projects. The “golden number” for Kraft Heinz? Seven. “Strategy is not just what you do — it’s what you don’t do,” Wosiack said.</li>
<li><strong>One list, locked.</strong> Replace disconnected project plans with a single backlog tied to financial outcomes.</li>
<li><strong>Pull, don’t push.</strong> Let results create demand.</li>
<li><strong>Grant decision rights.</strong> Remove hierarchical approvals. A project team in Canada was able to shift from endless sign-offs to quarterly check-ins with two or three people.</li>
<li><strong>Stop providing all of the solutions.</strong> Wosiack’s new standard response to questions: “I know you know the answer.”</li>
<li><strong>Fix the system before it breaks the business.</strong> The turning point in a transformation journey, Wosiack said, “is when teams are moving faster than the system.”</li>
</ul>
<h3>How It Works</h3>
<p>Wosiack’s group works with teams that <em>want</em> to change; they’re not forced into it. Collaborating with a team in Brazil, for example, they changed employees’ time allocation, expanded decision rights, and increased worker autonomy — and launched a new pasta sauce in six months instead of three years. Time spent in meetings dropped 31%, while employee engagement rose 55%. </p>
<p>Watch until the end, when hosts Kate W. Isaacs and Michele Zanini share four principles worth borrowing for your own team.</p>
<h5>Video Credits</h5>
<p><strong>Carolina Wosiack</strong> is the global head of agile transformation at Kraft Heinz.</p>
<p><strong>Kate W. Isaacs</strong> is a senior lecturer at the MIT Sloan School of Management.</p>
<p><strong>Michele Zanini</strong> is coauthor of the <cite>Wall Street Journal</cite> bestseller <cite>Humanocracy</cite> (Harvard Business Review Press, 2020).</p>
<p><strong>M. Shawn Read</strong> is the multimedia editor at <cite>MIT Sloan Management Review</cite>.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/video/leaders-at-all-levels-kraft-heinzs-5x-speed-secret/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
			</channel>
</rss>