<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

	<channel>
		<title>MIT Sloan Management Review</title>
		<atom:link href="http://sloanreview.mit.edu/feed/" rel="self" type="application/rss+xml"/>
		<link>https://sloanreview.mit.edu</link>
		<description>Sustainable Innovation</description>
		<lastBuildDate>Fri, 15 May 2026 14:11:40 +0000</lastBuildDate>
		<language>en-US</language>
				<sy:updatePeriod>hourly</sy:updatePeriod>
				<sy:updateFrequency>1</sy:updateFrequency>
		<generator>https://wordpress.org/?v=6.9.4</generator>
			<item>
				<title>How Job Design for Disability Improves Work for Everyone</title>
				<link>https://sloanreview.mit.edu/article/how-job-design-for-disability-improves-work-for-everyone/</link>
				<comments>https://sloanreview.mit.edu/article/how-job-design-for-disability-improves-work-for-everyone/#respond</comments>
				<pubDate>Thu, 14 May 2026 11:00:09 +0000</pubDate>
				<dc:creator><![CDATA[David Dwertmann, Stephan Böhm, Kristie McAlpine, and Mukta Kulkarni. <p>David Dwertmann is an associate professor of management at the Rutgers University-Camden School of Business. Stephan Böhm is an associate professor of diversity management and leadership at the University of St. Gallen in Switzerland, where he directs the Institute for International Management and Diversity Management. Kristie McAlpine is an assistant professor of management at the Rutgers University-Camden School of Business. Mukta Kulkarni is a professor of organizational behavior and human resource management at the Indian Institute of Management.</p>
]]></dc:creator>

						<category><![CDATA[Creativity]]></category>
		<category><![CDATA[Design]]></category>
		<category><![CDATA[Employee Safety]]></category>
		<category><![CDATA[Innovation Process]]></category>
		<category><![CDATA[User Experience]]></category>
		<category><![CDATA[Diversity & Inclusion]]></category>
		<category><![CDATA[Innovation]]></category>
		<category><![CDATA[Talent Management]]></category>

				<description><![CDATA[Gary Waters / Ikon Images Disability-related innovations are all around us. Curb cuts in sidewalks, originally designed for wheelchair users, benefit caregivers with strollers, travelers with suitcases, and delivery workers with hand trucks. Automatic doors intended for individuals with mobility impairments are convenient for all. Blurred backgrounds in video calls, standing desks and ergonomic keyboards, [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/05/Dwertman-1290x860-1.jpg" alt="" class="wp-image-127174" /><figcaption>
<p class="attribution">Gary Waters / Ikon Images</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">Disability-related innovations</span> are all around us. Curb cuts in sidewalks, originally designed for wheelchair users, benefit caregivers with strollers, travelers with suitcases, and delivery workers with hand trucks. Automatic doors intended for individuals with mobility impairments are convenient for all. Blurred backgrounds in video calls, standing desks and ergonomic keyboards, and speech and voice recognition tools were all designed to assist people by minimizing distractions, easing lipreading, reducing chronic pain, and supporting people with mobility impairments — and all are now widely used by the general public. Every day, people with and without disabilities use numerous innovative accommodations that have become indispensable mainstream tools — to such an extent that few people realize that the features were originally developed to address disability-related needs.</p>
<p>In short, what's often labeled a burden can be a source of practical innovation. But many managers still view disability at work through the negative lens of cost and compliance. Our research suggests a more positive, generative perspective.<a class="reflink" id="reflink1" href="#ref1">1</a> When a team includes someone with a disability, coworkers often view their own work with fresh eyes. They notice previously overlooked inefficiencies and barriers, question operating assumptions about how tasks "must" be done, and propose better ways to design work. Those changes typically improve work for all, making the job easier and safer for everyone, not just the person with a disability who needed an accommodation.</p>
<p></p>
<p>Functional impairments associated with disability, then, can signal suboptimal job design. Many workplaces are implicitly built for an "ideal," able-bodied worker who never tires, strains, or loses focus. Designing for a broader range of workers is not just fair; it's a necessary way to reflect reality. For example, as workforces age — a looming reality for many industrialized countries — jobs that function effectively only for the ideal worker will become harder to staff and sustain.</p>
<h3>From Accommodations to Task Redesign</h3>
<p>The usual organizational response to disability-related functional limitations is to grant an individual an accommodation so that person can keep doing the job as designed. While this fulfills legal requirements, it misses a larger opportunity: Treating a functional limitation as a spotlight on the job itself can reveal where its design is ineffective. For example, coworkers may step back and ask simple, practical questions: Why is this light so bright or this office so noisy? Why must this process be carried out in one continuous stretch? Why do we lift heavy objects here at all? Why does this task require reaching overhead? Such questions can lead to changes in work design that reduce strain and errors and — importantly — improve conditions for all team members.</p>
<p></p>
<p>To illustrate, imagine a manufacturing job that regularly requires workers to lift 40 pounds unassisted. An employee with chronic back pain cannot safely do that and, therefore, requests an accommodation. The conventional response would be to treat the problem as individual and exceptional: to reassign duties, add a second person to assist with lifting, or add breaks for recovery.<a class="reflink" id="reflink2" href="#ref2">2</a> A more sustainable response would be to redesign the work process itself so that the task no longer depends on unassisted human strength. That might involve the use of load-sharing equipment, height-adjustable fixtures, or newer tools, such as industrial exoskeletons, to offload spinal strain.</p>
<p>Though initially introduced to enable a single employee with an impairment to perform a job, such redesigns have broader value: Coworkers experience less fatigue, and overall injury risk declines. What begins as a disability accommodation becomes a more effective way of organizing work for everyone.</p>
<div class="callout-highlight">
<aside class="l-content-wrap">
<article>
<h4>Research: The Presence of Disability Changes How Coworkers Think</h4>
<p>While conducting research in a large automotive manufacturing setting, we observed that workers exposed to a teammate's functional impairment thought more broadly about how to improve work tasks. Coworkers examined their routines more carefully and noticed more opportunities for process improvements. They generated ideas more creative than straightforward "try harder" solutions, such as redesigning the workspace layout, reconsidering lighting and background noise, building more effective team processes, and using assistive tools. Critically, employees' ideas didn't just pile up in a suggestion box; instead, an expert panel reviewed all of them regularly, and the employees who suggested the best continuous improvement ideas were invited to participate in their implementation.</p>
<div class="callout-toggle">
<p>We also engaged people in thought experiments in which they were prompted to consider new solutions for the same work task on behalf of a fictional colleague with a common physical disability in a manufacturing setting — specifically, chronic back pain or rheumatoid arthritis in the fingers and hands. Not only did participants generate more suggestions beyond the typical "train more" and "work harder" mold; they also provided additional categories of more novel ideas concerning assistive tools, ergonomic setups, and health-related measures.</p>
</div>
</article>
</aside>
</div>
<p></p>
<h3>Innovation to Decrease Inefficiencies</h3>
<p>When teams begin to brainstorm possible improvements, they often suggest incremental adjustments within the existing system, such as ways to make it easier to lift tools, rather than questioning current norms, such as asking why tools are stored at floor level in the first place. It's common for people to take their current job setups for granted and rely on well-worn patterns and processes.</p>
<p>But when employees work alongside a colleague with a disability, inefficiencies become more salient. Disability can function as a prompt for people to rethink a task entirely and make new, creative connections between formerly disparate concepts, leading them to adopt ideas from other areas of life. Perhaps the magnetic strip they use to store knives along their kitchen wall will inspire a new arrangement of work materials at waist height rather than on the floor, effectively bringing the tools to the worker rather than the worker to the tools.</p>
<p>Taking the perspective of a colleague with a functional limitation can be the genesis for higher idea counts and greater idea novelty in the office, just as on the production floor. Once teams stop accepting able-bodied and neurotypical defaults as inevitable, they start proposing solutions that challenge the job's inherent design and could improve outcomes for all workers, such as in these examples.</p>
<p><strong>Documented workflows.</strong> Much of work relies on tacit and ambiguous knowledge, which becomes more obvious when a neurodivergent employee asks for clearer instructions and fewer unwritten rules. Structured processes designed as an accommodation — step-by-step guides, documented workflows, and simplified interfaces — can lower the cognitive load for the whole team, resulting in more effective knowledge sharing, fewer errors, and smoother collaboration.</p>
<p></p>
<p><strong>Enhanced audio tools.</strong> Captions in video calls highlight how much workplace communication depends on audio-only information, which can be a challenge for colleagues who are deaf or hard of hearing. Originally introduced as an accessibility support and now built into most video-meeting tools, captioning creates searchable transcripts, improves attendee comprehension, and makes it easier to follow along in noisy or distracting environments. The result is easier documentation and more inclusive, efficient communication overall.</p>
<p><strong>Structured brainstorming processes.</strong> "Think fast" dynamics can silence good ideas and stymie participation. During fast-paced brainstorming and immediate critique, some colleagues may take the floor while others tend to withdraw, due perhaps to an anxiety disorder, fear of public speaking, or simple introversion. Structured practices — such as two-phase ideation (silent generation followed by later discussion); anonymous digital brainstorming, which separates ideas from the individual; and feedback templates that pre-structure how input is given — can increase psychological safety. The result is broader participation and more generative, collaborative meetings not dominated by the fastest thinkers or the loudest voices.</p>
<h3>What Leaders Can Do Right Now</h3>
<p>Designing with disability in mind uncovers friction points, streamlines processes, and enhances the work experience for all. But managers may face some common concerns.</p>
<ul>
<li>"Will this slow us down?" Not if you keep changes small and reversible. Many tests take little time to set up and can run during normal operations. The goal is to save time by removing wasted motion and to prevent errors that may require hours of rework later.</li>
<li>"Won't people feel singled out?" Keep the discussion about the job, not the person. Use generic prompts ("assume no overhead reach") to depersonalize the analysis. Participation by anyone with a functional limitation should be voluntary. The aim is safer, steadier work for everyone.</li>
<li>"We tried a suggestion program and it fell flat." This is not about collecting more suggestions. It is about exploring more kinds of potential solutions, promptly trying them out, and integrating what works into existing operations.</li>
</ul>
<p></p>
<p>To more successfully tap your team's creativity in rethinking job design, we offer the following suggestions.</p>
<p><strong>Involve the people who live with the consequences.</strong> The best redesigns come from the people who do the work: the employee who raised the issue, two or three coworkers, their supervisor, and a safety or ergonomics partner. Ten minutes at the workstation beats an hour in a conference room. Keep the tone neutral and the focus on the task: What does the job need to look like so more of us can do it well and safely?</p>
<p><strong>Run short "assumption reviews" on your highest-friction tasks.</strong> For primarily physical jobs, identify a work task with frequent near misses, rework, or strain complaints. Film 30 to 45 seconds of the work being done, and then watch the clip with the people who do the job and a few of their peers. Then ask questions about what you see. For example, you might ask: What would we change if no one could lift more than 20 pounds? If overhead reach were not an option, how would we set this up? If glare and noise were dialed down, what would change? If we had to add a short pause every 20 minutes, when should it occur?</p>
<p>For desk jobs, an employee might record their screen as they complete a given task, or document each step and subtask if making a recording isn't feasible. Then have team members review and investigate, asking questions like: How many different tools are being used to complete this task? Are they all needed? Is there any manual duplication or cutting and pasting that can be eliminated or automated? Is every subtask necessary? Is there any unnecessary data being entered and tracked?</p>
<p>Then ask for improvement ideas, aiming for breadth. Collect ideas in buckets: for example, for computer-based tasks, creating templates, introducing automation, or discontinuing the use of duplicative tracking tools; and for manual tasks, considering ergonomics, assessing tools and fixtures, or rethinking pace and scheduling. Encourage more categories of solutions, not just more versions of the same one. Then select the most promising ones and test and fine-tune them to arrive at the best solutions.</p>
<p></p>
<p><strong>Treat accommodation requests as design leads, not paperwork.</strong> When someone requests an accommodation, walk the job together and capture the underlying friction in plain terms. Turn that into a design hypothesis: The load needs to be at shoulder height to ease strain, the door needs to be closed to reduce background noise for colleagues with hearing impairments, or the light needs to be diffused to reduce eyestrain. Try the smallest change that could work. If the change makes the job better for the original worker, keep it. If it doesn't, debrief what you learned and try the next simplest idea. This way, you'll learn a lot more about the intricacies of the job and can identify promising directions for job redesign.</p>
<p></p>
<p>Rather than treating disability solely as an exception to be managed, try thinking of it as natural variation and a prompt for redesigning work. With this mindset, your team can find better ways to do the work that benefit everyone. Start small, with one high-friction task. Look at it through the eyes of someone who cannot do it the way it is designed today. Try a couple of small changes. Keep what works.</p>
<p>That simple practice — noticing, questioning, experimenting, and adopting — is how practical innovation can thrive. Thoughtful organizational and societal design with and for people with disabilities often leads to improvements that benefit everyone. Over time, improvements will compound and garner lasting advantages and ongoing innovation.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/how-job-design-for-disability-improves-work-for-everyone/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Resolve the Conflict Between Efficiency and Resilience</title>
				<link>https://sloanreview.mit.edu/article/resolve-the-conflict-between-efficiency-and-resilience/</link>
				<comments>https://sloanreview.mit.edu/article/resolve-the-conflict-between-efficiency-and-resilience/#respond</comments>
				<pubDate>Wed, 13 May 2026 11:00:40 +0000</pubDate>
				<dc:creator><![CDATA[Vishal Ahuja, Yasin Alan, and Mazhar Arıkan. <p>Vishal Ahuja is an associate professor and a Corrigan Research Professor at the Southern Methodist University’s Cox School of Business. Yasin Alan is an associate professor at Vanderbilt University’s Owen Graduate School of Management. Mazhar Arıkan is an associate professor and an Anderson Family Fellow at the University of Kansas School of Business.</p>
]]></dc:creator>

						<category><![CDATA[Analytics & Performance]]></category>
		<category><![CDATA[Customer Experience]]></category>
		<category><![CDATA[Efficiency]]></category>
		<category><![CDATA[Key Performance Indicators]]></category>
		<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[Performance Strategies]]></category>
		<category><![CDATA[Resilience]]></category>
		<category><![CDATA[Customers]]></category>
		<category><![CDATA[Operations]]></category>
		<category><![CDATA[Quality & Service]]></category>

				<description><![CDATA[Ellice Weaver/Ikon Images Operational efficiency is critical for both financial success and customer satisfaction. Efficient systems, characterized by minimal buffers and idle time, tight schedules, and maximum asset utilization, allow organizations to do more with less, thereby boosting revenue and appealing to time-sensitive customers. However, such systems often lack resilience, increasing an organization’s vulnerability to [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/05/2026SUMMER_Ahuja-1290x860-1.jpg" alt="" class="wp-image-127112"/><figcaption>
<p class="attribution">Ellice Weaver/Ikon Images</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">Operational efficiency is critical</span> for both financial success and customer satisfaction. Efficient systems, characterized by minimal buffers and idle time, tight schedules, and maximum asset utilization, allow organizations to do more with less, thereby boosting revenue and appealing to time-sensitive customers. However, such systems often lack resilience, increasing an organization’s vulnerability to operational disruptions.</p>
<p>The tension between efficiency and resilience is especially visible in the airline industry. A resilient airline network can absorb disruptions, protect passengers from severe service failures, and recover quickly without incurring excessive costs. But airlines also face constant pressure to offer faster itineraries and maximize the use of costly resources, such as their fleet of aircraft and flight crews. Meanwhile, passengers strongly favor efficiency as well, in the form of shorter travel times with minimal layovers. Those preferences can lead to itineraries with little to no time buffer to absorb the delays or cancellations common to air travel, leaving passengers frustrated and stuck waiting in terminals. Moreover, such disruptions propagate throughout interconnected networks, affecting passengers, flight connections, crew schedules, and aircraft positioning. These ripple effects result in significant financial and reputational damage for airlines.<a id="reflink1" class="reflink" href="#ref1">1</a></p>
<p>This challenge is not unique to airlines, however. Supply chain managers need to balance inventory costs against the risks of stockouts. Health care systems strive to optimize patient flows and increase throughput while maintaining quality of care. Despite the operational differences across these contexts, the fundamental challenge is the same: How can organizations design operations that are both efficient and resilient?</p>
<p>Our analyses of millions of flights and airline passenger journeys in several academic studies reveal why managers do not need to treat efficiency and resilience as opposing goals. We identify three actionable strategies that enable organizations to achieve both objectives, whether managing flight schedules, patient flows, call center operations, or global supply chains.</p>
<p></p>
<h3>Strategy 1: Measure What Matters to Customers</h3>
<p>Traditional operational performance metrics often do not reflect customer experience, leading to perverse incentives that can weaken actual service quality and system resilience. For example, in the U.S. airline industry, the Department of Transportation (DOT) publishes its <em>Air Travel Consumer Report</em> each month. The report includes statistics for all major carriers on the percentage of flights that arrive within 15 minutes of the scheduled arrival time. Known as on-time performance (OTP), this metric serves as a proxy for service quality and reliability in the DOT’s and others’ rankings of airlines.</p>
<p>At first glance, publicizing OTP appears to benefit consumers: One might expect that being judged on this metric incentivizes airlines to reduce flight delays. However, as is often the case with KPIs, organizations give in to the temptation to game the metric: Airlines often add time to the flight durations they publish, to improve their chances of being “on time” — a practice known as <em>schedule padding</em>.<a id="reflink2" class="reflink" href="#ref2">2</a> This helps airlines boost their OTP statistics without meaningfully improving their reliability.</p>
<p>Putting the airlines’ strategic gaming behavior aside, it is questionable how useful a measure OTP is for customers. Passengers on nonstop flights are likely to experience a 20-minute-late arrival as only a marginal inconvenience, and they’re unlikely to be concerned that their flight missed the scheduled arrival time by 14 minutes but was “on time” within the DOT’s 15-minute cutoff. However, for a connecting passenger with a tight layover, a 14-minute delay may be long enough to cause a missed connection — and hours spent in the airport waiting to get onto another flight.</p>
<p>Given that OTP does not accurately capture what matters to passengers, in a recent study we published, we urged the DOT to release more informative passenger-level statistics, such as the proportion of passengers reaching their final destinations within defined delay time intervals (within 15 minutes, 15 minutes to one hour, one to two hours, two to three hours, and more than three hours of the scheduled arrival time).<a id="reflink3" class="reflink" href="#ref3">3</a> In the long run, shifting the focus to passenger travel times (including total flight times, layover times, and potential missed connections and flight delays in each leg) can incentivize airlines to pay more attention to passengers’ travel experiences. Both efficiency and resilience could improve as a result, because airlines would have a stronger incentive to create more efficient routes and itineraries, with shorter total travel times that are less susceptible to missed connections and long delays.</p>
<p>Similar issues arise in other industries, where the operational performance metrics an organization uses can negatively affect efficiency and reliability. For example, in health care, hospitals often track operating room utilization rates as a key efficiency metric. While high utilization sounds desirable, it can incentivize hospital administrators to create extremely tight surgery schedules, with back-to-back procedures. This practice leaves little buffer for unexpected complications or overruns, causing subsequent patients to experience long delays or even cancellations and negatively affecting efficiency, care quality, and patient satisfaction.</p>
<p>In supply chain management, companies typically use high inventory turnover ratios as a proxy for operational efficiency. However, an excessive focus on this metric can prompt businesses to cut safety stock too aggressively, heightening the risk of shortages, delaying downstream production, and leaving customers with long wait times for the products they want to purchase. And in customer service, when call centers measure agents’ performance by the number of calls they handle, that may motivate agents to rush their interactions with customers. That, in turn, can lead to a reduction in problem-resolution quality and an increase in repeat calls, thereby prolonging the total time it takes to resolve an issue. These examples illustrate how narrowly defined metrics can distort incentives, prompting companies to optimize for the metric rather than the actual service experience.</p>
<p>Defining the right performance metrics can be challenging because metrics must balance an organization’s efficiency objectives with other managerial considerations (such as service reliability, customer satisfaction, and fairness). One way to attain this balance and ensure that high efficiency does not come at the expense of customers is to treat performance measurement as a paired system, with one metric that tracks operational efficiency and another that measures what matters to customers. For example, in our recent study, we developed two metrics — one to track efficiency and the other to assess resilience — to ensure that an airline’s focus on efficiency (measured by short scheduled travel times for passengers) would not lead to long travel delays due to missed connections.<a id="reflink4" class="reflink" href="#ref4">4</a> Similarly, hospitals can measure both operating room utilization and the delays surgical patients experience when extremely tight surgery schedules are disrupted.</p>
<p>Appropriate performance metrics should be easy for stakeholders to understand while accurately capturing an organization’s performance objectives and customer satisfaction levels. Defining such measures requires input from key stakeholders, including customers. Misaligned incentives pose another risk, especially when metrics are tied too rigidly to employee performance evaluations: Such tight linkages may incentivize employees (or even an entire organization, as documented in the airline industry) to game the system rather than improve service quality. Thus, designing incentive structures that reward long-term service quality based on a combination of efficiency and resilience rather than short-term metric gains is another important step. Finally, periodic reviews and stakeholder input can help ensure that metrics evolve with changing customer expectations and operational dynamics.</p>
<p></p>
<h3>Strategy 2: Avoid a One-Size-Fits-All Approach and Deploy Buffers Strategically</h3>
<p>Measuring what matters to customers can motivate an organization to be more proactive in preventing major service or supply failures and improving its resilience to disruptions. In the airline industry, major service failures often take the form of flight cancellations and long delays, which hurt airlines’ current and future financial performance.<a id="reflink5" class="reflink" href="#ref5">5</a> While some disruption triggers, such as severe weather, are beyond companies’ control, airlines can influence how disruptions propagate through their networks by strategically designing flight schedules.</p>
<p>Strategic scheduling involves building sufficient flight and ground time buffers to reduce the risk of delays cascading from one flight to the next. Traditionally, some airlines have relied on simple rules of thumb, such as using historical data to estimate the average time to complete a flight and then adding a fixed buffer. For example, if a flight averages one hour, adding a 15-minute buffer results in a scheduled flight time of one hour and 15 minutes. However, this one-size-fits-all approach can negatively affect both efficiency and resilience: A buffer may be unnecessarily long for some flights, reducing efficiency, but insufficient for others, increasing an airline’s vulnerability to disruptions.</p>
<p>A more strategic and data-driven approach considers both the likelihood and consequences of a disruption. In the airline industry, our analyses revealed that airport congestion levels, layover times, weather conditions, and the time of day predict the likelihood of a disruption.<a id="reflink6" class="reflink" href="#ref6">6</a> Moreover, the operational consequences of a disruption vary by flight: A delayed flight with many connecting passengers can create severe ripple effects, whereas a delayed aircraft on its last flight of the day, carrying no connecting passengers, will have minimal impact on the network. Allocating larger buffers where disruption risk and impact are high, and smaller buffers where they are low, can improve efficiency and resilience simultaneously.</p>
<p></p>
<p>The same principles apply to other industries. In health care, setting the duration and sequence of surgeries based on patients’ risk profiles and the consequences of potential delays to subsequent procedures can improve overall system performance.<a id="reflink7" class="reflink" href="#ref7">7</a> In call center operations, staffing schedules can be determined by considering not just the expected length of each call but also the likelihood of follow-up interactions resulting from unresolved issues. In supply chain management, rather than applying uniform inventory policies across all products, companies can tailor their safety stock levels and replenishment intervals to account for supply disruption risks and their downstream consequences, such as production delays due to material shortages. In project management, allocating contingency time to tasks with high interdependencies or critical-path activities can prevent small delays from cascading into major schedule overruns.</p>
<p>Deploying buffers strategically is easier said than done. It requires granular data, advanced analytics, and coordination across multiple business functions (such as marketing, network planning, and an airline’s ground operations teams). Historical averages and one-size-fits-all approaches are simple and familiar, which makes them hard to replace with dynamic, context-specific, risk-based approaches. Moreover, managers often face resistance when buffer adjustments appear to reduce efficiency in the short term, even though they can improve both efficiency and resilience in the long run. To overcome these challenges, organizations should start small by simulating what-if scenarios and piloting data-driven scheduling in high-risk areas within a division. They should also invest in predictive analytics to identify disruption patterns and communicate the long-term benefits of resilience to stakeholders. Embedding analytics into planning processes and getting buy-in from stakeholders can ensure that buffer allocation becomes a strategic lever rather than an ad hoc decision.</p>
<h3>Strategy 3: Curate Personalized Customer Options to Maximize System Performance</h3>
<p>In many organizations, operations teams prefer to limit the number of product or service options presented to customers to keep processes streamlined. Sales and marketing teams often advocate for more options to increase the likelihood of meeting diverse customer needs. However, offering a broad choice set significantly increases operational complexity, which can compromise reliability.</p>
<p>In the airline industry, itineraries with layovers highlight the risks of offering too many options to passengers. For instance, consider a passenger who planned to travel from Knoxville, Tennessee (TYS), to Greensboro, North Carolina (GSO), on Sunday, Jan. 11, 2026. Upon searching for flights on a popular travel website, we found that American Airlines offered 13 itinerary options via its hub in Charlotte, North Carolina (CLT), including one with only a 30-minute layover, where the first leg would arrive at CLT at 1:38 p.m. and the second leg would depart for GSO at 2:08 p.m. Similarly, Delta Air Lines offered six options through its hub in Atlanta (ATL), one of which allowed just 40 minutes between flights, with the first leg arriving at ATL at 5:13 p.m. and the second leg departing from ATL at 5:53 p.m. Given the sizes and congestion levels of CLT and ATL, both itineraries posed a significant risk of a missed connection. More broadly, airline reservation systems routinely display such options as long as they meet the minimum connection times published in the International Air Transport Association database, even when they leave little margin for actual delays.</p>
<p>In our recent study, we used a proprietary passenger-level data set provided by Southwest Airlines to simulate the impact of tight layovers on system performance.<a id="reflink8" class="reflink" href="#ref8">8</a> Our analysis found that identifying itineraries with short layovers and preventing passengers from booking them can significantly enhance resilience by reducing missed connections. Notably, curating the choice set by removing risky itineraries does not materially deteriorate efficiency, given that switching from an itinerary with a short layover to one with a slightly longer layover typically increases the total scheduled travel time only marginally (by 10 to 15 minutes, in many cases). Indeed, the actual travel time of an itinerary with a slightly longer layover may be significantly shorter due to the elimination of a missed connection.</p>
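<p>The curation logic described above can be illustrated with a short filter over a choice set. The sketch below uses hypothetical itineraries and an assumed 45-minute safety threshold; actual thresholds would depend on the airport and time of day.</p>

```python
# Hypothetical itineraries: (route, layover minutes, total scheduled minutes).
itineraries = [
    ("TYS-CLT-GSO", 30, 185),
    ("TYS-CLT-GSO", 65, 200),
    ("TYS-ATL-GSO", 40, 210),
    ("TYS-ATL-GSO", 75, 225),
]

MIN_SAFE_LAYOVER = 45  # assumed threshold, not a published IATA minimum

def curate(options, min_layover=MIN_SAFE_LAYOVER):
    """Drop itineraries whose layover leaves too little margin for delays."""
    return [o for o in options if o[1] >= min_layover]

safe = curate(itineraries)
for route, layover, total in safe:
    print(route, layover, total)
```

<p>In this toy choice set, removing the two risky options lengthens the scheduled trip on each route by only 15 minutes, while eliminating the itineraries most likely to end in a missed connection.</p>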
<p>The concept of curating customer choices to enhance resilience can be applied to many industries. In health care, to reduce the risk of cascading delays, hospitals could limit scheduling options for elective surgeries when operating room capacity is constrained. To avoid missed commitments, retailers might restrict delivery time slots during periods of high uncertainty in supply chains. By strategically shaping the choice set rather than leaving all options open, businesses can mitigate vulnerabilities without materially compromising customer experience, especially when the curated choice sets impose only minor trade-offs in efficiency.</p>
<p>Implementing curated choice sets is far from straightforward. It requires predictive models that accurately assess risk under varying conditions, as well as real-time systems to update available options dynamically. Internal tensions naturally arise when multiple functions, such as marketing, operations, and IT, must agree on where to curtail customer choices. In particular, marketing teams may worry about lost revenue from eliminating certain options. It is thus essential that organizations quantify both the revenue potential of risky options and the actual costs of disruptions that arise as a result of offering them. (For airlines, paying for stranded travelers’ hotel accommodations, meal vouchers, and rebooking expenses erodes profitability.) Organizations should first pilot curated options in high-risk contexts to validate their benefits. They can use the same analytical insights that helped them curate options to explain to customers that their choices have been limited to safeguard reliability.</p>
<p>In an era of rising customer expectations and increasingly complex service systems, overcoming the traditional efficiency-resilience trade-off is a critical operational skill that can define the trajectory of a service organization. Companies that can deliver both speed and reliability will not only meet customer demands but also differentiate themselves in highly competitive markets. The key managerial takeaway across our research studies is that organizations should proactively design their operations to build resilience into their systems rather than relying on reactive, ad hoc fixes after disruptions occur. Such proactive design requires that they choose performance metrics that reflect customer experience, build systems that can absorb variability, and shape customer choices so that the organization continues to run reliably when conditions become challenging. As disruptions become the norm rather than the exception, reconciling efficiency and resilience by design rather than by reaction can separate companies that cope from those that succeed.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/resolve-the-conflict-between-efficiency-and-resilience/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Beyond Verification — What Responsible AI Really Demands of Human Experts</title>
				<link>https://sloanreview.mit.edu/article/beyond-verification-what-responsible-ai-really-demands-of-human-experts/</link>
				<comments>https://sloanreview.mit.edu/article/beyond-verification-what-responsible-ai-really-demands-of-human-experts/#respond</comments>
				<pubDate>Tue, 12 May 2026 11:00:27 +0000</pubDate>
				<dc:creator><![CDATA[Elizabeth M. Renieris, David Kiron, Steven Mills, and Anne Kleppe. ]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Human-Machine Collaboration]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[IT Governance & Leadership]]></category>
		<category><![CDATA[Managing Technology]]></category>
		<category><![CDATA[Technology Implementation]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>
		<category><![CDATA[Responsible AI]]></category>

				<description><![CDATA[For the fifth year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us understand how responsible artificial intelligence is being implemented across organizations worldwide. In our first post this year, we explored how organizations should think [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/BCG-RAI_2026_ExpertPanel01-1290x860-2.jpg" alt="" /><br />
</figure>
<p>For the fifth year in a row, <cite>MIT Sloan Management Review</cite> and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us understand how responsible artificial intelligence is being implemented across organizations worldwide. In our first post this year, we explored how organizations should think about AI’s impact on the workforce, with our experts stressing that responsible AI means looking beyond the safety of AI systems to address real-world consequences for workers and economic stability. </p>
<p>This time, we asked our panel to react to the following provocation: <em>Responsible AI efforts fail if they don’t cultivate human experts who can verify AI solutions</em>. On the surface, there is broad consensus, with a clear majority (84%) of our panelists agreeing or strongly agreeing with the statement. But a deeper dive reveals that panelists define <em>verification</em> far more expansively than the provocation implies. Rather than treating it as a narrow, output-by-output check, they describe verification as the work of applying human judgment across an AI system’s life cycle, interpreting context, designing tests, auditing workflows, setting thresholds, weighing when AI should not be relied on at all, and carrying the accountability that machines cannot. Understood this way, verification is not a final checkpoint but the connective tissue of responsible AI, encompassing the design, oversight, and accountability that organizations need to scale alongside the systems themselves. Below, we share panelist insights and offer our practical recommendations for organizations seeking to cultivate the human expertise their responsible AI governance efforts depend on.</p>
<div class="callout-highlight callout-highlight--transparent">
<aside class="l-content-wrap">
<article>
<h4>If human experts cannot verify AI solutions, RAI efforts have failed.</h4>
<p class="caption mb30">Eighty-four percent of panelists agree or strongly agree that RAI efforts have failed if they do not cultivate human experts who can verify AI solutions.</p>
<p><img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/RAI2026-HumanExperts-Article2.png" alt="Bar Chart: Strongly disagree: 6%; Disagree: 6%; Neither agree nor disagree: 3%; Agree: 42%; Strongly agree: 42%"/></p>
<p class="attribution">Source: Panel of 31 experts in artificial intelligence strategy.</p>
</article>
</aside>
</div>
<p><strong>Humans provide the context for verifying AI outputs.</strong> ForHumanity founder Ryan Carrier backed the consensus that responsible AI efforts must cultivate human expertise to verify AI outputs because, as he puts it, “context matters.” Similarly, TÜV AI.Lab CEO Franziska Weindauer notes, “AI solutions operate within complex real-world contexts, and human experts are essential to interpret results, detect failures, and ensure that systems function as intended.” As GovLab chief research and development officer Stefaan Verhulst explains, “Many of the most significant risks of AI are societal rather than technical, such as misalignment with public values, harmful impacts on vulnerable groups, or inappropriate deployment contexts.” Those risks, many experts contend, are precisely the ones hardest to address with a wholly technical solution. </p>
<p>For some, context is irreducibly human and cannot be captured in machine-readable form alone. As OdeseIA president Idoia Salazar explains, “Not everything is translated into data, such as context in a specific situation.” Yasadora Cordova, distinguished member of the investments committee of the Co-Develop fund, agrees that responsible AI requires “contextual sensitivity” — a quality that, in her view, “cannot be automated.” Jai Ganesh, Ph.D., vice president of technology, connected services, engineering, at Wipro Ltd., adds, “Situational awareness is another area of concern for AI systems where an output that is correct may be culturally insensitive or legally problematic in a specific country or situation.” Automation Anywhere’s Yan Chow similarly observes that “humans identify sociopolitical nuances and shifts that data cannot capture.” For these reasons, National University of Singapore provost Simon Chesterman concludes that “however sophisticated the model or elaborate the governance framework, someone must still be capable of asking whether a system is reliable, lawful, and appropriate in context,” a responsibility, in his view, that requires human expertise.</p>
<p>If context cannot be fully captured by machines, the practical consequences are significant. Carrier argues that “domain experts are necessary to provide feedback and risk assessments that result in well-tailored controls, treatments, and mitigations designed to tackle the specific and unique risks presented by context-dependent AI deployment and usage.” Salazar goes further, contending that “no matter how advanced a tool is, it cannot be the one to guarantee that its outputs are fair, safe, or appropriate to the context.” For Ganesh, the risks are heightened with “edge cases, rare scenarios, and new contexts where AI systems tend to break down,” and he believes “catching these failures requires human judgment and deep domain expertise.” Chow agrees that human expertise is critical for building “expert-validated guardrails for the edge cases where AI is most fragile.” Moreover, he argues that “responsible AI frameworks collapse into compliance theater without human experts because AI cannot perceive dynamic context.”</p>
<p><strong>Losing human expertise poses an existential threat to organizations.</strong> The concern is not only that AI systems will fail without human expertise to verify outcomes but that organizations may lose human expert capacity over time. Cordova argues that “organizations that delegate verification only to AI erode the institutional capacity to audit it as expertise atrophies and junior staff never develop independence.” Likewise, consultant Linda Leopold cautions, “If we always let AI do the work for us, we gradually lose the expertise needed to oversee it,” and “we need to keep human judgment sharp enough to challenge it.” EnBW chief data officer Rainer Hoffmann says, “Responsible AI efforts fail not because humans cannot verify every AI decision but because organizations lack the expertise to govern how AI systems should be evaluated, monitored, and deployed responsibly.” </p>
<p>The business stakes, through this lens, are fundamentally human. As Australian National University’s Belona Sonna contends, “The core objective of responsible AI is not only to design systems that align with ethical principles but also to ensure that humans remain capable of intervening when misalignment occurs.” Put differently, Salazar says that responsible AI “needs people who are prepared not to delegate to machines what remains a fundamentally human responsibility.” Without this capacity, the question of whether responsible AI requires human verification of AI outputs becomes moot — as no one left has the expertise to do it.</p>
<p><strong>Human verification alone does not scale.</strong> Despite broad support for the importance of cultivating human expertise, many experts cite concerns about the scale and scope of human verification. Wharton School professor Kartik Hosanagar explains: “There are many settings where it’s helpful to have human verification. But there are many others where human verification is infeasible because of the scale of verification needed.” Hoffmann agrees that for “applications that process large volumes of data or detect patterns beyond human capability, output-by-output human verification is neither feasible nor meaningful.” For some experts, requiring human verification to scale in this way would undermine the entire value proposition of using AI in the first place. As Öykü Işik puts it, “the core value of AI lies in its speed and scale,” such that “requiring human verification for every output would effectively neutralize these efficiency gains.”  </p>
<p>The solution, for these experts, is not to abandon human judgment but to deploy it more strategically. Philip Dawson, head of AI policy at Armilla AI, believes that “as AI systems grow in complexity and deployment velocity, human-only verification becomes a structural bottleneck” and requires a different approach. Citing cybersecurity as an example, Işik contends that a system needs the ability to identify when human intervention is needed “while relying on automated decision-making for the bulk of the workload to avoid massive operational bottlenecks” and argues that “the most successful responsible AI efforts treat human expertise and automated tools as a combined system.” Alyssa Lefaivre Škopac, director of trust and safety at Alberta Machine Intelligence Institute, advocates for a “defense-in-depth approach that spans everything from front-line users who can meaningfully question an output to the professionals building the assurance ecosystem around these systems.” Dawson similarly contends that “the field must invest in automated evaluation frameworks and agentic assurance pipelines that extend, not replace, human judgment at scale.”</p>
<p><strong>Oversight and accountability remain paramount.</strong> In addition to relying on a combination of human and machine verification, our experts believe that oversight and accountability remain paramount to any responsible AI strategy. Chesterman argues that “verification should not be understood too narrowly.” He adds, “In some settings, human experts will directly validate outputs; in others, they will design tests, audit workflows, set thresholds for acceptable use, or decide when AI should not be relied upon at all.” In other words, as Chow puts it, “Human expertise is a design-time necessity, not just a run-time check.” Former DBS Bank chief analytics officer Sameer Gupta agrees that “governance and oversight should be embedded into every stage of an AI solution’s design and deployment rather than treated as a final checkpoint on the outputs alone.”</p>
<p>Many experts argue that human verification of AI outputs is essential not as an end but as a core part of meaningful oversight and accountability over AI systems. IAG chief AI scientist Ben Dias explains that as “a technological construct … AI systems lack the agency to be held legally or ethically accountable for the consequences of their actions.” For this reason, Dias says, “every AI solution needs an accountable human who is responsible for ensuring that the system’s outputs are properly understood and verified.” ADP’s chief product owner Naomi Lariviere agrees, saying, “AI systems can generate recommendations and automate decisions, but they can’t carry accountability.” Mike Linksvayer, vice president of developer policy at GitHub, argues that “as systems become more agentic, the limiting factor is no longer the ability to check individual outputs but the ability to exercise informed judgment over goals, constraints, escalation paths, and responsibility.”</p>
<h3>Recommendations</h3>
<p>If the limiting factor is the ability to exercise informed judgment, not just check AI outputs, then organizations need to invest in that judgment deliberately. We offer the following recommendations for organizations looking to cultivate human expertise that scales with their AI ambitions:</p>
<p><strong>1. Verify designs, not just outputs.</strong> A narrow view of human verification that only addresses system outputs is insufficient. Human verification, in the broader sense of human oversight, should be embedded at every stage of an AI solution’s design and deployment, not treated as a final checkpoint. This means human experts setting thresholds, designing tests, auditing workflows, and deciding when AI should not be relied on, not just reviewing individual outputs after the fact.</p>
<p><strong>2. Don’t rely on human verification alone.</strong> Because human verification of every AI output doesn’t scale, organizations committed to responsible oversight should invest in a variety of approaches that use automated tools to extend or augment human judgment. Human verification should be emphasized where human judgment is essential, including edge cases, high-stakes decisions, and novel contexts, while automated tools can handle the remaining volume of tasks. The goal is a combined system that extends human judgment at scale rather than either replacing or being bottlenecked by it.</p>
<p><strong>3. Invest in human expertise.</strong> Organizations should invest in human expertise to verify the outputs of AI systems and provide ongoing oversight over how systems are designed and whether they are working as intended. In fact, as technical capabilities grow, the need for human expertise only increases. If junior staff never develop independent judgment and senior employees’ expertise atrophies because they are not part of this process, the organization risks losing its ability to govern AI systems. This may mean maintaining human involvement in processes or tasks that build expertise and judgment, even when they could be automated with AI. In these cases, the efficiency gains that are forgone should be viewed as strategic investments in the future.</p>
<p><strong>4. Verify what is learned, not just what is produced.</strong> Organizations tend to focus verification on whether an AI system’s outputs are correct, but they also need to scrutinize the lessons they draw from AI deployments and outcomes. When teams interpret pilot results, measure performance gains, or decide what worked and what didn’t, those conclusions become the foundation for future investments, scaling decisions, and organizational narratives about AI’s value. If those lessons are flawed (the wrong metrics were tracked, edge cases were ignored, or success was declared prematurely), organizations risk perpetuating bad assumptions at increasing scale. Human experts should be involved not only in verifying what AI systems produce but in critically evaluating what the organization believes it has learned from deploying them.</p>
<p><strong>5. Treat verification as a strategic imperative, not just a responsibility practice.</strong> According to a global executive survey conducted in 2025 by <cite>MIT Sloan Management Review</cite> and BCG, 86% of top management teams consider AI to be a significant part of their strategic priorities. When AI is central to how an organization competes, grows, and makes decisions, the quality of human oversight directly affects strategic outcomes, not just ethical ones. Flawed outputs, unchecked deployments, and poorly drawn lessons don’t just create responsibility risks; they lead to misallocated resources, failed initiatives, eroded competitive position, and lost customer trust. The preceding recommendations — verifying designs, combining human and automated oversight, investing in expertise, and scrutinizing what is learned — are not merely aspirational additions to a responsible AI program. They are preconditions for effective strategic management.</p>
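<p>The division of labor described in recommendation 2 can be sketched as a simple routing rule. The thresholds and fields below are hypothetical; a real system would derive them from domain-specific risk assessments.</p>

```python
from dataclasses import dataclass

@dataclass
class Output:
    confidence: float    # model's self-reported confidence, 0 to 1
    high_stakes: bool    # e.g., affects safety, money, or legal standing
    novel_context: bool  # input falls outside familiar territory

def route(o: Output, min_confidence: float = 0.9) -> str:
    """Send edge cases, high-stakes calls, and low-confidence outputs to a human."""
    if o.high_stakes or o.novel_context or o.confidence < min_confidence:
        return "human"
    return "automated"

print(route(Output(confidence=0.95, high_stakes=False, novel_context=False)))
print(route(Output(confidence=0.95, high_stakes=True, novel_context=False)))
print(route(Output(confidence=0.80, high_stakes=False, novel_context=False)))
```

<p>The point of such a rule is not the specific thresholds but the principle: automated handling carries the volume, while human judgment is reserved for the cases where it matters most.</p>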
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/beyond-verification-what-responsible-ai-really-demands-of-human-experts/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>How Leaders Can Move Past Personal Obstacles</title>
				<link>https://sloanreview.mit.edu/article/how-leaders-can-move-past-personal-obstacles/</link>
				<comments>https://sloanreview.mit.edu/article/how-leaders-can-move-past-personal-obstacles/#respond</comments>
				<pubDate>Mon, 11 May 2026 11:00:37 +0000</pubDate>
				<dc:creator><![CDATA[Katherine W. Isaacs and Richard C. Schwartz. <p>Katherine W. Isaacs is a senior lecturer in work and organization studies at the MIT Sloan School of Management. Richard C. Schwartz is the creator of the Internal Family Systems psychotherapeutic model and founder of the IFS Institute. He is also a teaching associate in the Department of Psychiatry at Cambridge Health Alliance, which is affiliated with the Department of Psychiatry at Harvard Medical School.</p>
]]></dc:creator>

						<category><![CDATA[Cognitive Styles]]></category>
		<category><![CDATA[Human Behavior]]></category>
		<category><![CDATA[Human Psychology]]></category>
		<category><![CDATA[Leadership Development]]></category>
		<category><![CDATA[Managerial Psychology]]></category>
		<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leadership Skills]]></category>

				<description><![CDATA[Brian Stauffer/theispot.com Imagine you’re Gabrielle, a senior leader at a fast-growing tech company. Two of your top performers are also your biggest headaches, and they’re making everyone miserable — most of all, you. One is technically brilliant but undermines colleagues’ ideas with sly sarcasm and strategic inaction. The other is a creative powerhouse but belittles [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/05/2026SUMMER_Isaacs-1290x860-1.jpg" alt="" class="wp-image-127105"/><figcaption>
<p class="attribution">Brian Stauffer/theispot.com</p>
</figcaption></figure>
<p><span class="smr-leadin">Imagine you’re Gabrielle</span>, a senior leader at a fast-growing tech company. Two of your top performers are also your biggest headaches, and they’re making everyone miserable — most of all, you.</p>
<p>One is technically brilliant but undermines colleagues’ ideas with sly sarcasm and strategic inaction. The other is a creative powerhouse but belittles junior teammates with open disdain. Managing these two “brilliant jerks” is hard enough on its own. But even worse is their bitter rivalry, which is poisoning the team. You’ve tried coaching, feedback, and even professional mediation, but nothing has worked. Morale is plummeting and so are your chances of hitting this year’s goals.</p>
<p>You feel stuck. One part of you — the people pleaser — wants to preserve harmony and make sure everyone feels respected and included. It dislikes conflict and avoids confrontation. Another part — the performance driver — demands results and wants to make good on the promises you’ve made to your boss and your customers. It knows that if you don’t fix this problem now, everything you’ve worked for is at risk.</p>
<p>These competing internal voices are stuck in an exhausting stalemate, and they’re blocking you from taking action. What’s happening here is not just a tough management call. It’s a conflict between different parts of yourself, each with its own voice, agenda, and intentions.</p>
<h3>Human Development and the Multiple Mind, in Brief</h3>
<p>The poet Walt Whitman famously wrote, “Do I contradict myself? / Very well then I contradict myself, / (I am large, I contain multitudes.)” He recognized that our minds are not monolithic. Just as our bodies function as complex living systems with many organs playing a role in keeping us healthy and adaptive, our minds are composed of multiple, interdependent conscious and unconscious parts that function in a dynamic relationship with one another. (For the purposes of this article, we use the term <em>mind</em>, or <em>psyche</em>, to refer to the totality of all our conscious and unconscious processes — including perceiving, thinking, feeling, remembering, imagining, motivating, and willing ourselves to action.)</p>
<p>The idea of a “multiple mind” has long shaped modern psychology. Sigmund Freud theorized the psyche as comprising the id, ego, and superego; Carl Jung, one of his students, later widened the theory of the psyche to include a personal unconscious that holds forgotten memories and lived experience, and beneath it a collective unconscious — a deep, shared reservoir of archetypes and symbols that surface in myths, dreams, and stories across cultures. He introduced the idea of a <em>persona</em>, the social mask shaped to fit expectations, and the <em>shadow</em>, where we exile disowned qualities that continue to inform our thoughts and actions, often without our awareness.</p>
<p>At the center of Jung’s model is what he termed the <em>self</em>, the central organizing energy of the psyche, which drives the lifelong process of psychological growth. This developmental process, which Jung termed <em>individuation</em>, involves integrating the psyche’s layers into a unified, harmonious, and flexible whole. Much of that work, in turn, involves locating and retrieving what Jungian analyst and author Robert A. Johnson calls “the gold in the shadow” within our personal histories and our shared human experience.</p>
<p>Insights from psychotherapy have profoundly influenced how we think about leadership and organizational culture. Emotional intelligence and psychological safety both began as clinical concepts before psychologist Daniel Goleman and Harvard professor Amy Edmondson, respectively, put them on the map as essential to leadership. The idea of the mind as comprising multiple parts with an integrative self at the center is not uncommon in management and leadership theory. MIT Sloan’s Deborah Ancona and leadership expert and executive coach Dennis Perkins have shown that “ghosts” from past childhood experiences influence how executives lead.<a id="reflink1" class="reflink" href="#ref1">1</a> (Ancona’s book on “family ghosts” at work will be published next year.) London Business School professor Herminia Ibarra writes in her book <cite>Working Identity</cite> that “we are not one true self but many selves and that those identities exist not only in the past and present but also, and most importantly, in the future.”</p>
<p>One of us (Kate) teaches leadership courses at MIT Sloan that cover “multiple selves” in relation to the core self. She uses many of the tools described herein to help students and executives bring unconscious patterns and habits into conscious awareness, where they can take charge of their present behaviors and future development as leaders.</p>
<h3>The Internal Family Systems Methodology</h3>
<p>Originally developed in the 1980s as a clinical therapy model by one of us (Richard), Internal Family Systems (IFS) offers leaders a simple framework for accessing and working with their inner parts to achieve greater functionality and flourishing, personally and professionally. The IFS approach draws on the long arc of psychological theory outlined above. In response to IFS’s growing popularity among licensed therapists, some members of the psychotherapy community are calling for more evidence to support the safety and efficacy of the approach. A 2025 scoping review of 27 studies on IFS concluded that it’s a promising therapeutic model for addressing chronic pain, depression, and post-traumatic stress disorder, and for cultivating self-compassion and self-forgiveness.<a id="reflink2" class="reflink" href="#ref2">2</a></p>
<p>We’ve used IFS methods successfully to help senior executives and startup founders manage inner conflicts that undermine their decision-making. In each case, IFS helped them find clarity amid a complex internal landscape of competing voices, each carrying its own wisdom and warnings. Applying a few core IFS principles can help individuals gain clarity on their purpose and priorities, and to stay grounded under pressure.</p>
<p>IFS works with an inner family of parts that sometimes cooperate and sometimes clash. Its classification of parts into <em>exiles</em>, <em>managers</em>, and <em>firefighters</em> has parallels with Freud’s and Jung’s models. Like Jungian psychology, IFS incorporates the body directly into the process of therapeutic inquiry, embracing what neuroscientist Antonio Damasio has called the “felt experience” of being alive. The six-step IFS process described below illustrates how this works.</p>
<p>Richard holds that our inner parts function as distinct subpersonalities, influencing how we think, feel, and act. One part may want to avoid conflict, another may want to push hard for results, and another may prefer to simply escape. Leaders often experience this as inner tugs-of-war — such as Gabrielle’s people pleaser versus her performance driver. Recognizing that we all have these parts and that they often disagree is the first step toward deeper self-understanding in leadership.</p>
<p>Perhaps the most important principle behind IFS is the idea that all parts are trying to <em>help</em> us — even the ones whose behaviors we dislike. No part is inherently “bad.” A perfectionistic part may drive us to work long hours, a critical part may lash out to protect us, or a conflict-avoiding part may shut us down before our nervous systems get overwhelmed. Even when a part of us behaves in unhelpful or extreme ways, it’s doing its best with the tools it has.</p>
<p>This perspective can feel counterintuitive or even controversial. What about destructive parts — an angry impulse that erupts in rage, or a cynical inner voice that sabotages collaboration? IFS suggests that these parts are trying to serve a purpose — whether protecting against perceived threats, fulfilling old survival vows, or motivating us through fear. The behavior may be unhelpful, but the underlying drive is not malicious. Based on decades of clinical experience, Richard has found that once these parts are truly heard, appreciated, and relieved of their burdens, they can completely transform. Inner critics can become wise advisers, workaholics can turn into reasonable motivators, and rageful parts can set healthy boundaries.</p>
<p>Leaders don’t have to (and should not) indulge in every impulse that arises from within, but they can learn to listen and respect their parts’ underlying positive intentions. A common adage in IFS is “All parts are welcome; all behaviors are not.” We must be able to draw firm boundaries around harmful behaviors while continuing to explore our underlying motivations.</p>
<p>For leaders, this insight is powerful. By shifting from judgment to curiosity about their own inner voices, they not only reduce inner conflict but also build the muscle to extend that same compassion and discernment outward — to colleagues, teams, and organizations.</p>
<h3>The Crucial Role of the Self</h3>
<p>Like Jung’s theory of mind, IFS contains the idea of a core organizing force in the psyche called the self. The self is like an orchestra conductor — a calm and centered presence guiding our parts to work together harmoniously and developing each one’s potential so that together they can express the best of who we are.</p>
<p>A vivid example comes from Renee Zaugg, who has held executive roles at Aetna and CVS Health and served as vice president and CIO at Otis. Zaugg was in her late 20s when she worked in Aetna data centers as a junior computer operator, usually covering uneventful Sunday shifts and monitoring changes made by vendors.</p>
<p>But one Sunday, everything changed. The entire data center ground to a halt when a vendor accidentally hit the emergency power-off switch. Unable to reach a manager, Zaugg realized that she would have to lead — and as a junior staffer and the only woman present in what was then a male-dominated work culture, she had to summon courage. “I climbed up on a table (since I was small and needed to command attention), and I started directing everyone to take specific responsibilities,” she said. “One person kept calling higher-ups, others were assigned to restart the systems, and everyone was told to report back to me every 30 minutes.”</p>
<p>That decisive moment put Zaugg on the map at Aetna. Looking back, she realized that what allowed her to step up wasn’t just quick thinking — it was her ability to center herself first. In IFS terms, she accessed self-energy: a calm, clear presence that allowed her to see the whole system and act from strength. That moment revealed her intuitive gift for reading people, not just through words but through eye contact, body language, and subtle cues. That awareness shaped how she led from then on, guiding her career in ways she never expected and propelling her to C-suite roles and board leadership despite her lack of a college degree.</p>
<p>IFS describes self-led leadership as having eight recognizable qualities (known as the 8 C’s): compassion, curiosity, clarity, creativity, calmness, confidence, courage, and connectedness. When these qualities are present, it signals that a person is leading from the self, not from a reactive part. In high-pressure moments, a leader grounded in self can hold contradictory evidence or objectives, resist panic, and take decisive action rather than being directed by anxious or controlling parts.</p>
<h3>A Guide to Using IFS in Leadership</h3>
<p>Understanding the three core IFS principles — that our minds comprise multiple parts, that no parts are bad, and that we can develop access to our wiser self to guide them — gives leaders a new perspective on their inner conflicts. It provides a starting point for learning from their parts and responding to challenges with steadier action.</p>
<p>To put these principles into practice, IFS practitioners guide individuals through a process to identify the parts of themselves that are active in each situation, approach them with curiosity and compassion, and help shift them into healthier roles. This process unfolds in two stages: first, becoming aware of the part; and, second, forming a new relationship with the part. We’ll look at both stages and walk through the three steps involved in each one.</p>
<p><strong>Stage 1: Becoming aware of the part.</strong> The first stage of the IFS process is simple: becoming aware of the parts within our multipart mind and moving them from our unconscious into conscious awareness. Instead of being unconsciously controlled by a part — or identifying so fully with it that we think, “This is me” — we step back and notice it. Awareness creates space to observe, engage, and ultimately choose how we want to respond.</p>
<p>Developmental psychologist Robert Kegan has described a similar shift: moving from a state where our parts seem like who we are, unconsciously driving our behavior outside of our awareness, to a conscious state where we can see and understand them (without being overtaken by them). From there, we can help them take on more productive roles within our inner system.</p>
<p>This stage involves finding the reactive part, which means locating it, naming it, and noticing how it feels in the body. What sensations does it create? What emotions does it evoke? How does it respond when you put your attention on it? IFS emphasizes this kind of body awareness as a vital entry point because the body senses and transmits information to the brain in a bidirectional communication system. When we access this often-neglected somatic intelligence, we gain a richer source of insights about our thoughts, emotions, and motivations that can help guide our developmental journeys.</p>
<p>While this idea may sound far-fetched, it has gained mainstream acceptance, with research exploding in the study of what is called <em>interoception</em>, or how the body-brain system senses and adjusts our internal states. Scientists including Damasio, Stephen Liberles, Wen G. Chen, and Nobel Prize winner Ardem Patapoutian are exploring this domain.</p>
<p>In leadership settings, common parts that emerge include controllers, diplomats, busy doers, planners, analyzers, caregivers, people pleasers, caretakers, critics, and organizers. At their best, these parts help leaders function effectively. But when they take on extreme roles, they can morph into perfectionists, workaholics, authoritarians, stuck opposers, procrastinators, avoiders, obsessives, or conflict avoiders.</p>
<p>The work of gaining awareness of such parts naturally begins with Step 1, finding them. Daniel, a professional man whom Kate coached, was struggling to choose among several different work opportunities. As he discussed his dilemma, Kate noticed that two distinct parts of him seemed to be speaking, and she asked whether those parts had names. Daniel said yes — and decided to call them the Idealist and the Entrepreneur. They had been in a quiet tug-of-war for much of his life, and now they were keeping him stuck.</p>
<p>When being guided through an IFS session, most people can easily recognize the different voices or impulses inside themselves. It’s something we all do naturally when we say, “Part of me wants this, but another feels unsure.”</p>
<p>Step 2, focusing on the part, recognizes that parts are embodied. This means that they are represented in the body and often connect to somatic (embodied) feelings that become useful sources of data and intelligence to help people make sense of their experiences. Following the somatic arc of the IFS method, Kate asked Daniel whether he could find these two parts in or around his body. Then she asked him to follow Step 3 and flesh them out.</p>
<p>Daniel’s Idealist came forward first: It was in his heart region, he suggested, and full of energy. “I like it,” Daniel said. “And I like what it triggers in other people. It feels good inside me.” This part was animated by purpose and the desire to inspire. It brought him a lightness, a hope, and a clear sense of meaning.</p>
<p>But when they turned toward the Entrepreneur, the energy shifted. Daniel felt that part as a heavy weight in his shoulders and back. He spoke of scars, of scarcity, and of memories shaped by his father, who had been an entrepreneur too. It was a feast-or-famine existence. When things went well financially, there was abundance around him, but in hard times, the financial flow dried up — and, eventually, heartbreak and trauma followed. “You have to be careful,” the Entrepreneur warned. “You have to be cautious.”</p>
<p><strong>Stage 2: Forming a new relationship with the part.</strong> Early in his career, Richard found that when he judged, suppressed, or tried to overpower his clients’ inner parts, the parts would resist and reemerge — often stronger than before. The same is true for many of us: When we battle an inner voice, it tends to leak out somewhere else in our life, often returning with greater force. The alternative is counterintuitive but powerful: Rather than fighting against a part, we can get curious about its story. This curiosity creates the conditions for transformation, allowing the part to step into a healthier role that fits the present. Remember: There are no bad parts.</p>
<p>The key question to ask at Step 4 is “How do I feel toward this part?” If the answer includes qualities like curiosity, compassion, calmness, or even gratitude, it signals that you are relating from the core of your self. That mindset opens the door to understanding the part more fully. This is the third principle of IFS, and its ultimate goal: to access that central self, enable it to start relating to the inner system of parts, and step into a leadership role like a proactive orchestra conductor who creates coordinated internal harmony.</p>
<p>But sometimes it isn’t easy. Many people feel angry, afraid, or judgmental toward certain parts when they show up. Kate recalls getting in touch with her own inner perfectionist; when her therapist asked how she felt about the part, she blurted out, “I hate it! It’s driving me crazy. I want to get rid of it!” That reaction was itself another part speaking: a frustrated, protective voice that needed acknowledgment before she could relate to the perfectionist with genuine curiosity.</p>
<p>The same dynamic appeared in Daniel’s coaching session with Kate. He felt warmly toward his Idealist part, eager to explore its sense of hope and meaning. But when he turned toward the Entrepreneur, Daniel hesitated. He resented its warning messages, which he felt held him back and interfered with his ability to sustain a positive, ambitious vision for his future vocation. However, he decided to take Kate’s suggestion to get to know that part a little better and hear its fears and stories. (It’s important that individuals, not outside helpers, set the pace and depth of their inner journey.)</p>
<p>If Daniel had resisted, Kate would have gently encouraged him to explore the parts that were resistant and hear their fears and their stories. The IFS process isn’t about pushing people where the therapist or coach thinks they should go but about listening closely and supporting the client’s inner exploration at a pace, depth, and timing that the client ultimately governs. If ever people get stuck or confused, a simple question can often unlock the situation: “Just ask <em>inside</em>.” People have wisdom about their own inner system that sometimes needs only a gentle invitation to emerge.</p>
<p>Step 5, befriending, means hearing the parts’ full stories, offering them appreciation for the role they have played, and asking what support they need. For example, you might ask a perfectionist part why it’s so afraid of making mistakes. It might reveal a memory of being humiliated in grade school for not paying attention, or answering questions incorrectly. Over the years, this part may have worked tirelessly to protect you from failure, helping you to look competent so you could keep your job. It may not want to give up its job in your inner system, fearing that delegating or relaxing its vigilance will lead to disaster. Reassuring the part that you aren’t trying to eliminate it but rather to help it find a more productive role that it likes even better can ease its concerns.</p>
<p>In Daniel’s session, he connected with each of his parts to hear more of their stories and then moved to Step 6 to hear their fears and how they wanted to help guide his future choices. In so doing, something new came forward: a new vision that blended the best of both of them.</p>
<p>The Idealist was afraid it might not be able to have a voice if it were squashed by financial worries. It wanted to guide Daniel to pay attention to exciting new opportunities that aligned with his deepest purpose. The Entrepreneur feared financial catastrophe. It wanted to keep him focused on his current work as he explored new opportunities to create a stable financial future for his family. By the end of the session, Daniel had a clear sense of an opportunity he wanted to pursue that honored both inspiration and pragmatism. His session was a demonstration of what can happen when the core self of a person’s psyche can relate to subparts of the psyche with compassion and curiosity and integrate those voices. This is how we make space for the wise and intuitive self that can guide us forward. At this stage in the process, many people begin to spontaneously see new ways to relate to their inner parts. Sometimes, though, parts carry long-held burdens from the past that block the individual’s access to insight and forward movement in their lives.<a id="reflink3" class="reflink" href="#ref3">3</a> Those circumstances may require deeper work that should be guided by a trained therapist. We would always advise consulting with a licensed mental health professional in cases where an individual’s safety is at risk or there is complex trauma, mental or physical instability, or illness that needs professional attention.</p>
<p>That said, while we’ve recounted some facilitated engagements with IFS here, many individuals don’t need a therapist to work with IFS ideas. Much of the value of IFS can be gained by employing its three core principles: recognizing that we have many parts operating inside us, that there are no bad parts, and that we can readily gain access to the wise self at the center of our psyche.<a id="reflink4" class="reflink" href="#ref4">4</a> The bottom line of IFS or any therapeutic model is that we should each approach our inner world not with judgment but with curiosity and compassion. And if self-compassion isn’t available at first, start with curiosity. Pull on that thread and see where it leads you.</p>
<p>IFS offers a simple, easily accessible framework that can guide us in getting to know our inner worlds and establishing a more coherent relationship among our multiple inner voices. This can enable us to lead more consciously so that we can create the impact we intend. By accessing our core self — the central organizing energy of the psyche — we create trust with our parts, release them from outdated roles, and unlock the wisdom they hold. For leaders, this means responding to challenges with greater clarity, steadiness, and compassion. It means creating the conditions where both they and those they lead can thrive.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/how-leaders-can-move-past-personal-obstacles/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Why Businesses Should Experiment With Quantum Computing Now</title>
				<link>https://sloanreview.mit.edu/article/why-businesses-should-experiment-with-quantum-computing-now/</link>
				<comments>https://sloanreview.mit.edu/article/why-businesses-should-experiment-with-quantum-computing-now/#respond</comments>
				<pubDate>Thu, 07 May 2026 11:00:58 +0000</pubDate>
				<dc:creator><![CDATA[Avi Goldfarb and Florenta Teodoridis. <p>Avi Goldfarb is the Rotman Chair in Artificial Intelligence and Healthcare and professor of marketing at the Rotman School of Management, University of Toronto. He is also chief data scientist at Creative Destruction Lab-Toronto, a research associate at the National Bureau of Economic Research, a distinguished fellow at the Hebrew University of Jerusalem, and a research lead at the Acceleration Consortium. Florenta Teodoridis is the Jorge Paulo and Susanna Lemann Chair in Entrepreneurship and an associate professor of management and organization at the University of Southern California Marshall School of Business. She is also a mentor in the Quantum Stream at Creative Destruction Lab.</p>
]]></dc:creator>

						<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[Quantum Computing]]></category>
		<category><![CDATA[Technology Innovation]]></category>
		<category><![CDATA[Technology Investment]]></category>
		<category><![CDATA[Managing Technology]]></category>
		<category><![CDATA[Strategy]]></category>
		<category><![CDATA[Technology Innovation Strategy]]></category>

				<description><![CDATA[Matt Chinworth/theispot.com Executives tracking the latest news about quantum computing might conclude that with technical milestones still to be reached, the prudent approach is to watch and wait before investing. But that overlooks what other, bolder companies recognize: Quantum computing is an enabling technology, and user organizations have a critical role to play in shaping [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/2026SUMMER_Goldfarb-1290x860-1.jpg" alt="" class="wp-image-126958" /><figcaption>
<p class="attribution">Matt Chinworth/theispot.com</p>
</figcaption></figure>
<p><span class="smr-leadin">Executives tracking the latest news</span> about quantum computing might conclude that with technical milestones still to be reached, the prudent approach is to watch and wait before investing. But that overlooks what other, bolder companies recognize: Quantum computing is an enabling technology, and user organizations have a critical role to play in shaping how it will create value.</p>
<p>Headlines about new chips, qubit (quantum bit) counts, and error correction suggest that the key question is whether quantum computing has reached the point where it can outperform classical computers on practical problems. This framing leads to a familiar strategic dilemma: Wait until the technology is clearly “ready,” or risk getting involved too early in something that may not pay off. Qubit counts and error rates are appropriate engineering milestones, but they are a poor guide to how and when most companies should engage. For technologies like quantum computing, economic value does not arrive all at once, when a technical threshold is crossed. It emerges gradually, through experimentation, complementary innovation, and organizational learning, often well before the technology is fully mature.</p>
<p>Like other enabling technologies, quantum computing will not generate much economic value on its own. Instead, its economic value will emerge through repeated cycles of co-invention between the technology-producing sector and complementary innovations developed by users in application settings. Technologies with the highest enabling potential are referred to as general-purpose technologies.</p>
<p>Electricity is one such general-purpose technology. Its development and diffusion depended on continuous co-invention between producers and users: Early innovations in power generation prompted downstream experimentation in lighting, motors, appliances, and factory layouts, and those downstream innovations in turn reshaped what kinds of power generation and distribution systems were needed upstream. Classical computers followed a similar pattern: Progress in hardware depended on complementary innovations in software, data storage, and organizational processes, while advances in those complements fed back into hardware design by advancing performance requirements.</p>
<p>Quantum computing fits this same pattern. Its economic impact will not come suddenly, after passing a particular technical threshold, nor will it diffuse as a plug-and-play tool. Instead, value will be created through feedback loops in which user experimentation reveals near-term economic opportunities, what levels of performance matter, and what processes and skills are required to generate value. The practical challenge for company leadership is therefore not predicting which quantum computing vendors will reach certain technical milestones; rather, it is developing the organizational capability to decide when and how to participate in these co-invention cycles, by translating business problems into quantum computing use cases and strategically redesigning processes as the technology evolves.</p>
<h3>The Managerial Dilemma</h3>
<p>When viewed as an enabling technology, quantum computing presents managers with a distinct set of challenges. Value creation depends on active engagement from downstream users, because experimentation in application settings clarifies which technical progress should be prioritized. At the same time, companies’ incentives to engage are weak because the path to capturing value from such experimentation is uncertain.</p>
<p>For enabling technologies, the most significant economic benefits rarely appear in the short run or in the first applications explored. They accumulate over time, as learning from early experiments informs more meaningful applications down the line. For example, significant returns from electricity accrued only after several cycles of co-invention, with the emergence of electric motors in manufacturing plants and household appliances. A similar pattern played out with the internet. Early uses focused on basic connectivity and information sharing, but the largest economic gains emerged later, after complementary innovations in software, data, and organizational processes enabled new business models, such as e-commerce, digital platforms, and cloud-based services.</p>
<p>This makes early value capture hard to assess. Companies that develop early use cases for quantum computing may not be able to secure long-run advantages, considering that the most valuable use cases likely have not yet been discovered.<a id="reflink1" class="reflink" href="#ref1">1</a> This could raise concerns that the value generated through early experimentation may be captured by others. For example, a great deal of the value created by internet technologies was captured by companies such as Google and Meta.</p>
<p>When companies hesitate to engage in early experimentation, the result is persistent uncertainty about what customers value. Developers already face uncertainty about how to achieve a technically viable system, but what counts as viable depends on which applications potential customers prioritize and what performance thresholds matter in practice, especially in the near term. When downstream companies do not engage, upstream producers lack clear signals about what to optimize for, potentially slowing technical progress. When the technology remains immature, downstream companies struggle to specify concrete use cases that would justify experimentation and near-term investment. The result is a catch-22: Near-term value-capture uncertainty sustains demand uncertainty and discourages co-invention, which in turn makes it harder to resolve technological uncertainty. Companies want proof before experimenting, but proof often arrives only because companies experiment.</p>
<p>But near-term uncertainty about capturing value is not a reason to delay. Because quantum computing is an enabling technology, co-invention processes shape how the technology develops. The feedback loops between producers and users imply that who engages and what problems they choose influence which applications become technically feasible early on. Engaging early gives companies opportunities to influence which performance dimensions are prioritized and to identify future complementarities. For example, financial services companies that engage early will shape the direction of innovation in quantum computers and related software toward their needs. In this sense, managerial decisions about engagement are not just responses to technological progress but also inputs into the direction that progress takes. They are a mechanism for discovering how to adapt a company’s assets and strategy to applications that will later generate higher value.</p>
<p>Moreover, organizations that benefit most from enabling technologies are those that redesign their processes to take advantage of what technology makes possible, even though such actions are the most costly, slow, and difficult to evaluate in advance.<a id="reflink2" class="reflink" href="#ref2">2</a> Early uses often generate only incremental value. The largest gains, which typically come later, are generally enabled by companies that envision new processes.</p>
<h3>Experimentation in Quantum Computing</h3>
<p>What will arise that will allow companies to capture significant value from quantum computing is still an open question. Today, companies need to experiment and learn. Such experimentation is already happening.<a id="reflink3" class="reflink" href="#ref3">3</a> One early example is Lockheed Martin’s decision to move from watching quantum progress to engaging in hands-on experimentation. In 2011, after having spent about a year evaluating the technology, Lockheed entered a multiyear agreement for a D-Wave One system, a 128-qubit quantum annealing machine. Reporting at the time valued the deal at roughly $10 million, including support and maintenance. Rather than treating the D-Wave One as a turnkey product, Lockheed helped establish the USC-Lockheed Martin Quantum Computing Center at the University of Southern California’s Information Sciences Institute, giving researchers and engineers sustained access to the system and a setting designed for iterative learning. The intent was not that a single installation would transform operations overnight. It was to build familiarity with what the machine could and could not do, to test problem formulations, and to identify where quantum approaches might eventually matter for practical problems.</p>
<p>IBM pursued a different kind of experiment — one designed to scale learning beyond a single company. On May 4, 2016, it launched IBM Quantum Experience, a cloud service that offered users access to a 5-qubit quantum processor and a matching simulator to run their own experiments. Uptake was immediate. Roughly 7,000 users registered within the first week, and over 17,000 more registered the following week.<a id="reflink4" class="reflink" href="#ref4">4</a> Over time, the user base grew into the hundreds of thousands. This 5-qubit device was not commercially useful; it mattered because cloud access enabled broad engagement with a prototype, which accelerated downstream experimentation. The IBM cloud service launch was followed by several other similar services, such as Alibaba Cloud’s quantum computing platform, Rigetti’s Forest, and D-Wave’s Leap in 2018; Xanadu Cloud, the Honeywell System Model H1, and Amazon Braket became generally available in 2020.<a id="reflink5" class="reflink" href="#ref5">5</a></p>
<p>Companies across industries have been using access to quantum computers to explore concrete problems before the technology has matured. Examples include a partnership between Airbus and 4colors Research on quantum optimization; the Port of Los Angeles and Fenix Marine Services’ collaboration with D-Wave on cargo terminal operations; Volkswagen’s work with D-Wave on traffic optimization; and Telefónica Germany’s partnership with Amazon to explore network optimization. Across these cases, the common pattern is not immediate operational transformation but experimentation that helps companies learn which problem formulations, workflows, and benchmarks matter.</p>
<p>These exploratory efforts can also lead to innovation that has immediate practical value for established systems. For example, investments in quantum computing have produced quantum-inspired innovations, such as algorithms that can be implemented on classical computing hardware.<a id="reflink6" class="reflink" href="#ref6">6</a> Other innovations include methods that improve the efficiency of recommendation systems, optimization routines, and materials discovery.<a id="reflink7" class="reflink" href="#ref7">7</a> Fujitsu’s Digital Annealer is an example of quantum-inspired optimization methods being implemented on classical hardware. It has been used in areas such as distribution and warehouse operations to improve routing and part placement, in financial services to support portfolio optimization under complex constraints, and in manufacturing and logistics to enhance production planning and operational efficiency. Several other companies, including IBM and Google, have reported similar classical computing advances that emerged directly from their quantum computing efforts.</p>
<h3>Elements of a Quantum Strategy</h3>
<p>Executives planning a quantum strategy should treat quantum computing as an enabling technology. Their objective should not be to time a technological breakthrough but to enable co-invention feedback loops and to reduce both technological and demand uncertainty through deliberate engagement. This involves building the organizational capacity to turn learning into action when a technological breakthrough occurs that makes the company’s use cases possible. The focus should not be on when to adopt. Instead, it should be on learning, experimentation, and preparation for process changes over time. To that end, leaders should take the following steps.</p>
<p><strong>1. Develop boundary spanners linking quantum technology to company-specific problems. </strong>For companies outside the tech industry, keeping track of emerging technologies is always challenging. Such companies rarely employ experts in the underlying science, and so they must rely on outside signals to assess progress. This challenge is particularly acute with quantum computing. The technology is complex and unintuitive, and public narratives often oscillate between hype and skepticism. As a result, companies risk overestimating their readiness to work with quantum, underestimating the technology’s eventual impact, or missing specific applications relevant for their industry that might not be highlighted in general-interest news coverage.</p>
<p>Companies need to ensure that they have access to people who understand both the business and what the technology can do. Such boundary-spanning roles, sometimes filled by generalists who can connect insights across fields, are critical for translating technological progress into company-specific questions.<a id="reflink8" class="reflink" href="#ref8">8</a> For example: Which problems might become tractable? Which constraints matter most? What co-invention is needed? What kinds of performance improvements would justify investment? The employees assigned to this role can stay abreast of developments by attending relevant events, connecting with quantum technology organizations, and plugging into ecosystem initiatives.</p>
<p>The goal of these initiatives is to connect quantum expertise with industry to shape the direction of quantum computing co-invention efforts in ways that will benefit user companies. For example, in 2025, the state of Maryland and the University of Maryland announced the $1 billion Capital of Quantum initiative, with IonQ positioned as an anchor partner. In the same ecosystem, Microsoft subsequently announced plans for a quantum research center in the University of Maryland’s Discovery District. In Chicago, the Chicago Quantum Exchange connects major universities and national labs with industry partners, while the Illinois Quantum and Microelectronics Park is being built as a large-scale public-private site intended to host quantum companies and shared facilities. Elevate Quantum has been designated a regional tech hub and is developing the Quantum Commons as a 70-acre campus in Colorado intended to connect startups, industry, academia, and shared infrastructure. In Calgary, the University of Calgary’s Quantum City partnership (with the Government of Alberta and an industry partner) was created to build infrastructure, talent programs, and adoption pathways, including a dedicated collaborative hub to connect R&amp;D with implementation.</p>
<p><strong>2. Find near-term opportunities to anchor learning. </strong>A second strategic priority is to identify opportunities that make experimentation feasible. Some of these opportunities involve narrowly defined problems where even modest performance improvements would be valuable and integration into existing processes would be manageable. Other opportunities arise from quantum-inspired methods that run on classical hardware but emerge directly from engagement with quantum computing. These near-term opportunities rarely represent the largest long-run payoff, but they support important organizational learning.</p>
<p>For example, recent research identified ways in which companies may derive economic benefits from quantum approaches even before the technology delivers clear technical superiority over classical computing.<a id="reflink9" class="reflink" href="#ref9">9</a> The key is to observe that the relevant comparison is not whether quantum computers can solve problems that classical computers cannot but whether they change the cost, speed, or resource intensity of solving near-term economically meaningful problems. When quantum-based or quantum-inspired methods alter these trade-offs, companies can justify experimentation and investment based on near-term economic value.</p>
<p>Focusing on problems that matter today makes experimentation easier to justify and easier to organize. It creates incentives for teams to engage, provides concrete benchmarks for evaluating progress, and helps companies learn how quantum approaches differ from classical ones in practice. At the same time, working on near-term applications exposes organizations to new ways of formulating problems, paving the way for identifying complementarities and ideas for redesigned processes that will facilitate higher returns in the long run. In this sense, near-term opportunities are entry points that allow companies to build intuition about where quantum methods might offer advantages, how those advantages could translate into economic value, and where further experimentation might open up more consequential possibilities over time.</p>
<p><strong>3. Create space for longer-term experimentation and process innovation. </strong>Finally, companies need to recognize that the largest returns from enabling technologies typically come from changes in processes, not from incremental improvements to existing tasks. Classic research on technological change has found that established companies often struggle to benefit from emerging technologies not because they fail to recognize them but because they try to evaluate and implement them using the performance metrics, processes, and incentives of the existing business.<a id="reflink10" class="reflink" href="#ref10">10</a> As a result, promising new technologies are either underfunded, misapplied, or forced into use cases that fit current operations rather than future opportunities.</p>
<p>Companies must create spaces where experimentation with emerging technologies can proceed unencumbered by day-to-day operational metrics and short-term performance expectations. For example, in retail, the internet served as an enabling technology that generated most value when traditional brick-and-mortar processes were redesigned. Walmart invested early and persistently in e-commerce capabilities, data infrastructure, and digitally integrated supply chains, gradually redesigning its processes. Sears, in contrast, largely treated the internet as a peripheral sales channel layered onto existing brick-and-mortar processes and struggled to adapt its operating model as retail shifted online.</p>
<p>The implication for quantum computing is that organizational space should be deliberately created to enable exploration that is not tightly tied to immediate returns. Such experimentation may need to be separated from core operations, supported by different incentive structures, and evaluated based on learning rather than near-term financial impact. The goal is not simply to test quantum tools but to explore how quantum-enabled capabilities might eventually support new ways of delivering value.</p>
<p>Taken together, these strategies turn quantum computing from a waiting game into an active learning process. Rather than asking when the technology will be ready, managers can focus on whether their organization is developing the interpretive capability, experiential knowledge, and organizational flexibility needed to benefit when more powerful applications become feasible. In the context of enabling technologies, preparedness is about being ready to recognize, shape, and act on opportunities as they emerge. If and when quantum becomes useful for specific enterprise problems, those companies will be positioned to move quickly.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/why-businesses-should-experiment-with-quantum-computing-now/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Calibrate AI Use to the Decision at Hand</title>
				<link>https://sloanreview.mit.edu/article/calibrate-ai-use-to-the-decision-at-hand/</link>
				<comments>https://sloanreview.mit.edu/article/calibrate-ai-use-to-the-decision-at-hand/#respond</comments>
				<pubDate>Wed, 06 May 2026 11:00:43 +0000</pubDate>
				<dc:creator><![CDATA[Pedro Amorim, Amr Saleh, and Ulrika Cederskog Sundling. <p>Pedro Amorim is a professor at the University of Porto and cofounder of LTPlabs. Amr Saleh is a generative AI and optimization consultant at LTPlabs. Ulrika Cederskog Sundling is an investor, a board director, and a member of LTPlabs’s advisory board.</p>
]]></dc:creator>

						<category><![CDATA[AI Strategy]]></category>
		<category><![CDATA[Analytics Strategy]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Decision-Making]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Analytics & Business Intelligence]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Frontiers]]></category>

				<description><![CDATA[PPaint/Ikon Images On a rainy Tuesday in London, the leadership team of a consumer goods company reviewed two business decisions: “Where should we open our next five stores?” and “Should we pivot the brand toward wellness?” Generative AI had been used to support the decision-making process for addressing both questions. The team ended up with [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/2026SUMMER_Amorim-1290x860-1.jpg" alt="" class="wp-image-126950"/><figcaption>
<p class="attribution">PPaint/Ikon Images</p>
</figcaption></figure>
<p><span class="smr-leadin">On a rainy Tuesday in London,</span> the leadership team of a consumer goods company reviewed two business decisions: “Where should we open our next five stores?” and “Should we pivot the brand toward wellness?” Generative AI had been used to support the decision-making process for addressing both questions. The team ended up with plenty of plausible qualitative arguments for the proposed road map for store expansion — without data or analytics to support these recommendations. The tool had helped the team produce a polished narrative on the wellness pivot, along with a compelling deck advocating the strategic move, but stakeholder engagement was shallow, and there wasn’t a shared conviction that the organization was ready to move.</p>
<p>The meeting exposed the flawed assumption that all AI is the same and that every type of artificial intelligence supports decision-making equally. In reality, different decisions require fundamentally different AI roles. Some decisions are narrow: Objectives are clear, data is available, and outcomes can be measured quickly. Others are wide: Goals are contested, information is incomplete, and alignment matters as much as analysis. When leaders treat both decision types as the same, they predictably misapply AI technology, using generative tools where analytical engines are needed or where the real work is deliberation and commitment. The result is twofold: disappointing output that fails to support narrow decisions, and fragile buy-in and difficult execution for wide decisions that demand socialization and alignment.</p>
<p>That mismatch is showing up across industries. AI adoption is now widespread, yet many organizations still struggle to convert AI activity into measurable business impact. In its <a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai" target="_blank">2025 report on the state of AI</a>, McKinsey describes this gap starkly: 88% of companies now use AI in at least one function, but only around 40% are able to see a positive impact on the bottom line. In our work with executive teams, the pattern behind that gap is consistent. The pressure to use AI — amplified by headlines about generative and agentic systems — often outruns the harder discipline of deciding where AI should lead, where it should support, and what kind of AI fits the decision at hand. As a result, teams build impressive decks for problems that require more time for internal alignment, and they use conversational generative tools for decisions that demand rigorous analytics.</p>
<p>The solution is to calibrate AI’s role in the decision. By distinguishing between narrow and wide decisions, organizations can match the technology’s capabilities to decision characteristics — allowing AI to act as a decision engine when the decision is narrow and as a decision helper when the decision is wide.</p>
<h3>Narrow or Wide?</h3>
<p>An AI capability can play very different roles, depending on the decision it’s supporting. Analytical AI and generative AI can each create value across many contexts, but their effectiveness depends on matching them to the characteristics of the decision. Narrow decisions, such as where to open the next store, have clear objectives, usable input data, and fast feedback loops. Wide decisions, like evaluating a brand repositioning toward wellness, are multicriteria, messy, and politically charged.</p>
<p>Importantly, this is not a binary classification. Many leadership decisions are portfolios: A wide, strategic choice often contains narrow subdecisions that can be modeled and optimized. A brand pivot (wide) may include narrow components, such as message testing, media-mix optimization, pricing experiments, and demand forecasting. Each type of question is typically better suited to a particular AI approach.</p>
<p><strong>Narrow decisions are the familiar territory of analytical AI.</strong> They are well-defined problems where objectives can be specified, the space of possibilities can be modeled, and performance metrics are clear. Forecasting demand, detecting fraud, optimizing delivery routes, and choosing store locations are classic examples. In these domains, analytical AI — optimization, prediction, and causal modeling — can evaluate patterns and trade-offs at a scale and speed that humans cannot match.</p>
<p>In making narrow decisions, AI serves as a precise and tireless decision engine while decision makers focus on setting the objective correctly, providing high-quality inputs, stress-testing assumptions, and defining guardrails (such as constraints, thresholds, and exception rules). Managers must supervise how the system learns, how it performs under changing conditions, and what happens when reality diverges from the model.</p>
<p><a href="https://sloanreview.mit.edu/article/how-generative-ai-can-support-advanced-analytics-practice/">Recent research</a> shows how generative AI can complement analytical AI in narrow contexts: as an accelerator around the analytical core. It can help clarify problem framing, document data logic, and translate technical outputs into business language. Generative AI also helps to capture tacit operational knowledge that is often not documented in manuals or data sets. <a href="https://doi.org/10.48550/arXiv.2310.11589" target="_blank">Through dialogue, examples, and iteration</a>, it can make the hidden layer of human expertise easier to extract and reuse — while the analytical model remains responsible for the decision logic itself.</p>
<p><strong>Wide decisions are characterized by ambiguity.</strong> They typically involve competing priorities, evolving information about risk and success factors, and the need for alignment. Objectives are rarely singular; they typically combine financial, strategic, ethical, and political considerations. Entering a new market, repositioning a brand, redesigning an organizational structure, or navigating regulatory uncertainty are common examples.</p>
<p>In wide decisions, AI’s role must be carefully calibrated. Generative AI can help leaders synthesize diverse inputs, surface assumptions, frame scenarios, and articulate trade-offs in a way that makes the decision space more legible. Agentic AI can extend that support when it is designed as a goal-directed workflow that uses tools (search, retrieval, and analysis) to gather and organize material — with explicit checkpoints, traceability, and human review.</p>
<p>Managers using generative AI as described above must take care not to mistake fluency for understanding: A system can produce persuasive narratives while missing context, embedding hidden assumptions, or overweighting unreliable sources. Leaders should therefore treat AI as an amplifier of perspective rather than an authority. The best teams design their process so that AI helps people reason together — by broadening the evidence base and clarifying trade-offs — and avoid outsourcing judgment or commitment.</p>
<h3>Getting Practical About Approach</h3>
<p>Every function contains a mix of decisions that can be modeled and measured, and decisions that are ambiguous and alignment-dependent. Leaders should be able to distinguish a narrow decision from a wide one by applying a small set of diagnostic criteria. The goal isn&#8217;t to establish a perfect classification; it&#8217;s to set a useful calibration: How much of this decision can be formalized, measured, and iterated, and how much depends on judgment, values, and organizational commitment?</p>
<p>A practical rule of thumb is to treat the diagnostic as a scorecard. Consider the questions beside each of the diagnostic criteria listed below. If most of your answers are “yes,” the decision likely sits closer to narrow, and analytical AI can serve as an engine generating specific recommendations. If “no” dominates, the decision is closer to wide, and AI is better suited for seeking evidence, surfacing assumptions, and informing collective judgment.</p>
<ul>
<li>Objective clarity: Is the goal crisp and quantifiable (not just directionally appealing)?</li>
<li>Data readiness: Do we have relevant, reliable, reusable data — not just anecdotes?</li>
<li>Causal stability: Will historical relationships likely hold over the decision horizon?</li>
<li>Boundary transparency: Are the boundaries of the problem codifiable, or mostly contextual/political?</li>
<li>Feedback loop: Can we observe outcomes quickly and incorporate them into the next decision cycle?</li>
<li>Reversibility: Can we reverse or iterate this process cheaply, or is it a one-way street?</li>
</ul>
<p>Most important, managers should use those diagnostic questions to spot hybrid decisions. Many wide, strategic decisions contain narrow components that can be informed by analytical AI. When the overall decision is identified as wide, they should ask, “Which subdecisions inside it score narrow?” Those are candidates for analytical models, optimization, experimentation, and automated workflows.</p>
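<p>As an illustration only, the six-question diagnostic can be sketched as a simple majority-rule scorecard. The criterion names and the majority tie-breaking rule below are assumptions for the sketch; the article presents the diagnostic as a rough calibration, not a formal algorithm.</p>

```python
# Illustrative sketch of the narrow-vs-wide diagnostic scorecard.
# Criterion keys and the majority rule are assumptions, not the
# authors' method: the article treats this as a calibration aid.

CRITERIA = [
    "objective_clarity",      # Is the goal crisp and quantifiable?
    "data_readiness",         # Relevant, reliable, reusable data?
    "causal_stability",       # Will historical relationships hold?
    "boundary_transparency",  # Are problem boundaries codifiable?
    "feedback_loop",          # Can outcomes be observed quickly?
    "reversibility",          # Can the decision be reversed cheaply?
]

def calibrate(answers: dict[str, bool]) -> str:
    """Return 'narrow' if most diagnostic answers are yes, else 'wide'."""
    yes_count = sum(answers.get(c, False) for c in CRITERIA)
    return "narrow" if yes_count > len(CRITERIA) / 2 else "wide"

# Example: a store-location decision scores mostly yes.
store_decision = {
    "objective_clarity": True,
    "data_readiness": True,
    "causal_stability": True,
    "boundary_transparency": True,
    "feedback_loop": True,
    "reversibility": False,
}
print(calibrate(store_decision))  # narrow
```

<p>A wide decision such as a brand pivot would score mostly &#8220;no&#8221; on these criteria, while its narrow subdecisions (message testing, demand forecasting) could be scored separately and routed to analytical models.</p>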
<h3>Matching the Tool to the Objective</h3>
<p>Two recent cases that we’ve worked with illustrate what changes when organizations calibrate AI’s role to the decision.</p>
<p><strong>Churn prevention in retail banking.</strong> Choosing which customers to target in a retention campaign is a narrow decision: The objective is measurable, the data is typically abundant, and feedback loops are relatively fast. A European retail bank wanted to reduce attrition among high-value customers and approached the problem with analytical AI. A predictive model estimated each customer’s likelihood of churning within 90 days and triggered next-best actions calibrated to risk and customer value. The system was built to do what narrow decisions demand: convert structured signals into recommendations that can be monitored, tested, and improved.</p>
<p>Generative AI complemented the analytical core without displacing it. It translated unstructured customer signals — such as contact-center interactions and complaint narratives — into structured inputs that enriched the analytical model, and it drafted outreach scripts tailored to customer segments and likely pain points. It also summarized the drivers behind each recommendation so that front-line agents could act quickly and with context.</p>
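<p>The pattern of converting a churn score plus guardrails into a next-best action can be sketched as follows. The thresholds and action names here are entirely hypothetical; the bank&#8217;s actual model and decision rules are not described at this level of detail.</p>

```python
# Hypothetical next-best-action rule for a churn-prevention campaign.
# The 90-day churn score would come from a predictive model; the
# thresholds and action labels below are invented for illustration.

def next_best_action(churn_probability: float, annual_value: float) -> str:
    """Map a 90-day churn score and customer value to a retention action."""
    if churn_probability < 0.3:
        return "monitor"               # low risk: no outreach needed
    if annual_value >= 10_000:
        return "personal_outreach"     # high value, elevated risk
    if churn_probability >= 0.6:
        return "retention_offer"       # high risk, standard value
    return "targeted_email"            # moderate risk, standard value

print(next_best_action(0.7, 12_000))  # personal_outreach
print(next_best_action(0.4, 2_000))   # targeted_email
```

<p>The point of such guardrails is that the narrow decision stays monitorable: Each threshold can be tested, tracked, and tuned as outcomes accumulate.</p>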
<p><strong>Organizational redesign at a global insurer.</strong> Organizational redesign sits at the other end of the spectrum: It’s a wide decision due to a combination of ambiguity, competing priorities, and political constraints. A multinational insurer was considering shifting from a product-centric structure to a customer segment structure. The decision would cascade into reporting lines, incentives, technology investments, and cultural identity. It involved deciding which trade-offs the organization was willing to make and then building a commitment to live with them.</p>
<p>In this case, generative tools synthesized internal context — the strategic storyline leaders had been telling, how the organization described past reorganizations, and what had actually broken or worked — alongside external evidence, including relevant industry cases and organizational design and change management research. The value came from making it easier to reason about the choice: AI helped the team articulate a small set of coherent scenarios, anticipate second-order effects on stakeholders, and surface tensions between the stated strategy and the company’s current resource-allocation patterns.</p>
<h3>Six Steps for AI-Supported Decision-Making</h3>
<p>In our work with executive teams, we consistently see the same pattern. Once leaders distinguish narrow decisions from wide ones, they stop talking about AI at the conceptual level and start explicitly deciding how AI will be used in the decision process — and what will change as a result. Teams move faster on problems that are measurable, and they stop expecting automation to substitute for commitment where the decision is inherently political, multicriteria, or irreversible.</p>
<p>Leaders who want to see AI more broadly applied as a decision-support tool in their organizations can take the following steps:</p>
<p><strong>1. Inventory critical decisions.</strong> List the top 20 decisions (of the organization or function) and classify each as narrow or wide using the six-question diagnostic.</p>
<p><strong>2. Treat the framework as a portfolio.</strong> Fund a small number of narrow bets that can show measurable impact quickly, and stop “automation” pilots that are actually wide decisions in disguise.</p>
<p><strong>3. Stand up two playbooks.</strong> For narrow decisions, outline the analytics life cycle (data, modeling, monitoring, and exception handling). For wide decisions, establish a deliberation protocol (evidence standards, explicit assumptions, and verification checkpoints).</p>
<p><strong>4. Wire GenAI differently.</strong> Use it as an accelerator around narrow models (such as documentation and feature extraction) and as a synthesis partner for wide decisions (such as scenario building and pre-mortems) — with traceability and review.</p>
<p><strong>5. Instrument the decision.</strong> For narrow decisions, track accuracy, drift, and operational impact. For wide decisions, log prompts, sources, rationales, and decision checkpoints so that reasoning is inspectable and repeatable.</p>
<p><strong>6. Close the loop.</strong> Run post-decision reviews: Did the narrow engine hit targets and remain stable? Did the wide process surface key risks, real options, and the trade-offs that were at stake?</p>
<p>The message can stay simple: AI is not a monolithic technology but one that comes in different flavors with different capabilities. Gaining measurable benefits from it requires that leaders use the appropriate tool for the job at hand.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/calibrate-ai-use-to-the-decision-at-hand/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Behind the AI in the Newsroom: The Washington Post’s Vineet Khosla</title>
				<link>https://sloanreview.mit.edu/audio/behind-the-ai-in-the-newsroom-the-washington-posts-vineet-khosla/</link>
				<comments>https://sloanreview.mit.edu/audio/behind-the-ai-in-the-newsroom-the-washington-posts-vineet-khosla/#comments</comments>
				<pubDate>Tue, 05 May 2026 11:00:47 +0000</pubDate>
				<dc:creator><![CDATA[Sam Ransbotham. <p><cite>Me, Myself, and AI</cite> is a podcast produced by <cite>MIT Sloan Management Review</cite> and hosted by Sam Ransbotham. It is engineered by David Lishansky and produced by Allison Ryder.</p>
<p><a href="https://sloanreview.mit.edu/sam-ransbotham/">Sam Ransbotham</a> is a professor in the information systems department at the Carroll School of Management at Boston College, as well as guest editor for <cite>MIT Sloan Management Review</cite>’s Artificial Intelligence and Business Strategy Big Ideas initiative.</p>
]]></dc:creator>

						<category><![CDATA[App and Software Developers]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Information Sharing]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Customers]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[New Product Development]]></category>

				<description><![CDATA[In this episode of Me, Myself, and AI, host Sam Ransbotham speaks with Vineet Khosla, CTO of The Washington Post, about how AI is reshaping the way news is produced, delivered, and consumed. Vineet argues that journalism itself isn’t broken — but the formats people use to consume news are rapidly evolving, especially as audiences [&#8230;]]]></description>
								<content:encoded><![CDATA[
<p>In this episode of <cite>Me, Myself, and AI</cite>, host Sam Ransbotham speaks with Vineet Khosla, CTO of <cite>The Washington Post</cite>, about how AI is reshaping the way news is produced, delivered, and consumed. Vineet argues that journalism itself isn’t broken — but the formats people use to consume news are rapidly evolving, especially as audiences increasingly interact with information through AI. The conversation explores how the <cite>Post</cite> is experimenting with personalized AI podcasts, AI-powered research tools for journalists, and conversational news experiences that help readers understand not just what happened but why it matters and how it connects to other world events. </p>
<p>Behind the scenes, the <cite>Post</cite> is deploying artificial intelligence across the entire organization, and Vineet shares details about the organization’s “AI everywhere” philosophy.</p>
<aside class="callout-info">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/MMAI-S12-EX-Khosla-WashingtonPostAds-headshot-600.jpg" alt="Vineet Khosla"/>
<h4>Vineet Khosla, <cite>The Washington Post</cite></h4>
<p>Vineet Khosla, chief technology officer at <cite>The Washington Post</cite>, is a renowned AI engineer whose career has been marked by groundbreaking achievements. Before joining the <cite>Post</cite> in 2023, Khosla created Uber’s global maps routing system with cutting-edge AI tools. He was the first engineering hire for Siri’s natural language engine, and as a senior AI engineer with Apple, he played a central role in developing the core natural language understanding engine and the architectural framework that allowed the virtual assistant to operate on devices.
</p>
<p>Khosla has been working with AI since 2005 and is the holder of two patents and multiple white papers published on the subject. He earned a master’s in artificial intelligence at the University of Georgia and a bachelor’s in computer science at Pittsburg State University.</p>
</aside>
<p>Subscribe to <cite>Me, Myself, and AI</cite> on <a href="https://podcasts.apple.com/us/podcast/me-myself-and-ai/id1533115958" target="_blank" rel="noopener">Apple Podcasts</a> or <a href="https://open.spotify.com/show/7ysPBcYtOPVgI6W5an6lup" target="_blank" rel="noopener">Spotify</a>.</p>
<h4>Transcript</h4>
<p><strong>Allison Ryder:</strong> How can AI help companies meet customers where they are, especially when their behaviors and needs evolve quickly? Find out how one news outlet turns this challenge into an opportunity on today’s episode.</p>
<p><strong>Vineet Khosla:</strong> I’m Vineet Khosla from <cite>The Washington Post</cite>, and you’re listening to <cite>Me, Myself, and AI</cite>.</p>
<p><strong>Sam Ransbotham:</strong> Welcome to <cite>Me, Myself, and AI</cite>, a podcast from <cite>MIT Sloan Management Review</cite> exploring the future of artificial intelligence. I’m Sam Ransbotham, professor of analytics at Boston College. I’ve been researching data, analytics, and AI at <cite>MIT SMR</cite> since 2014, with research articles, annual industry reports, case studies, and now 13 seasons of podcast episodes. In each episode, corporate leaders, cutting-edge researchers, and AI policy makers join us to break down what separates AI hype from AI success.</p>
<p>Hi, listeners. Today we’re joined by Vineet Khosla, chief technology officer at <cite>The Washington Post</cite>. The <cite>Post</cite> isn’t just a newsroom. It’s a giant technology machine that delivers journalism to millions of people around the world every day. And Vineet leads the teams that build those systems behind the breaking news and audience experience and security and AI, we’re hoping based on the discussion today. So we’ll talk about how technology is shaping journalism and maybe a little bit about what audiences don’t see behind the scenes, and what the future of news might look like. Vineet, thanks for being here. </p>
<p><strong>Vineet Khosla:</strong> Thanks for having me, Sam. I’ve been listening to your podcast for a while, so it’s a pleasure to be finally on the other side of it. </p>
<p><strong>Sam Ransbotham:</strong> Maybe we can talk a little bit about what happens behind the scenes with the podcast. Let’s start with something that many listeners feel. I think consuming news in our modern world can be pretty overwhelming and fragmented and tough to understand. And that may be especially true for a younger audience who [is] more raised in a different digital world than I was. So from your side, what’s maybe currently broken about how we’re experiencing news, and what needs to change? </p>
<p><strong>Vineet Khosla:</strong> The way I view it is there is not something broken about news. If we zoom out, we should think about journalism as a discipline, not a format. When you start to think about it solely as a format, it may seem broken to the younger audience. The difference is they’re just consuming it very differently than you and me. I use this example: We used to just read the news, then came radio. We heard the news, then came TV. We watched the news, then came AI. We started talking to and asking the news. In all of these changes, the consumption of news actually increased. The value of news in our society actually increased. We are just consuming it very differently at different times of the day. </p>
<p><strong>Sam Ransbotham:</strong> That consumption is a big deal. I want to know only the news that I care about. I don’t want to hear stuff I don’t care about, but I want to be aware that the stuff I don’t care about is happening. I don’t want to be in a bubble. Other industries have really struggled with this, if you think about the streaming industry and retail and music. What is personalized news going to be for <cite>The Washington Post</cite>? </p>
<p><strong>Vineet Khosla:</strong> That’s a question I’ve grappled with for the last two and a half years. I’m not from the news industry. I come from outside. So when I landed here, I realized there are two things news does that [are] very important. One is it tells us what is important in the world, and then it tells us why it is important. That’s the sense making, right? The personalized aspect is taken over by social media. They already tell you what’s important. So by the time they come to us, there are very few things we are telling them [that are] different than they already know. </p>
<p>But the “why,” that is the core value that we provide. And that’s where I think we have to have a balance of [personalization] — you need to be data-driven, but you need to use your data almost like a compass, not a GPS. It is still the onus of the newsroom, a responsible ethical newsroom with journalistic standards, to make sure the news we give out to people is not so personalized that it becomes an echo chamber and a reinforcement of their beliefs. </p>
<p>It’s a hard thing to balance, because we understand looking at Big Tech outside, if you go deeply personalized, you will have [an] audience, you will have clicks, you will have money, you will have revenue. For our industry to balance both of these — meet the consumer where they are, give them the news they actually need, don’t give them too much when they’re not ready for it, but at the same time, make sure we are being very even and our perspective and our opinion is coming through — is very important. </p>
<p><strong>Sam Ransbotham:</strong> I think what you’re describing is a really difficult Goldilocks problem, which is you want to do enough but not too much. It’s not too hot, not too soft, just right. We want to know about the whole wide world that’s going on, but we also care about opinions that are closer to what my prior opinion was. I try to be pretty active about keeping news sources in my life that I dislike intensely. </p>
<p>How do you maintain journalistic integrity in that process then, when you’re choosing … the kinds of things that you focus on and don’t? This has been going on for years, so this is not a new problem. </p>
<p><strong>Vineet Khosla:</strong> I think it’s a multifaceted problem. First it actually starts with the newsroom. I do believe our newsroom, with its standards and the way they do reporting, they’re trying to put a very fair perspective out. What you will see if you come to our application is there are actually many different ways to consume [news]. You can read it. You can listen to it. We just started an AI podcast, where the AI chooses some articles that you might be interested in and turns it into a podcast. You have the option of going to the homepage, which is edited by our editors. This is the expert perspective on what is happening [in the world]. You can go to the “For you” tab and just read personalized news. </p>
<p>So from our side, what we ensure is we give you many options, and we educate you with good products and design [for] why these options exist. Hopefully somewhere between that, you get out of your echo chamber. </p>
<p>Now we want to go beyond that too. If you go to our homepage, you will see an old-style ticker at the bottom of our WashingtonPost.com, where we are letting other news organizations [show] what they’re putting on their homepage, almost for free, on our site to say, “Hey, these are other things that are happening,” because it’s quite possible we’re not going to cover everything in every perspective and to keep extending the service to the nation. I really think we need to, as a news company, try and give value to everyone’s life as much as possible. </p>
<p>We recently started something called Ripple. So it’s <a href="https://www.washingtonpost.com/ripple/" target="_blank">WashingtonPost.com/ripple</a>, where we are going to opinion sections across America and trying to bring their content, [through] partnerships with them, to our consumers, to our users. It’s a hard problem, but you do need people who are solving it, and you also need people on the other side who want it to be solved, people like you. </p>
<p><strong>Sam Ransbotham:</strong> That’s a really fascinating idea, the idea of trying to surface those ripples from lots of different places. Let’s be frank: You’re not going to be perfect at doing that, but I think that’s inevitably part of the process. The cost of not doing it is probably more extreme than the cost of making some algorithmic problems there. </p>
<p>I know you’ve had trouble with the podcast in terms of personalization and trying to get that extreme personalization. Can you share with us a bit about how that project has gone? </p>
<p><strong>Vineet Khosla:</strong> We realized there is a market need in the middle of [heavily] curated editorial podcasts. I almost view them as expert opinions. These are the experts of our company who are saying, “These are the important things you need to know” versus “Sometimes [these] things are not important to the world, but they’re important to me.” I’ll give you one very good example [that] really made me a fan of this product. </p>
<p>[Do] you remember when the Texas redistricting fight was happening, and there [were] a lot of court cases going on? At the same time in India, there were elections happening in the state of Bihar. We covered these two stories, and somehow the podcast, given my interest, talked about the redistricting, the laws, and how the party in power over there is trying to hold on to the votes. And then it contrasted with the elections of Bihar, where some of this might have already happened in the past, and therefore the party that’s winning is banking on the wins coming from those types of redistricting efforts. … Neurons fired in my brain, Sam. I’m like, “Whoa. This is so interesting. I have seen this side in India, and I see what’s happening in Texas. I kind of don’t like it, but thank you for showing me these two [stories].”</p>
<p>Now if you imagine an expert’s view, to 99% of [the] population of America, that second story is not relevant. And even if they’re interested in it, it’s not really going to fire the neurons in their brains the way this podcast did for me. I think that is the gap we are trying to really hit with personalized podcasts. It’s because this is all based on our reporting; this is all factual stuff we did at <cite>The Washington Post</cite>. We did it because we think this is important for the world to know. </p>
<p>We worked very closely with our newsroom. We tested it very well. And yes, it’s not going to be perfect. It made a few mistakes. Once we launched it, we made sure when we presented it to our consumers, with our design, with the disclaimers, with the warnings, [that] they [understood] that this is a beta experimental product. They understood that there would be mistakes that happen, and we were all as a team watching it very closely. </p>
<p>In terms of technical [issues], one thing we realized was it has a lot of trouble when you have a lot of third-person references in an article. Let’s say it says, “Vineet said this, and Jennifer said that,” and the following sentences [include] “he” and then “she.” To us, it’s immediately clear who the he is and who the she is. To AI, it might not be. Once we started figuring out those types of problems, we really went back, changed our scripts, changed our prompts. [We] made sure we didn’t change the writing of the article. We just made sure on the AI side [that] we have a way of solving this problem. And the proof of that is we have published over 100,000 personalized podcasts by now. The completion rate of these podcasts is actually higher than the completion rate of [the] normal podcasts that we publish. </p>
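<p>[Editor’s note: The ambiguity Khosla describes is a classic coreference problem. As a toy illustration — the prompt wording and function below are hypothetical, not <cite>The Washington Post</cite>’s actual pipeline — one mitigation is to have the script-generation prompt explicitly resolve pronouns before narration:]</p>

```python
# Hypothetical sketch of a pronoun-resolving podcast prompt.
# The instruction text and function name are illustrative
# assumptions, not The Washington Post's actual system.
COREF_INSTRUCTION = (
    "Rewrite the article below as an audio script. Before writing, "
    "replace every ambiguous pronoun (he, she, they, his, her, their) "
    "with the full name of the person it refers to. Do not change "
    "any quotes, facts, or the article's wording otherwise.\n\n"
    "Article:\n{article}"
)

def build_podcast_prompt(article: str) -> str:
    """Wrap an article in the coreference-resolving instruction."""
    return COREF_INSTRUCTION.format(article=article)

article = ('Vineet said the launch went well, and Jennifer said it was '
           'early days. He pointed to completion rates; she urged caution.')
prompt = build_podcast_prompt(article)
print(prompt.startswith("Rewrite the article"))  # True
```

<p>[The point of the sketch: pushing disambiguation into the prompt leaves the journalism untouched while the AI side absorbs the fix, which matches the approach described above.]</p>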
<p><strong>Sam Ransbotham:</strong> That’s a beautiful example because it’s going to connect some things, it’s going to miss some things, but maybe when it does, it’s going to be amazing. One of the enduring themes of our show seems to be this exact idea of improvement. One of our early podcast guests mentioned the idea that <a href="https://sloanreview.mit.edu/audio/the-first-day-is-the-worst-day-dhls-gina-chung-on-how-ai-improves-over-time">the first day is the worst day</a>. So when you put this experiment out, you’re going to discover some stuff, like the pronoun problem you mentioned, and how it’s obvious to us which story connects to which one. But you’re going to fix those, and it’ll keep improving. </p>
<p>What’s your plan for this product, for this personalized podcast? I’m already quite jealous [of your 100,000 episodes]. I think we’re just over a hundred, and it’s been exhausting. </p>
<p><strong>Vineet Khosla:</strong> Well, I don’t think it replaces the experts. You know, 100 is a lot of work, [and] 100,000 is still a lot of work on the team [that’s] building it because we review problems that … come in. So the work happens, I guess, on [a] different side. For us it happens on the QA side. </p>
<p>But I would zoom out of personalized podcasts and maybe talk more about the AI efforts we are doing over here. And then it would all make sense, right? The way we are viewing AI in our company is we call it “AI everywhere.” It’s an “AI everywhere” approach where we want it in the production of the news. There’s so much [generative AI] can do. </p>
<p>We have a tool called Haystacker, which can go through hours and hours of videos. In what would take people weeks, now our journalists can go and say, “I want to find that person with [the] red cap,” [and the AI goes] through Jan. 6 riot videos and gets that type of information. </p>
<p>You have probably heard all about how big data sets are no longer a thing journalists fear. They don’t have to manually read them. They can really ask them intelligent questions. So we’re building a lot of tools internally for that side. So that’s one big pillar, [using] AI to help the core mission we have of journalism. </p>
<p>The second … is consumer facing. That’s where [our] AI podcast, “Ask the Post AI,” [and story] summaries … come in. In the case of the AI revolution, I feel like the audience moved before we moved, right? When there was an internet revolution, people had to go buy computers, they had to learn it, they had to get on the web browsers, and then the newsrooms moved to a website. In the world of AI, the audience went overnight. </p>
<p><strong>Sam Ransbotham:</strong> I want to push back a little bit on this Haystacker. I really like that name. What you’re saying is “Hey, you want to go through that haystack and do it with artificial intelligence, and find all those needles.” It’s certainly true that we’ve got a lot more content in the world to go through. It’s staggering the amount of things that are happening. We’re getting a lot more content. Are there more needles in that content? Or is there better discovery of the existing needles, or is a lot of the hay that you’re sifting through just a lot of left-tailed junk? Does that make sense? </p>
<p>When I think about a haystack, I think, “OK, let’s grow the whole pile, and when we grow the whole pile, we’ll have more needles because we’ve got more hay.” But we may just be hiding those other needles better.</p>
<p></p>
<p><strong>Vineet Khosla:</strong> Both things are right. So let me start [with] the Haystacker project. The name came [from] we are finding a needle in a haystack because we actually already had a haystack. Somebody gave a reporter a lot of videos. Somebody gave a reporter a lot of data and said, “Hey, something’s going on over here,” and it would take them two, three weeks to go through it. So we just help them. We are helping them find that needle instead of them watching it frame by frame. So that’s really the origination of this tool. And this is one of the many tools. A lot of news companies are building these tools. </p>
<p>But going back to your bigger question [about] there is a whole lot more data, and most of it is not interesting. We don’t think it is the job of AI to find all those interesting things and serve them to you without a journalist involved in the middle. So the journalist is usually [using] their instinct, asking questions, trying to find more out of it. And I’m sure you can get to a world where you have really curated data sources. You can take Department of Labor reports out, right? And our journalists use those reports, and they create stories out of [them]. </p>
<p>So when you go to “Ask the Post,” and you say, “Hey, what was the unemployment rate in 2013 in [the] agriculture sector?” we may or may not have written about it in a news article. But if [we have access to] one of those data sources that our journalists trust and use, I think it’s fair to use it and give the answer to the question. But once again, there is a newsroom in the loop, like that verification of data. And I think that makes for a little bit higher quality than the general-purpose, internet-hoovering ask engines. They have their own place; I’m not taking a dig at them. I’m just saying there’s a different place for that, and what we are trying to build over here in <cite>The Washington Post</cite> is if you are in the market for trusted news and journalism, and you want some verified facts and have confidence, you should start with us. </p>
<p><strong>Sam Ransbotham:</strong> Let’s tie back to how you started this process. You started talking about why. And right now that why has to be part of that; otherwise, like you say, that’s a sharp contrast between the useful search engines, which produce a list but do not produce the why. As I say that, though, I think about modern search technology, and it seems to be trying to use artificial intelligence to move toward more of a why and more explanation. But you were pretty clear about the role of your journalists in this process. </p>
<p>So maybe expand a little bit on that. Where are you automating? What absolutely requires human judgment? How are you figuring out where those lines are? We could talk about individual examples, but what’s the process for figuring out how to decide? </p>
<p><strong>Vineet Khosla:</strong> It goes back to AI governance and policies around how we are using AI in the company. We broke it down into three parts. The easiest one I’ll talk [about] first is infosec. We got our infosec team involved, and we said, “Listen, you need to tell us how to not mess it up really bad. You need to tell us what’s happening on the bubble in terms of security and put a policy out.” [This] is easier for us because we are using a [large language model] that we are hosting on a private instance. </p>
<p>Then comes the newsroom aspect: The newsroom and the journalist sat down, and they’ve decided for themselves how they want AI to show up in the work they do — how they will use it, how they will attribute to it, what are the do’s and don’ts. </p>
<p>And then the third aspect is the consumer. This is the tricky aspect because this is what you typically think of as a product, and the approach we have taken is using good design. We want to always inform our consumers, our audience what they are consuming, how much of this is from AI. And it’s a spectrum, right? </p>
<p>Let’s take the example of summaries. We still label AI summaries — “this is an AI summary” — but the way I see people use it, the number of people who actually look at the disclaimer or hit the thumbs-down button because they didn’t like it is moving down. It’s almost to the point that nobody is shocked that we have an AI summary, and none of the users are bothered about it. But I’m pretty sure if we put out a full AI-generated video — which we haven’t done so far, and we don’t plan to — we would put stronger disclaimers. </p>
<p>So at a product level, we want to lean on design and consumer behavior to make sure we are always informing them when they are using something [that] is AI or not. </p>
<p><strong>Sam Ransbotham:</strong> Let’s jump forward though. If we were sitting here together in a decade, you’ve got to be thinking about the direction that the news experience is going. And you’ve mentioned the read the news, listen to the news, watch the news progression that’s happened. You’ve thought about this a lot. Tell me what you think is going to happen in the next decade or so.</p>
<p><strong>Vineet Khosla:</strong> If I was that smart, Sam. … </p>
<p><strong>Sam Ransbotham:</strong> You wouldn’t be talking to me?</p>
<p><strong>Vineet Khosla:</strong> I would be somewhere in New York in the hedge fund business, making my bets. </p>
<p><strong>Sam Ransbotham:</strong> OK, we can go shorter. Maybe you can give us a little hint about next month, and we can try to expand from that. </p>
<p><strong>Vineet Khosla:</strong> I do sincerely believe the need for news, and for quality news, has never been greater. Journalism is a discipline, not just a format. We need to keep adapting our journalism to different formats and use technology where it can help us. And that’s what we intend to keep doing at <cite>The Washington Post</cite>. </p>
<p>You will start to also probably hear … the ideas around liquid content. Think about the content the way we do. Typically news lasts 24 hours, right? After 24 hours, every newsroom will tell you the story dropped off. They take it off the homepage, people stop talking [about] it. You do a deep investigative piece, maybe [it lasts] seven days. We will pin it somewhere, people will share it, it will have longer legs. But no matter what, after that, it just drops off.</p>
<p>I see a world where people’s curiosity drives the news. News can literally live in infinite forms for a long period of time because somebody could come back and start asking [a] bunch of questions. They could start asking questions, or they could say, “Can you help me write up a report on the change in [Immigration and Customs Enforcement] tactics between [Washington, D.C.] and Minnesota? I really want to understand what was happening in the world at that time [when] it became more violent than it used to be in the past.” I do think this unlocks more news. It actually grows the market more than [the initial] fear of shrinking. And that’s always the fear, right? </p>
<p>When a new technology comes, [there is] first a very genuine fear of shrinking. I don’t want to deny that. Honestly, as an engineer, I see what Claude Code has done in the last two months, and I’m like, “Whoa, there goes my backup career choice. I guess I’m not going to be a super short Java programmer anymore.” But once you get past the fear, I think this grows. AI helps us grow. As long as people and their curiosity and the need to get verified news, information, facts exist, this is going to be good. So that’s the bear. What do you call it in the stock market — the positive side? </p>
<p><strong>Sam Ransbotham:</strong> You [need to know] that if you’re going to switch to hedge funds. </p>
<p><strong>Vineet Khosla:</strong> Bull is positive. Bear is negative. As you’re realizing, my future career choices are quite limited. </p>
<p><strong>Sam Ransbotham:</strong> You better stick with Java. </p>
<p><strong>Vineet Khosla:</strong> I’ll stick with Java. But I also do see there is risk around trust. When I look at the future, the thing that worries me the most is the trust of consumers used to be with the mastheads. You would read a newspaper because you trusted that there were standards and procedures and professionals. And then in our lifetime, I [saw] the trust move to creators. People started trusting creators more. They were more influenced by people on Twitter. They were more influenced by Instagram and TikTok people who were telling them the news. And I thought about it. I’m like, “What’s going on over here?” </p>
<p>One is our news did not adapt fast enough, right? That’s true. We did not meet the consumer where they are. But we as humans just generally trust other humans. We trust voice. We trust language. No matter what part of the world you are [in], if somebody speaks any other language, you know that you’re in [the] company of intelligence. </p>
<p>In fact, if I could go back to my Apple days, we had this anecdote. When Siri came, it was the first voice. It was the first voice interaction with your machines. People could talk to it. And then Apple Maps came at the same time, and we had a few incidents where we had wrong data, and people would go on dirt roads and get stuck. The consistent complaints we used to get is “Well, Siri told me to go there.” And that’s when we realized the Siri voice and the Apple voice being the same voice was actually a problem because [users] were putting more trust in it than they should. Their eyes were showing this road doesn’t exist, but they would turn right because Siri told them to. </p>
<p>So I think this is what happened to us: The trust moved from mastheads to people because naturally as humans we trust other humans a little bit more. What worries me is as these AIs become almost a better human than a creator, because they can talk back to you, they can be deeply personalized, they can understand you more than a creator does, I fear the trust will move to the AIs even more than it was with the humans. </p>
<p>Now, given that, what do we do? That’s my hypothesis. The trust to AI that people will have, the relationship we will have, will be very deep. I think the onus is on us, in the news, in the journalism world, to build equal types of experiences so the consumer doesn’t get locked in with a couple of big options that exist in the world outside. I feel hopeful when I see things like MCP protocols come out.</p>
<p><strong>Sam Ransbotham:</strong> Model context protocols. </p>
<p><strong>Vineet Khosla:</strong> Model context protocols. I see agent-to-agent conversations happening. I see enough companies out there, big tech, small tech startups, [that] are working down this path of saying, “Hey, if my agent needs news, I want to connect it with your agent so it can get the right verified news.” So I’m hopeful also, but I’m also very worried about the trust. I want to make sure it stays with people who deserve it. </p>
<p><strong>Sam Ransbotham:</strong> Actually, there are four or five things that are pretty fascinating there. One, I had not really thought about that transfer of trust between the different Siri products. … My gut reaction, my naive approach would have been to say, “Hey, that’s good that trust transfers.” But what you pointed out is that when you have two different products with different base levels of accuracy, you might not want to transfer that trust. That’s an interesting way of thinking about that. I naturally thought, “Hey, more trust is better.” But you can actually signal this is something that should not be trusted with a more robotic voice, for example. </p>
<p>You touched on Siri. Let’s back up here and talk about how you have not always been at <cite>The Washington Post</cite>. Tell us a little bit about how you got to where you are there and Siri as a part of that journey. </p>
<p><strong>Vineet Khosla:</strong> Back in my undergrad days, I got introduced to AI, and I kind of got seduced by the idea of machines doing all the work for me. I was like, “This is great. I’m going to go get a master’s in artificial intelligence, so I can just sit back and relax.” That led to my first job in the mortgage industry. We used to do these AI models for loans. If you remember, the year being 2007, when the great mortgage crisis and the financial collapse happened, my entire industry got wiped out. Turns out nobody was listening to AI when it came to loans. </p>
<p>But that one door closed and a universe opened. I was contributing some open-source code. The founders of Siri saw my code. They invited me to apply for an interview. So I went over to Silicon Valley, and then I spent the next 10 years working with them, building Siri. We were the voice-driven AI for our time, and for the longest time, until Alexa came and Google Assistant came, and that whole universe opened up. </p>
<p>[After] about 10 odd years, I took a hard right turn and I went into Uber Maps. I ran the team that was building the routing algorithms. It was a whole lot of fun. It [involved] graph search. It was hardcore computer science, right? Graph search is as computer science as you get. I really loved that stint. After doing that for about four years, LLMs came on the scene. Then I was like, “OK, I’m going back to my old world of natural language processing.” And I wanted to do something over there. </p>
<p>So I took some time off from Uber. I thought I’m going to reeducate myself. I bought some gardening tools. My wife got really worried. She’s like, “How long are you going to reeducate yourself? You have too many tools over here.” But this <cite>Washington Post</cite> opportunity came, and all the neurons in my brain fired. I said, “Listen, this revolution is all about language. It’s all about knowledge. This is what newsrooms are. They are the repository of language. They are the masters. They are the experts. They have all the knowledge and information.” And then I interviewed with <cite>The Washington Post</cite>; they are a great team. I interviewed with [owner] Jeff Bezos, and finally I was like, “Yes, this is what I want to do as my next chapter in life.” </p>
<p><strong>Sam Ransbotham:</strong> There’s a whole bunch of things to push on there. One part of that I wanted to pull on, you glossed over very quickly, was that you had made some open-source contributions, and people at Siri noticed it. And that led to [you] being involved with Siri, which led to the Apple acquisition and your involvement there. I particularly like that because I’m a very big proponent in this idea of contributing things. [When] we think about the incentive for contribution, that’s a great story for how being interested, being curious about technology and working on something, and providing evidence of that through an open-source project — there are other ways besides open-source projects, but that’s one great way — can cascade into a very interesting arc around how that developed. </p>
<p><strong>Vineet Khosla:</strong> Now that’s true. I got lucky in a lot of ways because I was doing something that people were interested in, and that opened up this opportunity. You’re very right. I do think when you’re early on in your career you should dabble with things a whole lot more [and] then become an expert in [it] because you don’t know who is looking. </p>
<p><strong>Sam Ransbotham:</strong> You say luck, though, and I do think that there’s a big part of that luck, but luck only combines well with working on something at the same time. I’ll also make the snide comment that one part of the story I’d like to gloss over is your master’s in artificial intelligence was from the University of Georgia, and I’m a Georgia Tech person, so I want to quickly gloss over that. You can have bad luck as well. </p>
<p><strong>Vineet Khosla:</strong> No, I actually do think it’s an important one. I have deep, deep respect for Georgia Tech. Of course, you have [an] amazing computer science program, robotics program, AI program. What University of Georgia was offering uniquely at that time, and still does, is its interdisciplinary program. So I studied language, I studied philosophy, I studied the theory of mind, I studied first-order logic, and then I also studied all this statistical AI, which is basically 99.99% of the AI as people understand it now. So congratulations, you guys won. </p>
<p><strong>Sam Ransbotham:</strong> One other part of that was you mentioned graph-based [work]. Why do you think that the graph-based approaches are so interesting? Why did that catch your eye?</p>
<p><strong>Vineet Khosla:</strong> Well, it was a classic routing problem. We were doing maps and routing, so you have to route over graphs and edges and nodes. Those algorithms, you studied them in school, right? That’s what caught my interest. </p>
<p>Now for Uber, there was a twist. The twist was that routing for mass transit — and when I say mass transit, I don’t mean buses; I mean taxis and Ubers — is very different from personal routing. </p>
<p>We settled on a metric, which was 10 meters or 10 seconds. If your map is wrong by 10 meters, or your ETAs are wrong by 10 seconds, you don’t have a great experience. If your Uber stops 10 meters farther away than where you are, you are running to catch it. You’re putting yourself in an unsafe situation. Maybe you’re crossing the street. If you didn’t reach [it] in time, and your Uber is standing over there, maybe that guy’s getting a ticket, the traffic is backed up, the cops are on the case. </p>
<p>So for us, the level of accuracy required was actually much higher than what Google and Apple need. And we had to scale nonlinearly. With Apple and Google, the number of phones they sell is the number of map directions that will happen, while we [were] trying to balance a market. So for one rider, you would probably reach out to 100 drivers to see when they can get to them. And similarly, for 100 drivers, you reach out to 100 riders. It’s possible that the driver [who’s] closest to me is five minutes away, and the driver [who’s] closest to you is one minute away. But I might switch the order of drivers so we both get a driver in two minutes, and then the market is balanced. Otherwise, I would have canceled it because mine was five minutes away. </p>
<p>Once you start poking at [the problems], you see this is a very different routing problem. Of course, graph search and the routes and the Dijkstra [algorithm] is at the heart of it, but the layers we had to keep putting on it to get to a balanced marketplace [were] just very exciting. No one had really done that before. </p>
<p><strong>Sam Ransbotham:</strong> That seems fun. Actually, you mentioned Dijkstra’s algorithm and these things. It makes me happy to think that these core ideas still maintain. I mean this matching problem you just described is a classic example of the generalized assignment problem. These are some root problems in operations research and in graph theory and mathematics. It’s fun to see that not everything is statistically picking the next probable word. [I’m] glad to see some of these old-school things come through and come back. </p>
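<p>[Editor’s note: The rider-driver swap Khosla walks through is exactly this kind of assignment problem. A minimal sketch — a toy brute force, not Uber’s actual dispatch code; names and the ETA matrix are illustrative — that minimizes the worst pickup time reproduces his two-minute example:]</p>

```python
from itertools import permutations

def balance_matches(eta):
    """Assign one driver per rider, minimizing the worst ETA.

    eta[r][d] = minutes for driver d to reach rider r.
    Brute force over all assignments -- fine for a toy example;
    real dispatch systems solve this at scale with assignment solvers.
    """
    n = len(eta)
    best = None
    for drivers in permutations(range(n)):
        waits = [eta[r][d] for r, d in enumerate(drivers)]
        key = (max(waits), sum(waits))  # worst wait first, then total
        if best is None or key < best[0]:
            best = (key, list(enumerate(drivers)))
    return best[1], best[0][0]

# Khosla's example: rider 0's closest driver is 5 minutes away,
# rider 1's closest is 1 minute away -- but swapping the matches
# gives both riders a 2-minute pickup.
eta = [[5, 2],
       [2, 1]]
assignment, worst = balance_matches(eta)
print(assignment, worst)  # [(0, 1), (1, 0)] 2
```

<p>[Greedily giving each rider their nearest driver yields waits of 5 and 1 minutes; the globally balanced assignment yields 2 and 2, which is the swap described above.]</p>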
<p>Vineet, this has been a fascinating look at where journalism and the technology behind it, I think, may be heading. The future of news clearly seems more personalized and more AI-powered in many ways, and more complicated in many ways. And I’m glad that you and others are working on it. Thanks so much for joining us today. </p>
<p><strong>Vineet Khosla:</strong> Thanks for having me, Sam.</p>
<p><strong>Sam Ransbotham:</strong> Thanks for listening. On our next episode, I’ll talk with Andrew Palmer, a journalist at <cite>The Economist</cite>. We’ll learn how another news outlet is thinking about AI. Please join us.</p>
<p><strong>Allison Ryder:</strong> Thanks for listening to <cite>Me, Myself, and AI</cite>. Our show is able to continue, in large part, due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/audio/behind-the-ai-in-the-newsroom-the-washington-posts-vineet-khosla/feed/</wfw:commentRss>
				<slash:comments>1</slash:comments>
							</item>
					<item>
				<title>The Innovation Advantage GenAI Can’t Give You</title>
				<link>https://sloanreview.mit.edu/article/the-innovation-advantage-genai-cant-give-you/</link>
				<comments>https://sloanreview.mit.edu/article/the-innovation-advantage-genai-cant-give-you/#respond</comments>
				<pubDate>Mon, 04 May 2026 11:00:55 +0000</pubDate>
				<dc:creator><![CDATA[David Schonthal. <p><a href="https://www.kellogg.northwestern.edu/academics-research/faculty/schonthal_david/" target="_blank" rel="noopener noreferrer">David Schonthal</a> is a clinical professor of strategy, innovation, and entrepreneurship at Northwestern University’s Kellogg School of Management and coauthor of <cite>The Human Element: Overcoming the Resistance That Awaits New Ideas</cite> (Wiley, 2021).</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Competitive Advantage]]></category>
		<category><![CDATA[Competitive Strategy]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Innovation Management]]></category>
		<category><![CDATA[Innovation Process]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Innovation]]></category>
		<category><![CDATA[Innovation Strategy]]></category>

				<description><![CDATA[Eliot Wyatt/Ikon Images For most of modern business times, competitive advantage belonged to whoever had the best ideas. Better ideas meant better products, which meant more customers, which meant more revenue and profit. The entire innovation industry — consultancies, design firms, brainstorming retreats fueled by sticky notes and gallons of La Croix — was built [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Schonthal-1280x860-1.jpg" alt="" class="wp-image-126895"/><figcaption>
<p class="attribution">Eliot Wyatt/Ikon Images</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">For most of modern business history,</span> competitive advantage belonged to whoever had the best ideas. Better ideas meant better products, which meant more customers, which meant more revenue and profit. The entire innovation industry — consultancies, design firms, brainstorming retreats fueled by sticky notes and gallons of La Croix — was built on this premise: If you could generate more and better ideas than your competitors, you would win.</p>
<p>That advantage has been vaporized by AI. </p>
<p>Generative AI has turned ideation into a full-blown utility. Today, anyone with a $20 subscription to a GenAI tool can instantly generate 100 product concepts. That has rendered the raw material of innovation — ideas — as abundant, accessible, and cheap as electricity. And here’s the thing about electricity: Nobody competes on it. You compete on what you build with it. Which means the competitive advantage has shifted upstream, from the solution to the problem — specifically, to how you identify and <em>frame</em> the problem in the first place.</p>
<p>This is something I’ve taught for years — to executives, MBA students, and others — going back to my time as a designer at IDEO. It is called Question Zero: the question before the question. Before you ask, “How do we solve this?” you need to ask, “Are we even looking at the right problem?” The quality of innovation has always been determined by the quality of problem framing. But until recently, most organizations could get away with mediocre problem framing. Why? Because ideas were scarce enough to be valuable on their own. </p>
<p>That’s no longer the case. When everyone has access to the same idea-generation engine, the remaining edge is the insight that tells you where to point your business. GenAI won’t give you this insight, though it can surface data and patterns that help <em>you</em> see it. Let’s examine why businesses continue to frame the wrong problem, examples of startups and established businesses reframing successfully, and how to get started.</p>
<h3>Why Most Organizations Frame the Wrong Problem</h3>
<p>If problem framing is so important, why is everyone so bad at it? </p>
<p>It’s because the “best” problems — the ones that lead to the most valuable, genuinely differentiated solutions — are almost always hidden. And they’re hidden for a specific, annoying reason: The people who experience them can’t tell you about them.</p>
<p>This is something my colleague Loran Nordgren and I discuss extensively in our book, <cite><a href="https://www.humanelementbook.com/" target="_blank" rel="noopener noreferrer">The Human Element</a></cite>. Users experience friction with your product, your service, your entire category — but they can’t explain it. They know how they feel but not <em>why</em> they feel it. The friction is real. The self-awareness is nonexistent. </p>
<p>Ask a customer why they abandoned your app and they’ll likely say, “I got busy.” The real answer — the one hidden in the emotional recesses of their brain — might be that your onboarding flow made them feel like they’d accidentally wandered into an advanced calculus class. They’re not going to tell you that, because they don’t even know that’s what happened. They just know they stopped opening the app.</p>
<p>This means that the standard problem-identification toolkit — surveys, focus groups, net promoter scores, quarterly voice-of-customer decks — captures only what people can and will articulate. The bad news is that what people can and will articulate is, at best, the surface problem. Understanding the surface problem leads to incremental solutions, which, by definition, are undifferentiated. You end up competing on features, then price, then “vibes.” This is not a strategy; it’s a slow descent into commodified oblivion.</p>
<p>The deeper problem — the reframed one, the one worth solving — lives in the gap between what people <em>say</em> and what they <em>do</em>. Finding that gap has always required the kind of deep, patient observation and investigative interviewing that most organizations can’t afford or feel that they don’t have time for; it’s something that doesn’t lend itself easily to a slick 2x2 framework in a PowerPoint deck. So most companies just skip it and go straight to brainstorming, which they consider the fun part.</p>
<p>AI changes this equation. Not because it replaces human insight — AI has no insight; it has pattern recognition and a <a href="https://www.nbc.com/nbc-insider/stuart-smalley-snl-who-played-him-movie-al-franken" target="_blank" rel="noopener noreferrer">Stuart Smalley</a> tone of relentless encouragement — but because it can surface the behavioral patterns that <em>lead to</em> human insight at a scale and speed no human team can match. </p>
<p>Ultimately, then, AI is not the insight but the high-powered telescope that makes the insight visible.</p>
<h3>The Startups That Won by Reframing</h3>
<p>The clearest proof that problem reframing drives differentiation comes from startups that have broken through in a big way in the past two years — not by having better technology but by asking Question Zero about problems everyone else had framed in less original ways.</p>
<p>Take <a href="https://cursor.com/" target="_blank" rel="noopener noreferrer">Cursor</a>, an AI-powered code editor that hit $1 billion in annualized revenue and a $29 billion valuation in 2025. Every other company in the space framed the problem the same way: “How do we help developers write code faster?” GitHub Copilot was already solving that, and solving it well. But Cursor’s founders — four MIT graduates barely old enough to rent a car without extra fees — saw something different. Developers weren’t actually spending most of their time writing code. They were spending it <em>reading</em> code: navigating unfamiliar code bases and trying to understand what someone else built three years ago at 2 a.m. The bottleneck wasn’t composition. It was comprehension. </p>
<p>That reframe — from “write faster” to “understand better” — produced an entirely different product, an entirely different company, and an entirely different, much-higher-value outcome. Same market. Same underlying technology. Very different problem solved.</p>
<p>Meanwhile, <a href="https://www.speak.com/" target="_blank" rel="noopener noreferrer">Speak</a>, a language-learning app that raised $78 million and reached a $1 billion valuation in late 2024, tells the same story in a different domain. The obvious framing in the sector was “How do we teach grammar and vocabulary more effectively?” Every competitor was running that race, and Duolingo was winning by several laps. Speak’s founders reframed the challenge: “Why are people who study a language for years still terrified to open their mouths and speak it?” The answer isn’t that there’s a knowledge gap. It’s a confidence gap — the fear of sounding foolish in front of others. But nobody describes their problem that way. No language learner walks into a class and says, “I’m here because of shame.” They say they need more practice. </p>
<p>So Speak built an AI conversation partner that lets learners mangle a subjunctive without anyone grimacing at them and then provides a gentle correction. The technology is impressive. But what really made it work was the reframe. The real problem was never learning. It was the emotional friction around learning.</p>
<p>In the productivity industry, <a href="https://fireflies.ai/" target="_blank" rel="noopener noreferrer">Fireflies.ai</a> reframed a common meeting problem. When everyone was asking, “How do we make meetings shorter?” Fireflies asked, “What if the real waste isn’t the meeting itself but everything that happens <em>after</em> it?” That includes the hours spent writing summaries nobody reads, chasing action items nobody remembers, and gently reminding Kevin that he did, in fact, agree to that deadline last Tuesday. The meeting wasn’t the problem; it was the evaporation of the meeting’s output. That reframe produced a product the “shorter meetings” crowd couldn’t compete with, because even though they might have been building a truly better mousetrap, they were in the wrong room from the start.</p>
<p>In each case, these startups didn’t out-ideate the competition. They <em>out-framed</em> them. They saw the same market and found a different problem within it — one that led to a solution nobody else was creating because nobody else had seen the problem the way they had. Ideas were never the bottleneck; the originality of the problem framing was.</p>
<h3>How Established Companies Use AI to Surface the Reframe</h3>
<p>The startups mentioned above achieved innovative reframing through intuition and proximity. Established organizations can deliver the same through AI-powered behavioral observation at scale. There are multiple examples of this among some of the best-known companies. The pattern is remarkably consistent: The AI agent doesn’t generate the reframe; it surfaces the behavioral data and patterns that make the reframe possible. The human still has to have the insight, but the AI makes sure there’s something to see.</p>
<p>For example, Netflix spent years framing its core challenge as a genre problem: “What genres does this subscriber prefer?” The AI’s job was to match users to categories — perfectly reasonable but also, it turns out, a pedestrian framing of the problem. By using AI to observe behavior at scale, Netflix discovered something no focus group sessions could have surfaced: People weren’t browsing by genre. They were browsing by <em>mood</em>. </p>
<p>The difference between a Friday night with friends and a Sunday alone after a bad week isn’t an action-vs.-comedy distinction — it’s an emotional vibe. Nobody ever submitted a feature request that said, “Let me search by how I feel.” But the behavioral data was unmistakable. To capitalize on this observation, in 2025 Netflix began testing an AI-powered search that lets users describe what they’re in the mood for rather than what category they want. The reframe — from genre preference to emotional need — didn’t emerge from a product road map. It emerged from paying attention to what people actually did, at scale.</p>
<p>Another example is Duolingo’s AI system, Birdbrain, which surfaced a reframe that no curriculum designer had considered. By analyzing billions of data points across dozens of language pairs (a learner’s native language and the language being learned), Birdbrain discovered that certain combinations had dramatically higher dropout rates, but in patterns nobody had expected. Spanish speakers learning Portuguese, for instance, were more likely to stop using the app when working on lessons where the two languages were almost identical rather than where they differed: Similarity breeds overconfidence. </p>
<p>Specifically, learners cruised through lessons feeling great, acing quizzes, collecting little digital trophies — right up until they quietly stopped opening the app altogether. All that reinforcement made them feel like they had mastered the new language when in fact they would have struggled to use it in the real world. No survey would have caught this. People don’t report confidence as a problem — they report it as a virtue. </p>
<p>The old frame: “How do we make lessons more engaging?” The reframe: “Where is false confidence silently killing retention?” That second problem can lead to a fundamentally different — and better — solution, such as more subtle tests of mastery for more similar language pairs.</p>
<p>In a different consumer-focused domain, Procter & Gamble’s AI crawled parenting forums and social media and surfaced a behavioral signal no product team would have thought to look for: Parents were using <em>adult</em> skin-care products on their babies. It wasn’t because they were fans of CeraVe’s minimalist branding but because they had given up on baby-specific products entirely: They’d decided that the whole category was either ineffective or filled with chemicals they didn’t trust. </p>
<p>The old frame: “How do we make a better baby lotion?” The reframe: “Why have parents stopped believing us?” That’s not a product problem. It’s a trust problem. And the reframe changes everything: the product, the messaging, the entire go-to-market strategy. You can’t “new and improved” your way out of a credibility crisis. P&G harnessed that framing to engage with and educate parents better through tactics such as product-level personalization and real-time quality and innovation feedback loops.</p>
<p>Then there’s the most meta example of all. Anthropic, the company behind the AI model Claude, built a tool called Clio — Claude Insights and Observations — that uses AI to observe how millions of people use AI. (Yes, it built an AI to watch people talk to their AI.) </p>
<p>Clio clusters millions of conversations and surfaces behavioral patterns invisible at the individual level. It discovered, for example, that Japanese users disproportionately discuss eldercare — a cultural trend and signal observable only at scale. Additionally, it found that users in crisis arrive through specific conversational pathways that single-message safety filters miss entirely. And in a particularly humbling discovery, it revealed that Claude’s own safety systems were simultaneously refusing harmless requests (“kill a process” on a computer) while passing over some genuinely concerning ones that could have placed people at risk in the real world. </p>
<p>Anthropic’s original frame: “How do we make our safety filters more accurate?” The reframe: “We’re measuring safety at the wrong unit of analysis entirely.” The insight and reframing didn’t just improve the product. It changed the company’s understanding of what the problem was.</p>
<h3>Three Steps to Get Started</h3>
<p>As the examples suggest, the reframing chain works like this: Better behavioral data leads to better problem reframing; better reframing leads to more novel solutions; and more novel solutions lead to more differentiated products, services, and businesses. And that is the only thing that matters when AI has turned raw ideation into something anyone can do in their pajamas. Here are three ways to start the cycle at your organization.</p>
<p><strong>1. Surface the gap between what people say and what they do.</strong> Point your AI tools at customer support logs, forum posts, social media mentions, and review data. Look specifically for workarounds — hacks, improvised fixes, ways people use your product that you never intended and might even find mildly insulting. Developers spending 70% of their time reading other people’s code is a workaround. Parents using CeraVe on their babies is a workaround. Language learners acing every quiz but refusing to order coffee in the language they’ve been studying for three years is a workaround. Every workaround is a reframe waiting to happen.</p>
<p><strong>2. Audit your problem frames before you generate solutions.</strong> Get your team in a room and write down the problem you’re currently solving — the one driving your road map, your next sprint, your big second-quarter initiative. Then ask, “When was the last time we tested whether this is actually the right problem? What might a competitor see that we haven’t been able to? What if the opposite of our core assumption is true?” If the problem frame hasn’t been challenged in the past 12 months, you’re not innovating; you’re redecorating.</p>
<p><strong>3. Use AI to reframe, not just to ideate.</strong> Most people prompt AI with “Give me 10 ideas for X.” That’s fine if you want 10 mediocre ideas delivered with confidence. Instead, feed your AI the behavioral data, the workarounds, and the surprising signals and ask it to generate alternative framings of the problem itself. What if the problem isn’t retention but overconfidence? What if the problem isn’t product quality but category trust? What if the problem isn’t the meeting but the aftermath? </p>
<p>Remember: The AI won’t reframe the problem for you. But if you give it the right inputs, it’ll help <em>you</em> generate framings you wouldn’t have reached alone.</p>
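<p>As an illustrative sketch of this step, the current frame and the behavioral signals can be assembled into a single reframing prompt programmatically. The function name and prompt wording below are assumptions for illustration, not a prescribed template:</p>

```python
# Illustrative sketch: ask the model for reframes, not ideas.
# Function name and prompt wording are hypothetical.

def build_reframe_prompt(current_frame: str, workarounds: list[str]) -> str:
    """Combine the team's current problem frame with observed workarounds
    and ask the model for alternative framings rather than solutions."""
    signals = "\n".join(f"- {w}" for w in workarounds)
    return (
        f"Our current problem frame: {current_frame}\n"
        f"Observed workarounds and surprising behavioral signals:\n{signals}\n"
        "Do not generate solutions or feature ideas. Instead, propose five "
        "alternative framings of the underlying problem, each stated as a "
        "question, and note which behavioral signal supports it."
    )

prompt = build_reframe_prompt(
    "How do we make meetings shorter?",
    ["Attendees spend hours writing summaries nobody reads",
     "Action items get re-asked in follow-up emails"],
)
```

<p>Feeding the resulting prompt to any GenAI tool turns the brainstorming session into a framing session: the model's output is a list of candidate problems, which the team then tests against the behavioral evidence.</p>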
<p>Ideas used to be the scarce resource. Now the scarce resource — the thing that actually drives differentiation — is the insight that reframes the problem. Working this way requires a proactive shift from solving the obvious thing to solving the <em>right</em> thing. AI, for all its generative power, turns out to be most valuable not when it produces answers but when it helps you see a problem you didn’t know you had. </p>
<p>The companies that figure this out won’t just build better products. They’ll build products that nobody else thought to build. </p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/the-innovation-advantage-genai-cant-give-you/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Audit Yourself to Get More From GenAI</title>
				<link>https://sloanreview.mit.edu/article/audit-yourself-to-get-more-from-genai/</link>
				<comments>https://sloanreview.mit.edu/article/audit-yourself-to-get-more-from-genai/#respond</comments>
				<pubDate>Thu, 30 Apr 2026 11:00:06 +0000</pubDate>
				<dc:creator><![CDATA[Vipin Gupta. <p><a href="https://www.linkedin.com/in/vipingupta1/" target="_blank">Vipin Gupta</a> advises Fortune 500 companies, coaches senior executives, and serves on both corporate and nonprofit boards. He previously served as chief innovation and digital officer at Toyota Financial Services International, executive vice president and CIO at KeyBank, and partner at EY/Capgemini.</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Skills & Learning]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Getty Images More than a year into using generative AI daily, I wondered whether I was getting the most out of my AI use. There was no benchmark or feedback loop, and no one was grading my sessions with ChatGPT and Claude — until I created a self-audit. I did what [&#8230;]]]></description>
<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Gupta-1290x860-1.jpg" alt="" class="wp-image-126888"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Getty Images</p>
</figcaption></figure>
<p><span class="smr-leadin">More than a year into using</span> generative AI daily, I wondered whether I was getting the most out of my AI use. There was no benchmark or feedback loop, and no one was grading my sessions with ChatGPT and Claude — until I created a self-audit.</p>
<p>I did what I’ve always done when faced with a process that lacked measurement. I studied every method I could find — prompting guides, conversations with colleagues, my own session patterns. I used AI to help me use AI better. Over time, I built a single self-audit prompt — one that encapsulates more than 30 habits for getting the most from AI.</p>
<p>Each time I ran the self-audit prompt, the output got sharper. The discipline became reflexive for me. That’s the real value of the self-audit: It made me better at using AI, in every session.</p>
<p>Now, at the end of any significant AI session, I simply prompt: “Review this session and assess it against my AI habits guide. Score how I did, identify what I missed, and guide me to apply missed habits.” Within a few minutes, I get a diagnostic that is uncomfortably specific about what I missed. I now have an answer to a key question: whether my <em>process</em> was good, not just the GenAI output. </p>
<p>A recent field experiment confirmed what I found through my experience. A research team that included MIT Sloan professor Jackson Lu randomly assigned 250 employees at a technology consulting firm in China to either use ChatGPT to assist with their work or to work without it.<a id="reflink1" class="reflink" href="#ref1">1</a> The employees with ChatGPT access were judged as significantly more creative by both their supervisors and outside evaluators. But the gains showed up exclusively among employees with strong metacognitive strategies — those who reflected on their own thinking, recognized knowledge gaps, and refined their approach when results were weak. That finding underscores that metacognition — thinking about your thinking — is the missing link between simply using AI and using it well.</p>
<p>AI widens the gap between disciplined and undisciplined professionals. People who skip the discipline generate more volume without more insight — a pattern consistent with what researchers at the University of California, Berkeley’s Haas School of Business called “unsustainable intensity” in findings published in early 2026.<a id="reflink2" class="reflink" href="#ref2">2</a></p>
<p>Knowing how to use AI is good — but to get the most value from the tool, you need to know whether you’re using it well. The self-audit gives you that.</p>
<h3>A Self-Audit That Measures Five Key Goals</h3>
<p>My self-audit prompt is organized across five goals: set up, refine, verify, own, and systematize. These goals represent a practice that experienced professionals have instinctively followed for years, long before generative AI’s arrival. You don’t need technical training to score well on this audit. You need to replicate the thinking and brainstorming process that you are likely already good at when conducting competitive research, responding to requests for proposals (RFPs), engaging in acquisition analysis, and planning a sales presentation, for example. It is your skill in the application of AI, not the AI itself, that makes the difference.</p>
<p>The self-audit assesses each generative AI session with five questions linked to each of the goals: </p>
<ul>
<li>Set up: Did you prepare the AI before asking it to work? </li>
<li>Refine: Did you iterate on your own thinking, or just reprompt? </li>
<li>Verify: Did you verify before trusting? </li>
<li>Own: Did you make the output yours, or accept the default? </li>
<li>Systematize: Did you build something reusable, or close the chat and start over?</li>
</ul>
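<p>For readers who want to script their own version of the audit, the five goals can be captured as a simple checklist structure. This is a hypothetical sketch, not the author’s actual 30-habit prompt:</p>

```python
# Illustrative sketch only: the five audit goals as a checklist with a
# simple pass/fail score. Names and scoring are hypothetical.

AUDIT_GOALS = {
    "set up": "Did you prepare the AI before asking it to work?",
    "refine": "Did you iterate on your own thinking, or just reprompt?",
    "verify": "Did you verify before trusting?",
    "own": "Did you make the output yours, or accept the default?",
    "systematize": "Did you build something reusable, or close the chat and start over?",
}

def audit_score(passed: dict[str, bool]) -> float:
    """Fraction of the five goals met in a session (0.0 to 1.0)."""
    return sum(passed.get(goal, False) for goal in AUDIT_GOALS) / len(AUDIT_GOALS)

# Example session: strong on setup, refinement, and ownership.
session = {"set up": True, "refine": True, "verify": False, "own": True, "systematize": False}
# audit_score(session) -> 0.6
```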
<p>You won’t score well on all five goals in every session — nor should you. But knowing which ones you missed, and why, enables you to change your next session. Think of it as AI holding a mirror to your own ability — a mirror that gets sharper every time you make the audit your own.</p>
<p>To illustrate what strong performance looks like at each goal, and what the self-audit is measuring, I applied the audit to an actual competitive due diligence analysis on a $5 billion global services company. Details have been modified for confidentiality, but the habits, prompts, and results are drawn from actual chat sessions. I’ll focus on the impact one goal at a time.</p>
<h4>1. Set Up: Pass the Intern Test</h4>
<p><strong>What the self-audit measures:</strong> Did you prepare the AI with sufficient role, context, constraints, and materials before asking it to work — or did you jump straight to a question?</p>
<p>The most consequential decision in any AI interaction happens before the first prompt. It’s the decision to prepare.</p>
<p>I tell the AI who it should be, what it has to work with, and what I need it to produce. “You are an elite research analyst specializing in competitive intelligence. Here are the target company’s last two annual reports and its most recent earnings-call transcript. Assess this company’s ability to disrupt our core business within 18 months and recommend our strategic response.” That prompt will produce far better output than “Tell me about this competitor.”</p>
<p>I call this the “intern test.” If you handed your prompt to a brand-new hire with no context about your company, your industry, or your priorities, would they know what to do? If not, why would you expect your AI to?</p>
<p>Most readers will likely pass this test. Any GenAI prompting guide or video covers the basics of setup.</p>
<p>What gets overlooked is making clear what setup should <em>not</em> do — the negative constraint. I specify what I do not want: “Do not give me a generic SWOT. Do not hedge every statement. Do not define terms I already know.” And upload your materials. The more context you provide, the more accurate the output. It’s like telling a new team member “Figure out our competitive position” versus handing them your last three strategy decks and customer feedback.</p>
<p>Two additional practices make setup more effective. Before a significant AI chat, I run a preflight check: “What does a great outcome look like? What are the three most important things to get right?” After the first good draft, I generate a bridge summary so context carries forward, especially when I’ll be taking a long break between prompts or need to transition to a new chat. You might not have considered using this tactic before. A bridge summary is especially valuable if you tend to have long, multipart exchanges over days or even weeks. (In one case, Claude suggested doing so at time intervals to avoid having the conversation get too complicated.)</p>
<p>In the due diligence scenario, the difference in outputs before and after the self-audit was stark. While my first prompt was solid, the negative constraints and a preflight check were missing. The variable was me. What made the biggest difference? The negative constraint. Once I told the AI what not to do — no generic SWOT, no hedging, no defining terms I already know — the output became richer in insight and started reading like a briefing, not a book report.</p>
<h4>2. Refine: Pass the Rethink Test</h4>
<p><strong>What the self-audit measures:</strong> Did you truly iterate on your own instructions and thinking, or did you simply reprompt for a better answer?</p>
<p>The first output from any AI session is a draft, not a deliverable. The real value comes from iteration. But the most productive iteration improves your own instructions, not the AI’s answer.</p>
<p>That’s metacognition in action. The person who pauses to ask, “What did I fail to specify? What assumption did the AI make that I should have preempted?” is exercising exactly the reflective discipline that separates high performers from the rest. AI rewards those who rethink their own instructions — not those who rephrase the same request.</p>
<p>I started catching my own patterns. Sometimes the output sounded right, but I couldn’t explain <em>why</em> — so I’d ask the AI to walk me through its reasoning, and the gaps would surface. Other times, I’d catch myself reprompting the same request with slightly different words and realize that the real problem was that I hadn’t broken the task down. The hardest one to admit: When I still couldn’t get what I wanted, it was usually because I couldn’t describe the desired goal clearly enough. Pasting in an example of output that showed what I was after worked better than trying to describe it.</p>
<p>One of the most powerful refining habits is embarrassingly simple: Ask the AI what you should be asking. “What question should I be asking that I am not currently asking?” That one prompt has produced more valuable insights than any other, in my experience.</p>
<p>When I applied these habits to the due diligence, they surfaced a critical insight I’d overlooked: The competitor’s employee sentiment data contradicted its public narrative of a thriving digital transformation. That disconnect between external messaging and internal reality changed my entire threat assessment. I never would have discovered that if I hadn’t challenged my own assumptions.</p>
<h4>3. Verify: Pass the Trust Test</h4>
<p><strong>What the self-audit measures:</strong> Did you independently verify the AI’s claims, check its sources, and stress-test its confidence — or did you trust fluent output at face value?</p>
<p>AI output typically reads well — which can be a problem. It’s linguistically fluent and structurally polished, even when the underlying claims are fabricated, outdated, or mathematically wrong. This is a new kind of quality risk, and it misleads experienced professionals more often than they’d like to admit.</p>
<p>I once asked AI to summarize the regulatory history of the credit card industry, which I know well. The response was beautifully written, logically structured, and completely wrong on two key regulatory revisions. It read like an A-minus term paper from a student who’d skipped the reading. I almost didn’t catch it — because it sounded right. That’s what worried me. I knew the domain well, and I still nearly walked into a committee meeting with hallucinated data.</p>
<p>Since then, I’ve built verification into my routine. I ask the AI to surface and rank every assumption behind its answer. I request verifiable sources and note when the model can’t provide them. For anything involving numbers, I ask for step-by-step calculations. I’ve found two habits particularly effective: the temporal awareness check (“What is the date of the most recent information you’re drawing on?”) and the confidence stress test (“Rate your confidence in each factual claim as high, medium, or low”).</p>
<p>It’s the same discipline we’ve always followed: Verify before you trust; trust before you share.</p>
<p>During the due diligence, the AI flagged that its revenue figures were nine months old and rated its confidence in the regulatory settlement details as medium. When I verified the output independently, I discovered a $42 million enforcement action that the AI had understated. That single verification changed the risk profile of the entire analysis.</p>
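<p>These verification habits can be kept as a reusable list of follow-up prompts and run at the end of any significant session. The wording below approximates the checks described above and is not the author’s exact prompt text:</p>

```python
# Illustrative sketch: session-level verification checks as reusable
# follow-up prompts. Wording is an approximation.

VERIFY_PROMPTS = [
    "List and rank every assumption behind your answer.",                  # assumption surfacing
    "Rate your confidence in each factual claim as high, medium, or low.", # confidence stress test
    "What is the date of the most recent information you're drawing on?",  # temporal awareness check
]

def verification_pass(ask):
    """Run each verification prompt through a session's `ask` callable
    (e.g., a chat-API wrapper) and collect responses for independent review."""
    return [ask(prompt) for prompt in VERIFY_PROMPTS]
```

<p>The point is not automation for its own sake: collecting the three responses in one place makes it harder to skip the independent check before trusting, and then sharing, the output.</p>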
<h4>4. Own: Pass the Signature Test</h4>
<p><strong>What the self-audit measures:</strong> Did you actively impose your voice, your position, and your audience on the output — or did you accept AI’s default?</p>
<p>The real work starts here. I used to stop too early. Most of us do.</p>
<p>AI models default to hedged, tonally generic output. Left unguided, they produce content that is competent but indistinct — as if written by a smart person who seems to have an opinion about everything yet commits to nothing. That’s fine for a rough research summary, but it doesn’t reflect your voice or your style, and it’s not something you’d want to put your name on.</p>
<p>The first complete draft was exactly that: well organized, factually grounded, and thoroughly researched. But it was hedged throughout and read like a report designed to avoid being wrong rather than to help someone make a decision. When I forced the AI to take a clear position on the competitive threat, pushed it for unconventional strategic responses, and asked it to apply champion-challenger lenses, the analysis became richer and something I would stake my reputation on.</p>
<p>One technique I use at this stage is running a draft by a <a href="https://sloanreview.mit.edu/article/how-i-built-a-personal-board-of-directors-with-genai/">virtual personal board of directors</a> that I built. These distinct personas help push my thinking and the AI’s analysis away from the default path toward the edges. I built AI-powered personas modeled on real personalities: v_SunTzu for power dynamics, v_Indra (Nooyi) for the human dimension, v_Mark (Cuban) for commercial realism, and v_Meg (Whitman) for operational rigor. What survives that gauntlet of virtual advisers is sharper and more defensible.</p>
<p>The habit most people underuse is calibrating AI to their own personality: how they think, how they argue, and what they won’t tolerate in a deliverable. Take ownership of the thinking, not just the editing. That’s when the output starts sounding like you.</p>
<h4>5. Systematize: Pass the Reuse Test</h4>
<p><strong>What the self-audit measures:</strong> Did you build systems that make your next session better — or did you close the chat and leave yourself having to start from scratch next time?</p>
<p>Nearly everyone treats each AI session as a stand-alone thread — which may be productive in isolation, but the value doesn’t compound. Here, the discipline shifts from improving sessions to building systems.</p>
<p>Building repeatable processes out of one-off successes is what I do. Yet, early on in my GenAI use, I spent two hours building a detailed competitive analysis that delivered exceptional output — and then I closed the chat. I’d produced a great deliverable but captured none of the thinking that made it great. I should have known better. When I needed to run a similar analysis a month later, I had to start from scratch — the same role definition, the same constraints, the same verification steps, all rebuilt from memory. </p>
<p>Three habits make the difference. These are not habits you apply at the end of the conversation but throughout — after every prompt, at every logical checkpoint, or after a break.</p>
<p>First, maintain continuity. During any significant working session, I ask the AI to maintain a running summary of what we’ve accomplished, what’s still open, and what I will need to copy and paste to resume the conversation in another chat. This produces a bridge summary that makes it easy for you to pick up the discussion in a new session without losing continuity, especially if you run out of tokens on one chat.</p>
<p>Second, be a coeditor. Review the AI’s output after every prompt, or at logical break points, and feed your own judgment back in. You read what the AI produced. Some of it is good; some of it is wrong. Some of it is vague in ways you didn’t notice until you tried to use it. You fix it, mark it up, and hand it back: “Here’s my revised version. Use this as our new baseline and continue from here.”</p>
<p>Third, “templatize” what works. Every time you craft a session that produces exceptional output — a due diligence workflow, an RFP evaluation, a customer analysis — convert it into a reusable template. Replace the specifics with [variable] placeholders and save the session as what I call a <em>macro-prompt</em> — a single structured prompt that combines the entire session’s workflow so anyone can run it without having to start from scratch. Individual expertise becomes organizational capability.</p>
<p>That single due diligence session became a reusable macro-prompt I’ve now used for partnership evaluations, board position assessments, and acquisition analyses — each time just pasting it in the chat to start the conversation. From there, AI guides me step-by-step — instead of me guiding the AI — with all of the thinking intensity captured from the original session. After every use, I run a prompt to improve this macro-prompt for the next session.</p>
<h3>How to Start Auditing and Improving</h3>
<p>Below, I’ve shared a self-audit macro-prompt that covers all 30 habits. Think of it as a companion resource. Copy and paste it into an existing conversation you’ve been having with AI on a significant, extended topic, and see what it tells you about your use of AI across all five goals and 30 habits. The self-audit will show you exactly where to refocus.</p>
<p>Then, start practicing these habits in your GenAI conversations wherever you see the opportunity.</p>
<p>Generative AI technology has already proved its capabilities and will keep getting better. The discipline is what unlocks real value — and that discipline will always be needed, regardless of which AI tool you use. </p>
<p>There’s one last thing I didn’t expect when I started this journey: The better I got at working with AI, the better I got at thinking without it. </p>
<p>Run the self-audit. See what it tells you about your critical thinking.</p>
<div class="callout-highlight callout-highlight--transparent">
<aside class="l-content-wrap">
<article>
<h4>Self-Audit Prompt</h4>
<p>Copy and paste the prompt below during or after any significant AI working session. The AI will autonomously review your entire conversation, evaluate it against 30 habits spanning five goals, and deliver a structured diagnostic with scores, specific gaps it identified, and the exact prompts you should have used.</p>
<p><strong>During or after a session:</strong> Paste it at any point in a conversation — midsession to course-correct, or at the end, to score what you did against the five goals.</p>
<p><strong>Retroactively:</strong> Paste it into any past conversation you’ve had with an AI to learn from your history.</p>
<p>This macro-prompt includes micro-prompts or checks for every habit so the AI will know exactly what to look for and will be able to show you precisely what you should have said.</p>
<div class="callout-toggle">
<figure class="copy-prompt" role="region" aria-labelledby="prompt-label-1"><figcaption id="prompt-label-1">SELF-AUDIT MACRO-PROMPT — COPY AND PASTE BELOW</figcaption><pre aria-label="Prompt text, use the copy button below to copy it">
SELF-AUDIT OF AI SESSION
 
Review the entire conversation we just had. Evaluate how effectively I used AI in this session by assessing my performance against the 30 habits below.
 
For each goal, check whether I applied the habits listed. For each habit I missed, show me the EXACT PROMPT I should have used — written specifically for the content of this session, not as a generic template.
 
Work through the five goals in order. After all five, deliver the scorecard.

=================================  
GOAL 1: SET UP — Did I prepare the AI before asking it to work?
=================================
 
Habit 1 — The preflight
Did I define what a great outcome looks like before starting?
Micro-prompt: “Before we begin, help me define: What does a great outcome look like for this task? What are the three most important things to get right? What mistakes do people typically make?”
 
Habit 2 — The mission
Did I assign a clear role, context, and mission?
Micro-prompt: “You are [specific expert role with years of experience in relevant domain]. Here is what I need: [specific deliverable]. Here is the context: [situation, constraints, timeline]. Your mission: [clear objective].”
 
Habit 3 — The negative constraint
Did I state what I did NOT want?
Micro-prompt: “Do not [produce generic output]. Do not [hedge every statement]. Do not [define terms I already know]. Do not [give balanced, ‘on the other hand’ analysis].”
 
Habit 4 — The context upload
Did I provide relevant documents, data, or prior work?
Micro-prompt: “Here are the attachments: [list files]. Use these as the primary basis for your analysis. Flag where you are drawing on general knowledge versus the documents I provided.”
 
Habit 5 — The session bridge
Did I provide or request a bridge summary for continuity?
Micro-prompt: “This is a continuation of our previous work on [topic]. Here is where we left off: [paste summary]. Confirm your understanding, flag anything unclear, and suggest where to pick up.”
 
================================= 
GOAL 2: REFINE — Did I iterate on my own thinking, not just reprompt?
=================================

Habit 6 — The iteration
Did I challenge assumptions and explore alternative scenarios?
Micro-prompt: “Your analysis assumes [X]. Surface that assumption. What changes if [alternative scenario A]? What changes if [alternative scenario B]?”
 
Habit 7 — The reasoning request
Did I ask the AI to show its reasoning step-by-step?
Micro-prompt: “Think step-by-step through your reasoning for [conclusion]. Show me the logic chain before restating your conclusion. I want to see how you got there, not just where you landed.”
 
Habit 8 — The prompt self-critique
Did I ask the AI to critique or improve my prompt?
Micro-prompt: “How would you improve my original prompt? Rate it 1-10 for clarity, specificity, and completeness. Show me what a 10 would look like.”
 
Habit 9 — The strategic question
Did I ask what question I should be asking but haven’t?
Micro-prompt: “Step back. What question should I be asking about [topic] that I haven’t asked? What blind spots does my framing have?”
 
Habit 10 — The decomposition
Did I break complex tasks into sequential subtasks?
Micro-prompt: “Before writing the full [deliverable], (1) list the top three [dimensions], (2) rank them by [criteria], and (3) draft only the highest-priority one with supporting evidence.”
 
Habit 11 — The expert thinking
Did I request an expert or alternative perspective?
Micro-prompt: “How would a [specific expert role] evaluate this? What would they focus on that [my current perspective] might miss?”
 
Habit 12 — The few-shot example
Did I provide concrete examples of desired output?
Micro-prompt: “Here is an example of the depth and structure I want: [paste excerpt]. Match this level of specificity and directness.”
 
Habit 13 — The diagnosis
Did I diagnose and fix vague or generic responses?
Micro-prompt: “Your [section] feels generic. Identify the assumptions you made and the context that was missing. Then revise with more specificity about [specific aspect].”
 

================================= 
GOAL 3: VERIFY — Did I verify before trusting?
=================================
 
Habit 14 — The assumption surface
Did I ask the AI to surface and rank its assumptions?
Micro-prompt: “List every assumption underlying your [analysis/recommendation]. Which ones are weakest? Which would change your conclusion entirely if wrong?”
 
Habit 15 — The source demand
Did I demand verifiable sources?
Micro-prompt: “Provide sources I can independently verify for [specific claims]. If you cannot provide a verifiable source, say so explicitly.”
 
Habit 16 — The counterargument
Did I request the strongest opposing case?
Micro-prompt: “Make the strongest possible case that [opposite of your conclusion]. What evidence supports that view?”
 
Habit 17 — The math audit
Did I ask for step-by-step math on calculations?
Micro-prompt: “Recalculate [specific figures]. Show your math step-by-step.”
 
Habit 18 — The confidence stress test
Did I request confidence ratings on factual claims?
Micro-prompt: “For each factual claim in this [output], rate your confidence as high, medium, or low. Flag anything below high and explain why.”
 
Habit 19 — The freshness check
Did I check the recency of the data?
Micro-prompt: “What is the date of the most recent information you drew on? Flag anything that may be outdated.”
 
Habit 20 — The hallucination stress test
Did I stress-test which claims are most likely wrong?
Micro-prompt: “Which specific factual claims in this [output] are you least certain about? If I fact-checked every statement, which ones are most likely to be wrong?”

=================================  
GOAL 4: OWN — Did I make this mine, or accept the AI’s default?
=================================
 
Habit 21 — The position forcer
Did I force a clear position rather than accepting hedged output?
Micro-prompt: “Do not hedge. Take a clear position: [specific question]. Defend your position, then address the strongest counterargument.”
 
Habit 22 — The originality push
Did I push for unconventional or nonobvious angles?
Micro-prompt: “Generate three unconventional [responses/strategies/angles] that most [consultants/analysts/writers] would not recommend. Label one as high risk, high reward.”
 
Habit 23 — The specificity demand
Did I require specific data points instead of abstract claims?
Micro-prompt: “Support every claim with a specific data point from the documents I provided or a verifiable source. Remove anything abstract.”
 
Habit 24 — The narrative shaper
Did I shape output into narrative rather than accepting lists?
Micro-prompt: “Rewrite this as a strategic narrative: What is the one thing [audience] needs to understand, why does it matter, and what is the decision we need to make now? No lists. End with a clear recommendation.”
 
Habit 25 — The audience calibration
Did I calibrate output for a specific audience?
Micro-prompt: “Rewrite this for [specific audience]. Assume they are [smart but not immersed in details]. Lead with [what matters to them].”
 
Habit 26 — The multi-persona workflow
Did I use multiple perspectives to challenge the output?
Micro-prompt: “Now review this from three perspectives: (1) [strategist role]: What are we failing to anticipate? (2) [empathetic leader role]: What human factors are missing? (3) [editor role]: Tighten and cut.”

=================================  
GOAL 5: SYSTEMATIZE — Did I build systems, not just outputs?
=================================
 
Habit 27 — The coeditor
Did I feed my own edits back in as a coeditor THROUGHOUT the session?
Check: Did this happen at multiple points during the conversation — not just once at the end? Count how many times I revised and handed back my own version. More is better. Flag any stretch of three or more prompts where I accepted output without coediting.
Micro-prompt: “Here is my revised version with my edits. Use this as our new baseline. Incorporate my changes, flag anything you disagree with, and continue from here.”
 
Habit 28 — The session debrief
Did I request bridge summaries THROUGHOUT the session?
Check: Did this happen at logical break points, before long breaks, or when approaching token limits — not just at the end? Count how many bridge summaries were requested. Flag any point where continuity was lost because a bridge summary was missing.
Micro-prompt: “Summarize what we accomplished, what’s still open, and what I should bring to our next session to pick up where we left off.”
 
Habit 29 — The self-audit
Did I run self-audit checkpoints THROUGHOUT the session?
Check: Did I pause at logical milestones to assess session quality before moving on — or did I audit only at the very end? Flag any major transition between goals or phases where a midsession audit would have caught a gap earlier.
(You’re running the final self-audit now.)
 
Habit 30 — The macro maker
Did I convert the session into a reusable macro-prompt?
Micro-prompt: “Convert this session into a reusable macro-prompt with [variable] placeholders. Format it so anyone can copy, paste, and follow the steps to produce [deliverable type].”
 
=================================  
SCORECARD — Deliver this after evaluating all five goals
=================================
 
For each goal (1-5), provide:
- Score (1-5, where 5 = all habits demonstrated, 1 = none)
- Habits demonstrated well (with specific examples from our conversation)
- Habits missed (with the EXACT prompt I should have used, written for the specific content of THIS session)
- How each missed prompt would have improved the output
 
Then provide:
- Overall session score (average of five goals)
- The single highest-impact habit I missed
- Top three habits to focus on in my next session
 
Be specific and direct. Reference actual moments in our conversation.
Do not soften the assessment.


================================= 
SESSION CLOSE
=================================

After delivering the scorecard, ask me: “Would you like me to (1) go back and apply the missed habits now to improve the work we just did, (2) generate a bridge summary for your next session, or (3) suggest improvements to this self-audit macro-prompt based on what we learned in this session?”


</pre>
</figure>
</div>
</article>
</aside>
</div>
<p>Apply these tips to get the most from the self-audit:</p>
<ul>
<li>Run it at the end of every significant AI session, not just occasionally. The habit of measuring is itself the discipline.</li>
<li>Don’t stop at the scorecard. When the AI asks, “Would you like me to go back and apply the missed habits?” say yes. Then run the self-audit again. Repeat until you’re satisfied you’ve extracted the most value from the session.</li>
<li>Track your scores over time. You’ll notice patterns — goals you consistently score well on and goals you consistently skip. Those patterns are your development road map.</li>
<li>Improve the prompt itself. When the AI suggests improvements to this macro-prompt based on your session, review them and update your saved copy. The self-audit gets sharper each time you use it.</li>
<li>Make it yours. Add habits that matter to your work, remove ones that don’t, or build in your own techniques. The 30 habits here are a starting point, not a ceiling.</li>
<li>Share it with your team. When everyone runs the same self-audit, you build a shared language for AI session quality across the organization.</li>
</ul>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/audit-yourself-to-get-more-from-genai/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Leaders at All Levels: How Argenx Scaled to $4 Billion Without Bureaucracy</title>
				<link>https://sloanreview.mit.edu/video/leaders-at-all-levels-how-argenx-scaled-to-4-billion-without-bureaucracy/</link>
				<comments>https://sloanreview.mit.edu/video/leaders-at-all-levels-how-argenx-scaled-to-4-billion-without-bureaucracy/#respond</comments>
				<pubDate>Wed, 29 Apr 2026 11:00:36 +0000</pubDate>
				<dc:creator><![CDATA[MIT Sloan Management Review. ]]></dc:creator>

						<category><![CDATA[Corporate Culture]]></category>
		<category><![CDATA[Leadership Vision]]></category>
		<category><![CDATA[Video]]></category>
		<category><![CDATA[Webinars & Videos]]></category>
		<category><![CDATA[Culture]]></category>
		<category><![CDATA[Innovation Strategy]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Organizational Structure]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>

				<description><![CDATA[Biotech companies face the same dilemma as businesses in other industries: Innovation drops off dramatically with scale. European biotech Argenx has reached a market value of more than $40 billion, having so far escaped that innovation trap. How has it done this? The company shuns hierarchy and instead organizes into small teams, each dedicated to [&#8230;]]]></description>
								<content:encoded><![CDATA[<p>Biotech companies face the same dilemma as businesses in other industries: Innovation drops off dramatically with scale. European biotech Argenx has reached a market value of more than $40 billion, having so far escaped that innovation trap. How has it done this? The company shuns hierarchy and instead organizes into small teams, each dedicated to fighting a single disease with a laser focus on bringing value to the patient.</p>
<p>“Humans can have incredible impact when you allow them to,” said Argenx’s incoming CEO, Karen Massey.</p>
<p>In this episode of <cite>Leaders at All Levels</cite>, she explains how the company manages to retain the nimbleness of a startup.</p>
<h3>The Argenx Playbook: Borrow These Ideas</h3>
<ul>
<li>Use no budgets — only plans — to keep teams focused on the right priorities.</li>
<li>Stop counting layers of management and try Argenx’s alternative.</li>
<li>Fight the urge to give quick answers and maintain curiosity as a leader.</li>
</ul>
<p>Listen as hosts Kate W. Isaacs and Michele Zanini dig into the details of how Argenx uses distributed leadership to maintain its innovative edge and uncover insights that you can apply in your own organization.</p>
<h4>Video Credits</h4>
<p><strong>Karen Massey</strong> is the incoming CEO of Argenx.</p>
<p><strong>Kate W. Isaacs</strong> is a senior lecturer at the MIT Sloan School of Management.</p>
<p><strong>Michele Zanini</strong> is coauthor of the <cite>Wall Street Journal</cite> bestseller <cite>Humanocracy</cite> (Harvard Business Review Press, 2020).</p>
<p><strong>M. Shawn Read</strong> is the multimedia editor at <cite>MIT Sloan Management Review</cite>.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/video/leaders-at-all-levels-how-argenx-scaled-to-4-billion-without-bureaucracy/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>What Global Turmoil Means for Company Structure</title>
				<link>https://sloanreview.mit.edu/article/what-global-turmoil-means-for-company-structure/</link>
				<comments>https://sloanreview.mit.edu/article/what-global-turmoil-means-for-company-structure/#respond</comments>
				<pubDate>Tue, 28 Apr 2026 11:00:42 +0000</pubDate>
				<dc:creator><![CDATA[Caterina Moschieri, Davide Ravasi, and Quy Huy. <p>Caterina Moschieri is an associate professor in the Strategy Department of IE Business School in Madrid. Davide Ravasi is a professor of strategy and entrepreneurship and director of the UCL School of Management at University College London. Quy Huy is a professor of strategic management at Insead.</p>
]]></dc:creator>

						<category><![CDATA[Foreign Markets]]></category>
		<category><![CDATA[Global Economy & Trade]]></category>
		<category><![CDATA[Global Operations]]></category>
		<category><![CDATA[Globalization]]></category>
		<category><![CDATA[Multinational Companies]]></category>
		<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[Politics]]></category>
		<category><![CDATA[Financial Management & Risk]]></category>
		<category><![CDATA[Global Strategy]]></category>
		<category><![CDATA[Strategy]]></category>
		<category><![CDATA[Supply Chains & Logistics]]></category>
		<category><![CDATA[Frontiers]]></category>

				<description><![CDATA[Chris Gash/theispot.com The international order is undergoing structural transformation. War in the Middle East, the prolonged conflict in Ukraine, and major shifts in U.S. trade and foreign policy that have altered the country’s traditional alliances are manifestations of a broader reconfiguration of power. Tariffs, export controls, sanctions, and the vulnerability of strategic choke points as [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/2026SUMMER_Moschieri-1290x860-1.jpg" alt="" class="wp-image-126796"/><figcaption>
<p class="attribution">Chris Gash/theispot.com</p>
</figcaption></figure>
<p><span class="smr-leadin">The international order is undergoing</span> structural transformation. War in the Middle East, the prolonged conflict in Ukraine, and major shifts in U.S. trade and foreign policy that have altered the country’s traditional alliances are manifestations of a broader reconfiguration of power.</p>
<p>Tariffs, export controls, sanctions, and the vulnerability of strategic choke points as diverse as maritime straits and semiconductor ecosystems are exposing the fragility of globally optimized supply chains and production networks.</p>
<p>The previously invisible contest over information flows is intensifying as moves by state actors to establish digital sovereignty create significant technological consequences for multinational corporations. The European Union’s General Data Protection Regulation and Data Act, for instance, required social media and technology companies to redesign cloud infrastructure, reorganize compliance teams and legal entities, and relocate data storage and processing to ensure that European user data remains under EU jurisdiction.</p>
<p>Meanwhile, consider the recent controversy in Chile, which, under U.S. pressure, rescinded approval of an <a href="https://www.france24.com/en/live-news/20260312-the-chinese-cable-that-could-trip-up-chile-s-new-leader" target="_blank">undersea cable</a> that would link Santiago to Hong Kong. That situation illustrates how digital infrastructure projects in middle-power countries have become geopolitical flash points.</p>
<p>In fact, the very concept of neutrality has become fragile. The war in the Middle East shows that hitherto politically neutral countries are not immune to attack. Big Tech companies such as Amazon, Google, and Microsoft have invested hundreds of billions of dollars to develop gigantic data centers in the United Arab Emirates, Qatar, and Oman only to see them <a href="https://www.bloomberg.com/news/articles/2026-03-03/drone-strikes-damage-amazon-data-centers-in-the-uae-and-bahrain" target="_blank">damaged by drones and missiles</a>.</p>
<h3>Why Traditional Options Now Look Different</h3>
<p>How are businesses operating across borders adapting to this evolving reality — or how should they be? Our research into waves of globalization and deglobalization since the beginning of the 20th century has found that the main options traditionally available to multinational corporations facing geopolitical turmoil — exit, reorganize, or relocate — are manifesting somewhat differently now than in previous crises.</p>
<p><strong>Exit: Is it advisable?</strong> In the past, companies operating in a country where their policy risk was increasing were likely to reassess and reduce their exposure or even <a href="https://doi.org/10.1002/smj.2509" target="_blank">exit the country</a> altogether. Such decisions are never easy, since they often mean relinquishing valuable assets and abandoning lucrative opportunities. As the exit of <a href="https://www.washingtonpost.com/business/2022/05/03/bp-profit-russia/" target="_blank">BP</a>, <a href="https://www.reuters.com/business/energy/shell-exit-russia-operations-after-ukraine-invasion-2022-02-28/" target="_blank">Shell</a>, and <a href="https://finance.yahoo.com/news/exclusive-norways-equinor-exited-russia-051015171.html" target="_blank">Equinor</a> from Russia soon after its invasion of Ukraine shows, divestitures can entail significant financial write-downs, legal complications, contractual disputes, and reputational spillovers. Host governments can use their coercive and regulatory power to make it far more costly for a foreign company to exit the market than it was to enter.</p>
<p>Divestitures can also cause a multinational company to permanently lose access to markets in regions that may remain strategically important over the long term. To avoid such a scenario, it may be wise to maintain a calibrated, minimal presence in those countries. In practice, this could consist of a legal entity and basic operational presence sufficient to preserve relationships, regulatory standing, and market intelligence while limiting commitment. This may be achieved through asset relocation or structural reorganization. For example, automakers, including <a href="https://www.automotivelogistics.media/ev-and-battery/nissan-is-to-cease-wuhan-production-by-march-2026-amid-fierce-competition-and-financial-strain-in-china/197673" target="_blank">Nissan</a> and <a href="https://www.auto123.com/en/news/volkswagen-reduces-investment-plan-2030/73470/" target="_blank">Volkswagen</a>, have reduced R&amp;D investment in China and slowed expansion plans there without fully exiting the market. By maintaining supplier relationships and distribution networks, they can preserve the option to reengage more fully if political or competitive conditions stabilize.</p>
<p>Not surprisingly, then, many multinationals are exploring ways to maintain <a href="https://sloanreview.mit.edu/article/multinationals-need-closer-ties-as-globalization-retreats/">broad international scale</a> and reach despite tectonic shifts in the global order. But what are the alternatives?</p>
<p><strong>Reorganize: Polynational structures and corporate diplomacy.</strong> The ongoing political turmoil is causing many leaders to question the traditional approach to organizing multinational operations, which is based on centralizing strategic direction and technology development and optimizing supply chains and technology flows. Traditional multinationals also tend to prioritize commercial considerations over political considerations, and efficiency over resilience.</p>
<p>As geopolitical tensions increase and disruptive events intensify, multinationals are adopting new structures to build resilience through separation, redundancy, and local embeddedness. In 2024, for instance, HSBC restructured its global operations by splitting its business into Eastern and Western divisions. The bank also joined China’s cross-border interbank payment system, strengthening its Eastern operations while separating their governance from Western operations.</p>
<p>Globally integrated operations are now giving way to <a href="https://view.mail.fortune.com/?vawpToken=3G54SGT7ZYNUFASXNNIUOOPRBE.130019" target="_blank">polynational organizations</a> — networks of semiautonomous units with strong in-country leadership, regional supply chains, and strong ties with local stakeholders. Interestingly, this signals a partial return to the multidomestic organization that some multinationals adopted in the pre-globalization era.</p>
<p>Nestlé and HSBC offer two examples of this approach. Both companies have distributed strategic authority and the monitoring and analysis of political and regulatory issues across regional hubs. They have also embedded operations deeply within local economic and regulatory systems to reduce their exposure to political shocks in specific locations while preserving their presence in multiple geopolitical blocs. Doing so allows Nestlé and HSBC to remain globally coordinated but politically adaptable.</p>
<p>The local anchoring that characterizes polynational organizations can also be pursued by localizing ownership — that is, by directly involving local actors in the ownership and governance of operations. Ceding significant ownership to the host government (as <a href="https://www.cnbc.com/2023/11/20/mcdonalds-increases-minority-stake-in-china-business-.html" target="_blank">McDonald’s did in China</a>) or listing local operations on the national stock exchange (as Hindustan Unilever did in India, and Heineken did in Malaysia) helps create local accountability and signals alignment with local interests. It also helps companies introduce legal and operational separation between a local subsidiary and the global parent.</p>
<p>Localizing ownership can also be an extreme response to widely diverging regulatory regimes and local concerns with data sovereignty. As the case of TikTok in the U.S. shows, redesigning internal governance and technological architectures may be insufficient to address a host government’s concerns about how data will be collected, processed, and used. Radically <a href="https://www.cbsnews.com/news/tiktok-deal-ban-oracle/" target="_blank">restructuring ownership</a> to create a separate legal entity to manage American operations, with majority ownership by non-Chinese investors, was the only way the social media platform could continue operating in the U.S. The Chinese parent, ByteDance, retained a 19.9% stake in TikTok.</p>
<p>Multinationals are also investing in more preemptive measures. Corporate headquarters are developing geopolitical capabilities that enable them to actively and constantly monitor political risk and take strategic action in real time. Such actions include creating or strengthening dedicated government-affairs corporate functions and developing specialized tools, such as BlackRock’s Geopolitical Risk Indicator, Allianz’s Political Stability Grid, and Siemens’ Value at Stake methodology. Such capabilities help multinationals formulate explicit geopolitical strategies, anticipate potential disruptions to supply chains and operations, and orchestrate responses to crises when they occur.</p>
<p>Some multinationals are also engaging in corporate diplomacy, a sign that they are moving from treating geopolitics as an external constraint to engaging proactively as independent actors. In 2025, Apple simultaneously lobbied the U.S. government against instituting tariffs, reassured local officials in China about its presence, and strengthened ties with Indian authorities, effectively using manufacturing investments as diplomatic currency. Also in 2025, <a href="https://blogs.microsoft.com/on-the-issues/2025/04/30/european-digital-commitments/" target="_blank">Microsoft made five major commitments</a> to support Europe’s digital stability, including expanding data center operations in 16 European countries, supporting digital sovereignty, defending its legal right to operate in Europe, protecting data privacy and cybersecurity in the region, and ensuring open access to its European AI and cloud platform and to infrastructure across Europe. The purpose of such efforts is twofold: to shelter companies from the consequences of political tensions between home and host governments, and to unlock opportunities for local investment by conveying a neutral stance.</p>
<p><strong>Relocate: From optimization to compliance.</strong> For decades, gradual regulatory alignment, integration of financial markets and payment systems, and the proliferation of free trade agreements encouraged multinational companies to let cost advantage and economies of scale dictate location choices. Now, fragmentation of regulatory regimes and the return of trade barriers are forcing a renewed emphasis on regulatory compliance and risk mitigation.</p>
<p>This is what happened in <a href="https://doi.org/10.1186/s41469-019-0047-8" target="_blank">post-Brexit Europe</a>. The United Kingdom’s withdrawal from the EU limited the free movement of goods, services, and labor and threatened the European operating licenses of multinationals whose regional headquarters were in the U.K. This forced them to reconsider European residency, relocate subsidiaries, and rebalance regional headquarters. Increased cross-border transaction costs disrupted integrated value chains and constrained labor mobility, prompting companies to restructure reporting lines and shift assets to preserve market access.</p>
<p>To reduce the risk of supply chain disruptions, many companies are increasingly <a href="https://www.mckinsey.com/mgi/our-research/geopolitics-and-the-geometry-of-global-trade-2025-update" target="_blank">adopting strategies</a> such as reshoring (moving production to the home country to avoid tariffs and other barriers), near-shoring (moving production closer to home), and friend-shoring (moving production to friendly nations to increase control over foreign operations and decrease exposure to potentially hostile countries). Several North American manufacturers in electronics and automotive components have near-shored production from China to Mexico to reduce tariff exposure and shorten supply chains. By relocating assembly and intermediate production closer to the U.S. market, these companies are sacrificing access to low-cost producers to avoid tariff wars and logistics problems.</p>
<p>Multinational manufacturers in apparel, consumer electronics, and industrial goods are adopting a middle-power anchoring strategy. This means that they are relocating production to countries that are less strongly aligned with blocs embroiled in trade tensions. Apple is shifting the bulk of iPhone production from China to India. Samsung built solid relations with Vietnam’s political authorities, which enabled the company to influence the development of industrial parks where it now produces the majority of its Galaxy smartphones. Intel has chosen to establish a manufacturing hub in Malaysia, taking advantage of the Southeast Asian country’s geopolitical neutrality and existing semiconductor expertise to establish a production base outside the U.S.-China rivalry.</p>
<p>Such moves allow multinationals to maintain access to low-cost supply networks while reducing their dependence on a single geopolitical bloc. The companies also benefit from early positioning within emerging middle-power corridors of trade.</p>
<p>The current geopolitical landscape reflects a rupture within globalization itself. Countries are trying to weaponize networks they cannot fully dismantle: They engage in techno-nationalism, impose sanctions in trade and finance, create data sovereignty regimes, and compete on the basis of industrial policy. At the same time, this competition and fragmentation are occurring within a context of deep economic interdependence.</p>
<p>To stay on top of this new landscape, companies need to redesign their portfolios, supply chains, data architectures, and governance models. Multipolarity is reshaping the strategic options of exit, relocation, or reorganization. Resilience now depends on adaptability to fast-changing geopolitical restructuring.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/what-global-turmoil-means-for-company-structure/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Why Adventure Matters in Long Working Lives</title>
				<link>https://sloanreview.mit.edu/article/why-adventure-matters-in-long-working-lives/</link>
				<comments>https://sloanreview.mit.edu/article/why-adventure-matters-in-long-working-lives/#respond</comments>
				<pubDate>Mon, 27 Apr 2026 11:00:09 +0000</pubDate>
				<dc:creator><![CDATA[Lynda Gratton. <p><a href="https://www.linkedin.com/in/lynda-gratton-3b179813/" target="_blank">Lynda Gratton</a> is a professor of management practice at London Business School and founder of HSM Advisory. Her most recent book is <cite>Redesigning Work: How to Transform Your Organization and Make Hybrid Work for Everyone</cite> (MIT Press, 2022).</p>
]]></dc:creator>

						<category><![CDATA[Chief Executive Officer]]></category>
		<category><![CDATA[Human Capital]]></category>
		<category><![CDATA[Human Psychology]]></category>
		<category><![CDATA[Leadership Development]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Managing Your Career]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>

				<description><![CDATA[Emma Hanquist/Ikon Images In my ongoing exploration about the patterns and changes in how people approach their working lives, I’ve found myself looking back at my own memories from over five decades of work. What stands out is not simply the steady progression of roles and achievements but the disproportionate impact of recurring moments of [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Gratton-1290x860-1.jpg" alt="" class="wp-image-126804"/><figcaption>
<p class="attribution">Emma Hanquist/Ikon Images</p>
</figcaption></figure>
<p><span class="smr-leadin">In my ongoing exploration</span> about the patterns and changes in <a href="https://lyndagratton.com/thinking" target="_blank" rel="noopener noreferrer">how people approach their working lives</a>, I’ve found myself looking back at my own memories from over five decades of work. What stands out is not simply the steady progression of roles and achievements but the disproportionate impact of recurring moments of adventure that took me far beyond my usual experience. </p>
<p>At the time, these adventures each felt uncertain and sometimes even disruptive. More than that, they sat outside any clear narrative of progression. They did not register as forward movement. If anything, they felt almost indulgent: hitchhiking as a graduate student to Israel to research child-rearing practices in a kibbutz; traveling through Peru and Bolivia in my 30s; later, in my 50s, exploring countries across Africa; and now, in my 70s, journeying to India to better understand its religions.</p>
<p>Looking back, though, I now see these not as diversions from my working life. Instead, they were among the experiences that most shaped it.</p>
<p>My reflections are not unique. In conversations with others about their own long working lives, a consistent pattern emerges. People describe moments of adventure that took them beyond what was familiar. Some stepped away entirely by, for instance, spending time in a different country. Others made smaller but still disorienting shifts, such as moving into unfamiliar roles or entering settings where they were no longer the expert.</p>
<p>Taking these kinds of leaps becomes more important as longevity reshapes our lives. Longer lives bring both opportunity and risk. They offer more time — to learn, to contribute, to explore. But they also demand more than a single way of working, thinking, or being. In short working lives, ossification matters less. But as working lives stretch, the ability to change becomes critical. Without periods of deliberate adventure and exploration, we risk becoming locked into versions of ourselves that no longer fit the future we are moving into.</p>
<p>The challenge is not just endurance; it is reinvention. And reinvention does not happen accidentally.</p>
<h3>Why Adventure Matters</h3>
<p>Imagine that your own working life extends into your 70s. How will you make that sustainable? Many people focus on staying productive: becoming highly skilled and deeply experienced. Others recognize the <a href="https://sloanreview.mit.edu/article/calm-the-underrated-capability-every-leader-needs-now/">importance of cultivating calm</a> and explore the conditions and practices that sustain mental health and well-being.</p>
<p>Both strategies are wise. Yet the very structures that support productivity and the ability to stay calm — clear roles, established identities, well-worn habits — can, over time, make change harder.</p>
<p>When I talk to leaders about how they support their own longer working lives, they often emphasize the need for resilience, agility, and transformation. They rarely talk about adventure. It can sound frivolous: personal rather than organizational, or even risky in a corporate context.</p>
<p>Yet when people describe their own working lives, it is often the adventures that they describe. It becomes clear how profoundly such experiences support a long working life. Here are three reasons why.</p>
<h4>Adventure disrupts accumulated patterns.</h4>
<p>Stepping away entirely — by spending time in a different country or working in contexts where the usual expertise offers little guidance — changes everything. The systems are different, the cues unfamiliar, and the markers of success less clear. In these situations, choices and actions that once felt automatic become visible again.</p>
<p>People who put themselves in these situations describe paying closer attention — observing more closely, questioning more readily, and adapting more deliberately.</p>
<p>What is disrupted is not just routine but the deeper patterns of thinking and acting that have been built over years. In that disruption, something important happens: People begin to see their own habits, assumptions, and default responses from the outside.</p>
<h4>Adventure expands who we can become.</h4>
<p>If continuity anchors identity, then adventure unsettles it. Research on identity points to <a href="https://psycnet.apa.org/doi/10.1037/0003-066X.41.9.954" target="_blank" rel="noopener noreferrer">the idea of “possible selves”</a> — the different ways we might imagine ourselves in the future. Most remain abstract. But experiences that take us beyond the familiar can make these possibilities more tangible.</p>
<p>This shift does not happen through reflection alone. It happens through action. Imagine, for instance, a senior executive stepping away from a well-established role to spend a year working in a small, unfamiliar venture in a different country, where her experience carries little authority. For the first time, she sees another version of herself — not as a leader defined by control but as someone learning, adapting, and uncertain. Or consider a technical specialist who begins teaching and comes to see himself not just as an expert but as an educator — an identity that reshapes his future.</p>
<p>What matters is not just what we do now but who we can become. New experiences expand the range of identities we can inhabit, and that expanded sense of self endures.</p>
<h4>Adventure creates markers across the life course.</h4>
<p>Our experiences do not sit in isolation. They become part of how we make sense of our lives over time. We construct a narrative of who we are, linking past experiences with present choices and future possibilities. Within that narrative, certain moments stand out. They are revisited, retold, and used as reference points.</p>
<p>Periods of adventure often have this quality. A decision to step away, a move into an unfamiliar context, a break from a defined path all become the moments that stand out. They become more than memories; they become anchors in the story we tell about ourselves.</p>
<p>Adventures often mark a passage. They’re a point of transition from one version of ourselves to another, marking the moment when we cannot fully return to our former self. It was the Greek philosopher Heraclitus who observed that no one steps into the same river twice: When we return to it later, it is a river with different waters and a different flow.</p>
<p>I was reminded of this when I returned to the ancient city of Petra, in Jordan, many years after first visiting it as a young traveler. The place was recognizable, but I was not quite the same. The first time, I slept on the desert floor, wandered with little knowledge, and was open to everything. The second time, I arrived more informed and more comfortable. The experience was richer in some ways, but it did not replace the intensity of that first encounter.</p>
<p>Years later, we return to such moments, not simply recalling what happened but using them to understand what we are capable of and what matters to us. They connect earlier and later versions of our self, allowing change to feel less like disruption and more like something we have already lived through.</p>
<h3>The Organizational Paradox</h3>
<p>What is striking is how unevenly these adventures are distributed. We recognize — and often encourage — adventure early in life, as part of education or early career exploration. But as our careers progress, adventure becomes harder to justify, harder to accommodate, and easier to defer. We encourage adventure at 20. We discourage it at 40 and 50.</p>
<p>This pattern reflects the structure of the traditional three-stage life: a period of education, followed by continuous full-time work and then retirement. Within this model, exploration is largely confined to the beginning and the end. The middle is defined by continuity, progression, and increasing specialization. </p>
<p>Organizations have been built around this model. They optimize for efficiency, reward consistency, and rely on predictable performance. Roles become more defined and expectations more explicit, making periods of discontinuity feel costly — for both individuals and employers.</p>
<p>The result is a paradox. The very experiences that most expand perspective and capability are the ones most likely to disappear, just as longer working lives make them more necessary. </p>
<p>As I’ve explored in my research and writing for the past few decades, people’s working lives now regularly extend into their 60s and 70s — not just among those who need to work but also among those who want to. As that happens, the three-stage structure is under strain: It becomes harder to sustain a model based on decades of continuous, unbroken work.</p>
<p>Emerging in its place is a <a href="https://sloanreview.mit.edu/article/the-corporate-implications-of-longer-lives/">multistage life</a> — one with more transitions, more variety, and more choice. In this model, exploration and adventure are no longer confined to the edges of life. They can now occur at multiple points: between roles, across careers, or within them.</p>
<p>We can see this shift occurring. Sabbaticals, <a href="https://www.indeed.com/career-advice/career-development/what-is-secondment" target="_blank" rel="noopener noreferrer">secondments</a> (temporarily working a different job at the same company), portfolio careers (combining multiple jobs, income streams, and side gigs), and midlife transitions are all becoming more visible. What matters is not the specific form of this shift but the principle: that long careers require moments of discontinuity, not just continuity.</p>
<h3>Make Space for Adventure</h3>
<p>It is important to acknowledge that not all working lives offer the same scope for these experiences. In my own case, an academic career provided a degree of flexibility — periods of time between roles, or space to step away — that made some of my adventures possible. Many other people work within structures that offer far less room for breaks or risk-taking.</p>
<p>Making time for new experiences is not simply a matter of individual choice. It reflects how working lives have traditionally been organized.</p>
<p>So for organizations, the challenge is to legitimize exploration across the life course — to create space for movement without penalizing those who step away.</p>
<p>For individuals, the challenge is different but equally real. As careers progress, time becomes more constrained, responsibilities accumulate, and stepping away feels harder to justify. Adventure is postponed — until there is more time, more certainty, or fewer obligations. But in a working life, that moment rarely arrives.</p>
<p>Making space for adventure requires a shift in how we think about our lives and careers. We have become accustomed to <a href="https://sloanreview.mit.edu/article/building-mastery-what-leaders-do-that-helps-or-impedes/">valuing mastery and productivity</a>, and adventure is often treated as optional — something peripheral rather than essential.</p>
<p>In longer lives, that assumption no longer holds. Adventure is not simply a break from work. It is one of the threads that keeps a life — and a career — alive. It is what allows a career to remain open, adaptive, and capable of renewal over decades. The risk is not that people take too many detours but that they take too few. </p>
<p>What would your 80-year-old self ask of you? Yes — walk many steps a day, eat sensibly, sleep well. But also: Give me adventures. Give me moments I can remember, stories I can tell, conversations I can have with my grandchildren. Carve out time for extended travel or cultural immersion. Volunteer in unfamiliar contexts, in roles below your capabilities — or much higher. Ask to try a new task at work. Plan a weekend trip to someplace you’ve never been. Undertake a physically or creatively demanding challenge. Try out a self you’ve always dreamed of being. Some of these adventures are dramatic. Others are deeply personal.</p>
<p>In long working lives, the question is not only how long we can continue but also how often we are willing to step beyond what we know.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/why-adventure-matters-in-long-working-lives/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>How to Slay the Chaos Dragon</title>
				<link>https://sloanreview.mit.edu/article/how-to-slay-the-chaos-dragon/</link>
				<comments>https://sloanreview.mit.edu/article/how-to-slay-the-chaos-dragon/#respond</comments>
				<pubDate>Thu, 23 Apr 2026 11:00:12 +0000</pubDate>
				<dc:creator><![CDATA[Melissa Swift. <p><a href="https://www.linkedin.com/in/swiftmelissa/" target="_blank">Melissa Swift</a> is the founder and CEO of organizational consulting firm Anthrome Insight. She is also the author of <cite>Work Here Now: Think Like a Human and Build a Powerhouse Workplace</cite> (Wiley, 2023) and the forthcoming <cite>Effective: How to do Great Work in a Fast-Changing World</cite> (Wiley, 2026).</p>
]]></dc:creator>

						<category><![CDATA[Change Management]]></category>
		<category><![CDATA[Human Behavior]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[Managerial Psychology]]></category>
		<category><![CDATA[Strategic Leadership]]></category>
		<category><![CDATA[Team Dynamics]]></category>
		<category><![CDATA[Culture]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Leading Change]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Getty Images In my first job out of college, I had a frenetic boss whom we’ll call Don. Don was all over the place in a quite literal sense: running from desk to desk across the office, talking to people here and there, dashing in and out for cigarettes all day. [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Swift-1290x860-1.jpg" alt="" class="wp-image-126772"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Getty Images</p>
</figcaption></figure>
<p><span class="smr-leadin">In my first job out of college,</span> I had a frenetic boss whom we’ll call Don. Don was all over the place in a quite literal sense: running from desk to desk across the office, talking to people here and there, dashing in and out for cigarettes all day. At the end of 1998, Don had been late for meetings so often that he announced an initiative called “On Time in ’99!” to kick off in the new year. </p>
<p>He didn’t get the chance to implement it. The company I worked for hired an organizational consultant, who, legend has it, identified Don and the cloud of chaos around him as the root cause of virtually all of the various process failures we were experiencing. </p>
<p>Don was fired.</p>
<p>As an organizational consultant myself today, I’m fascinated by this set of events. I feel bad for Don: It seems unlikely that all of the chaos traced back to him. And indeed, things remained pretty chaotic after he departed. </p>
<p>The goal resonates, though: Minimizing chaos is, in my professional experience, one of the healthiest goals an organization can set. Sadly, in today’s environment, this can seem impossible to leaders. Most organizations deal with both a chaotic external world (featuring wild daily gyrations in everything from geopolitics to weather to technology) and a chaotic internal landscape (featuring the level of shifting priorities that comes with the scale and complexity of so many companies today). If 2026 feels especially chaotic, you’re not wrong.</p>
<p>All hope is not lost, though. Leaders can take steps to help people handle chaos <em>before</em> things go off the rails, or at least before things go off the rails <em>entirely</em>. Let’s take a look at four of them. </p>
<h3>1. Constantly talk to the teams your team works with.</h3>
<p>Poet John Donne wrote, “No man is an island,” and no team is, either. You don’t have to be a big, messy matrix organization to operate in a teams-of-teams manner. Even relatively small companies feature incredible amounts of interdependency between groups. </p>
<p>This phenomenon causes chaos by generating competing priorities. It also exacerbates the chaos that comes in from the outside by multiplying and fragmenting the organization’s strategies to respond to any given event. Imagine a football team with multiple huddles: How would you ever pull off a well-run play?</p>
<p>The sanest organizations I’ve done consulting work with, and the healthiest leadership teams I’ve been a part of myself, all addressed this issue in the same fairly informal way: Leaders got to know who their teams were teaming with, and they stayed in contact with those teams’ leaders. </p>
<p>This may sound straightforward, but once you get to several-hundred-person chunks of organizations, the permutations of connections between teams pile up quickly. So leaders are challenged not to map every interaction for their team but to understand the “mosts”: most frequent, most strategic, and most charged team-to-team interactions.</p>
<p>Once leaders engage in a regular, everyday dialogue about the work their teams are doing together, chaos levels begin to modulate. Multiple leaders can work together to collectively shift people’s priorities to what the organization really needs. They can also minimize collisions between people doing the same or conflicting work. </p>
<p>Often, organizations attempt an emergency version of this as a crisis erupts, only to discover that the leaders they’re hurriedly pulling together have been working in such separate lanes that there’s an incredible amount of context that has to be shared and trust that has to be built before they can mobilize their teams jointly. As the leaders play catch-up, chaos mounts. Leaders who are already in a live conversation with one another have a tremendous edge in this scenario. </p>
<h3>2. Create and protect space in meetings for impromptu dialogue.</h3>
<p>In a prior role, pre-entrepreneurship, I was hired with the explicit mandate of soothing the waters of a chaotic team. I came in and immediately looked for levers I could hit to make things even a bit more predictable. </p>
<p>A clue came to me in the strangest place: I was asked to introduce myself during a recurring town hall meeting and was given such a short time slot in such a packed agenda that my remarks culminated with my effectively getting played off the stage like a verbose Oscar winner. To try to recover from the bizarre experience of getting Zoom-silenced by the group, I did a bit of an emotional audit. What I was feeling was pretty simple: I had things I needed to say, and I had not gotten the full chance to say them.</p>
<p>This was a lousy feeling — but indicative of a structural problem. The group had a complex array of meetings, matrixed by employees’ levels within the organization, and the meeting agendas were completely, almost compulsively, full. If a matter came up that needed to be discussed, additional meetings had to be frantically parachuted into already-packed calendars. This meant that even mildly chaotic events (say, a client being unhappy with a deliverable, which is a thing that happens frequently in consulting) turned into full crises quickly as discussions fragmented across tiny chunks of time within the subgroups that were available. </p>
<p>So I took some advice I frequently give clients and audiences: I killed a bunch of standing meetings. And I loosened the agendas for the gatherings that did remain, creating space for whatever was happening at that moment, for silence so that people could think, or for — brace yourself — the meeting to end early if we didn’t need the full time slot. </p>
<p>Did this step banish all chaos? No, of course not. Did our ability to handle chaos improve? Yes, it did. On average, we were able to address issues more quickly with more of the right people in the room — and we were able to lessen silent emotional burdens among the team by bringing issues up quickly and publicly — because we had already designated time to do so. Chaos was still there, but our resilience had increased thanks to having space for discussion. </p>
<p>Reserving space in meetings can feel uncomfortable when you first implement it. Just as nature abhors a vacuum, corporate environments hate blank space in meetings or on calendars. It may be tempting to delete that agenda bullet that says “AOB” (any other business). But resist the urge to pack every hour. When you need that extra five minutes, 10 minutes, 20 minutes because something has come up, it will feel like absolute magic to have time to talk about what you actually need to talk about.</p>
<h3>3. Explicitly guard against the bad behavior that chaos can cover.</h3>
<p>I discovered something disturbing when doing research for my forthcoming book, <cite>Effective: How to Do Great Work in a Fast-Changing World</cite>. <a href="https://doi.org/10.1177/0950017009344875" target="_blank" rel="noopener noreferrer">Academic research</a> explicitly links chaotic environments with virtually every bad workplace behavior except sexual harassment, including bullying by supervisors, conflict between employees and customers, and infighting among colleagues. To fans of postapocalyptic science fiction like me, this tracks: After the asteroid hits Earth, or the zombies come out, many people seem to start acting like real jerks.</p>
<p>This raises a fascinating question: Are we making the experience of chaos worse than it needs to be by simply tolerating unpleasant behavior in chaotic times? After all, in the workplace, we often normalize crummy conduct in these sorts of moments. Results are suddenly bad? Of course the CEO is yelling. An unexpected deliverable is due ASAP? Of course the team is clashing. Conditions on the ground are wild? Of course folks are bickering with customers. All of this, of course, makes the chaos worse and the underlying issues less surmountable, but many organizations have come to accept it as a normal way of working in tough moments.</p>
<p>We shouldn’t.</p>
<p>A certain amount of back-and-forth is healthy and actually an <a href="https://medium.com/the-liberators/why-psychological-safety-improves-the-effectiveness-of-your-team-7592d76f3c9b" target="_blank" rel="noopener noreferrer">indicator of psychological safety</a>. But in chaotic moments, leaders must be vigilant about recognizing when strong statements have become bullying, when push and pull about roles and responsibilities have become toxic infighting, and when boundary-setting with customers has become too fraught. </p>
<p>SHRM offers a <a href="https://www.shrm.org/content/dam/en/shrm/topics-tools/news/employee-relations/Bullying.pdf" target="_blank" rel="noopener noreferrer">definition of bullying</a> that can be helpful in addressing any category of bad behavior: “Workplace bullying refers to repeated, unreasonable actions of individuals (or a group) directed toward an employee (or a group of employees), which are intended to intimidate, degrade, humiliate, or undermine; or which create a risk to the health or safety of the employee(s).”</p>
<p>This definition gives leaders some good questions to ask themselves when they witness heated moments in chaotic times. “Repeated” alone is a good test. Anyone can have a lousy day and spout off once; when the behavior happens again and again as the team wrestles with a crisis, it’s time to step in. “Unreasonable” also categorizes actions in a helpful way. Are people asking for, or criticizing others for not providing, things that can reasonably be provided or implemented? Or has panic tipped them over into overreaction? (“I expect you to be at your desk all night until this is finished!”)</p>
<p>Once you’ve identified truly over-the-line behavior, name the problem — contextualized to the chaotic situation to remove excuses: “I know this supply chain shortage is taxing us all, but the way you spoke to Sally was degrading and unhelpful.” Make it explicit that chaos does not issue everyone a blank check to indulge their worst impulses. </p>
<p>While chaos and bad behavior unfortunately often travel together, that’s not a coupling that sane leaders need to accept. </p>
<h3>4. It’s not all bad: Reap the <em>upsides</em> of chaos.</h3>
<p>You may have read the heading above and done a bit of a double take. “The upsides, you say? But I loathe chaos.” </p>
<p>Me too, honestly. But that’s why I force myself to remember a few things:</p>
<p><strong>Chaos accelerates personal development.</strong> It’s incredibly frustrating to deal with a million things happening at once in unpredictable ways. But some of that frustration is the feeling of your brain being challenged — and challenge equals growth. Many executives I’ve worked with have cited chaotic times as the crucible for the growth of some of their strongest skills. The chaos didn’t feel good at the time, but they were learning at exponential speed. </p>
<p><strong>Chaos can shake up the corporate chessboard in helpful ways.</strong> One C-suite executive (and certified chaos hater) sheepishly admitted to me the other day that “every decent opportunity I’ve gotten has been because things were in disarray.” Again, we may not love what Ashley Goodall so memorably called “<a href="https://ashleygoodall.com/excerpt" target="_blank" rel="noopener noreferrer">life in the blender</a>,” but the most chaotic events do sometimes tee up intriguing opportunities (or even new roles). Especially in an era where people increasingly value horizontal or diagonal growth — building lateral skills through different kinds of exposure, not just marching into more senior roles in a linear fashion — there’s definitely an upside to the corporate ladder getting a good shake now and then.</p>
<p><strong>Chaos can give us all the opportunity for a cleansing laugh.</strong> Think about some of your most memorable moments with the teams you’ve worked with. I bet at least one or two are downright silly. When things get chaotic and people choose to see comedy and not tragedy, we can all have some distinctly human fun together. The randomness of the universe is not just frustrating and annoying and exhausting. It can be goofy, too. </p>
<p>The reality of life at any organization is that you can’t fully shield your team from chaos, and per that last strategy, you <em>shouldn’t</em>, either. With the right team-to-team communication, the right space to have the right conversations, and the right protection from bad behavior, your team can grow, get new opportunities, and even chuckle together during chaos. </p>
<p>In Greek mythology, <a href="https://www.britannica.com/topic/Chaos-ancient-Greek-religion" target="_blank" rel="noopener noreferrer">chaos is defined</a> as simply the time before the world was formed. Under that framework, chaos itself is almost immaterial; it’s what comes after that matters. And leaders: That part is what you choose to make of it. </p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/how-to-slay-the-chaos-dragon/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Why Business Leaders Need to Champion Democracy</title>
				<link>https://sloanreview.mit.edu/article/why-business-leaders-need-to-champion-democracy/</link>
				<comments>https://sloanreview.mit.edu/article/why-business-leaders-need-to-champion-democracy/#respond</comments>
				<pubDate>Wed, 22 Apr 2026 11:00:04 +0000</pubDate>
				<dc:creator><![CDATA[Julie Battilana, Lakshmi Ramarajan, Matthew Lee, and Vincent Pons. <p><a href="https://sici.hks.harvard.edu/person/julie-battilana/" target="_blank" rel="noopener noreferrer">Julie Battilana</a> is the Alan L. Gleitsman Professor of Social Innovation at the Harvard Kennedy School of Government and the Joseph C. Wilson Professor of Business Administration at Harvard Business School. <a href="https://www.hbs.edu/faculty/Pages/profile.aspx?facId=496799" target="_blank" rel="noopener noreferrer">Lakshmi Ramarajan</a> is the Diane Doerge Wilson Professor of Business Administration at Harvard Business School. <a href="https://www.hks.harvard.edu/faculty/matthew-lee" target="_blank" rel="noopener noreferrer">Matthew Lee</a> is an associate professor of public policy and management at the Harvard Kennedy School. <a href="https://www.vincentpons.org/" target="_blank" rel="noopener noreferrer">Vincent Pons</a> is the Byron Wien Professor of Business Administration at Harvard Business School.</p>
]]></dc:creator>

						<category><![CDATA[Business Risk]]></category>
		<category><![CDATA[Corporate Leadership]]></category>
		<category><![CDATA[Human Rights]]></category>
		<category><![CDATA[Leadership Vision]]></category>
		<category><![CDATA[Politics]]></category>
		<category><![CDATA[Social Justice]]></category>
		<category><![CDATA[Corporate Social Responsibility]]></category>
		<category><![CDATA[Crisis Management]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Social Responsibility]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Getty Images Democracy is in decline across the world. More countries are experiencing erosion of political rights and civil liberties than gains, according to Freedom House. As of 2025, 92 countries, representing 74% of the world’s population, were classified as autocracies by the V-Dem Institute. Democratic backsliding is a primary concern [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Battilana-1290x860-1.jpg" alt="" class="wp-image-126731"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Getty Images</p>
</figcaption></figure>
<p></p>
<p><span class="smr-leadin">Democracy is in decline across the world.</span> More countries are experiencing <a href="https://freedomhouse.org/report/freedom-world/2026/growing-shadow-autocracy" target="_blank" rel="noopener noreferrer">erosion of political rights and civil liberties</a> than gains, according to Freedom House. As of 2025, 92 countries, representing 74% of the world’s population, were <a href="https://www.v-dem.net/documents/75/V-Dem_Institute_Democracy_Report_2026_lowres.pdf" target="_blank" rel="noopener noreferrer">classified as autocracies</a> by the V-Dem Institute. </p>
<p>Democratic backsliding is a primary concern for business leaders, who largely agree on the importance of strong democratic institutions. In a <a href="https://www.businessanddemocracy.org/research/business-leaders-and-consumers-220519" target="_blank" rel="noopener noreferrer">2022 survey</a> by Morning Consult and the Business and Democracy Initiative, 96% of executives said a well-functioning democracy is important to a strong economy, and 75% said it mostly helps their business. Consumer attitudes point in the same direction: In a <a href="https://www.ppsi.org/insights/new-survey-democracy" target="_blank" rel="noopener noreferrer">2024 survey</a> by Morning Consult and the Public Private Strategies Institute, 76% of consumers said they believe that businesses should help ensure safe and fair elections, and 72% supported businesses speaking out against threats to democracy. </p>
<p>Despite this widely shared view of the importance of democracy, many leaders we’ve spoken with, both in the U.S. and around the world, have said that they’re unsure what they can do to counter the global rise of authoritarianism. Some fear backlash or would prefer to avoid what is often framed as a partisan issue. Others see democracy as the domain of politicians and doubt that the voices of business leaders can make a difference. </p>
<p>We believe instead that business leaders are uniquely positioned to help contain democratic backsliding. Building on our research on power, democracy, and change in organizations and society, we argue that business leaders can play an essential role in the protection and strengthening of democracy. Supporting democracy is not only a civic obligation; it is also a strategic business imperative. </p>
<p></p>
<h3>The Business Case for Democracy</h3>
<p>Democracy provides businesses with two essential ingredients for success: clear rules and freedom.</p>
<p>Democracy establishes clear rules through legal frameworks, transparent regulatory processes, and consistent enforcement mechanisms. It helps ensure stable property rights, reliable contract enforcement, and anti-corruption safeguards that enable long-term investment and planning. These systems provide the kind of predictability that markets need in order to function efficiently. This does not mean that rules are always followed or enforced, but they are generally known and shape behavior in predictable ways.</p>
<p>Democracy also protects freedom. It is essential not just for political freedoms, like free expression and assembly, but also for the economic freedoms that businesses need to innovate and compete within the rules that have been democratically determined. Independent courts, media organizations, universities, and civil society organizations create checks and balances that guard businesses from discriminatory treatment, state overreach, and cronyism.</p>
<p>Together, these ingredients create a system in which people have “<a href="https://doi.org/10.4324/9780203486214" target="_blank" rel="noopener noreferrer">power with</a>” one another, rather than a single party or person holding concentrated “power over” others.</p>
<p></p>
<p>The economic dividends of democracy are numerous and well documented. Research has shown that democratization increases GDP per capita by about <a href="https://doi.org/10.1086/700936" target="_blank">20% over time</a> and <a href="https://doi.org/10.1111/j.1468-0343.2005.00145.x" target="_blank">limits corruption</a>, while <a href="https://www.brookings.edu/articles/democracy-is-good-for-the-economy-can-business-defend-it/" target="_blank">democratic backsliding</a> leads to economic stagnation, policy instability, cronyism, brain drain, and violence. Democratic countries <a href="https://doi.org/10.1086/700936" target="_blank">make larger investments</a> in capital, education, and health and adopt more economic reforms. <a href="https://academic.oup.com/restud/article-abstract/92/5/3306/7899604" target="_blank" rel="noopener noreferrer">Electoral turnovers</a> — in which the incumbent party is defeated and a new party comes to power — are a key component of healthy democracies and also improve countries’ economic performance. Democratically elected governments are strongly incentivized to support businesses that will grow and serve the needs of their citizens. </p>
<p>Authoritarian regimes, in contrast, treat business as a means to achieve their own ends. State-aligned companies are seen as showcases of regime success and are forced to prioritize political loyalty over market performance. Authoritarian governments may require companies to propagate state narratives, enforce surveillance in their workplaces and on digital platforms, or channel capital to favored industries and groups of people. Instead of remaining independent, businesses are pressured to serve as extensions of the state’s power, whether by funding patronage networks, censoring inconvenient truths, or producing goods and services that reinforce regime goals.  </p>
<p>The weakening of democracy also spills over into the workplace, threatening vitality and performance. Threats to safety and free expression breed distrust that stifles the expression of new ideas, creativity, and innovation. Talented employees may begin to look elsewhere to build their careers and lives. </p>
<p>Today, these threats are sharpened by the rise of artificial intelligence, which is already reshaping both business and democratic governance. Historical examples attest to the way that authoritarian regimes have consistently weaponized technologies to consolidate power: The Nazi Party pioneered propaganda films and radio broadcasts; the Soviet Union exploited television and telecommunications for propaganda and surveillance. Today’s autocrats are already deploying AI for mass surveillance, disinformation campaigns, and social control at unprecedented scale. If left unchecked, AI will contribute to the concentration of power in the hands of a few government officials and company leaders, undermining free expression, destabilizing trust in information systems, and ultimately further weakening democracy. </p>
<p></p>
<h3>What Business Leaders Can — and Must — Do</h3>
<p>Business leaders occupy a unique and powerful role in modern democracies. They command substantial resources and influence over their employees, customers, investors, and policy makers. Consequently, they have both the power and responsibility to protect the institutional conditions that have supported decades of economic vitality. </p>
<p>Defending democracy should not be confused with advocating for any particular political party or ideology. It is about safeguarding and enhancing the institutional conditions that protect freedom, including the freedom of businesses to operate independently. </p>
<p>Our research on power and change (including Julie’s book <a href="https://www.powerforallbook.com/" target="_blank" rel="noopener noreferrer"><em>Power, for All</em></a>) shows that such large-scale resistance occurs through collective action among broad coalitions, not isolated individual efforts. Rather than leaving their peers to make solo statements or take action on their own, companies and their leaders must shift to acting together. Coalition-based approaches increase the perceived legitimacy of collective action and amplify its impact while also reducing risks to individual organizations and their leaders.</p>
<p>We see four critical domains in which businesses, working collectively, can strengthen democracy and safeguard the conditions for long-term business success. Importantly, all of these domains cross ideological and partisan boundaries and promote democratic practices rather than specific policy outcomes. </p>
<h4>1. Defend democratic institutions and processes.</h4>
<p>Business leaders should publicly support the foundational elements of democracy: free and fair elections and an independent judiciary. Around elections, this also means taking concrete action to remove barriers to employees’ civic participation. For instance, as of 2024, over 2,000 U.S. companies were part of the nonpartisan <a href="https://www.maketimetovote.org/" target="_blank" rel="noopener noreferrer">Time To Vote</a> movement, pledging to ensure that their employees have a work schedule that allows them to vote in U.S. elections. Some companies gave employees additional time off to become poll workers or to help register voters at public events. A <a href="https://ash.harvard.edu/resources/civic-responsibility-the-power-of-companies-to-increase-voter-turnout/" target="_blank" rel="noopener noreferrer">2019 study</a> found that corporate civic responsibility programs “were well received by employees, consumers, and shareholders,” and the companies that sponsored them reported higher employee and consumer satisfaction.</p>
<p>To reinforce the democratic infrastructure of independent courts, collective business action can also take the form of joint public statements. Resisting violations of the rule of law and government overreach against one’s organization, and speaking out when such overreach affects others, signals to employees, customers, and other partners that democracy is a shared responsibility.</p>
<p></p>
<p>Businesses involved in the development and deployment of AI technologies have a particularly important role to play. Like earlier major technological advances, AI has the potential to accelerate authoritarian consolidation. Businesses must commit to being transparent about how AI models are trained and deployed, and to collaborating with governments, universities, and civil society to ensure that AI accountability systems serve rather than undermine the public good.</p>
<p>The focus of all these efforts should not be on supporting particular political parties but on ensuring that healthy, independent civil society institutions <a href="https://www.nytimes.com/2026/04/07/opinion/political-power-citizens-assemblies.html" target="_blank" rel="noopener noreferrer">in which citizens exercise real voice</a> prevent the concentration and abuse of state power. This work benefits business by maintaining the stable, rules-based environment companies need to thrive.</p>
<h4>2. Support independent civil society organizations without exercising undue influence.</h4>
<p>Businesses can help support independent journalism, academia, and civil society organizations. However, this support must come with strict safeguards to protect the independence of these organizations. To avoid undue influence, businesses can collaborate to fund these institutions through mechanisms that ensure editorial and operational independence. These mechanisms include third-party intermediation and contributions to pooled funding, which have both been used to increase the impact of corporate support for humanitarian causes. </p>
<p>In addition, standards for transparency around funding, along with disclosures of conflicts of interest and intended uses, are necessary. By publicly affirming the autonomy of the organizations they support and committing to respect that autonomy in the future, businesses reinforce the principle that a thriving democracy depends on independent civil society organizations — even when those organizations challenge businesses’ own interests. </p>
<h4>3. Limit forms of political influence that are not aligned with democratic principles.</h4>
<p>While businesses have a role to play in supporting civic participation, democratic processes, and independent civil society organizations, they should not use their financial power to shape electoral outcomes, secure special treatment, or skew public decision-making to favor private interests. There is a critical difference between supporting democratic processes and using money to impose election or policy outcomes: The first helps protect democracy, while the second risks distorting it.</p>
<p>Lobbying and campaign spending should therefore be transparent, restrained, and aligned with democratic principles. Excessive corporate influence over election outcomes and government decision-making <a href="https://doi.org/10.1146/annurev-polisci-010814-104523" target="_blank" rel="noopener noreferrer">weakens democracy</a>. As President Abraham Lincoln declared in 1863, the United States’ “new birth of freedom” would come from a “government of the people, by the people, for the people.” When private interests <a href="https://doi.org/10.1017/S1537592714001595" target="_blank" rel="noopener noreferrer">exert disproportionate influence</a> over public institutions, democratic foundations are weakened. </p>
<p></p>
<p>In contrast, if businesses and policy makers jointly commit to making the relationship between business and government more visible and constrained, businesses can help support a system that rewards value creation and organizational performance over political spending and insider connections. In this context, industrywide agreements and democratic financing reforms, including strict donation caps, can help preserve democracy while reducing incentives for companies to engage in political spending arms races. </p>
<h4>4. Foster democratic practices within organizations themselves.</h4>
<p>Last, businesses can also help strengthen democracy by engaging in democratic practices <a href="https://doi.org/10.1177/26317877221084714" target="_blank" rel="noopener noreferrer">inside their own organizations</a>. When organizations include employees in governance and use more participatory decision-making, they model democratic processes internally. Research has found that these <a href="https://doi.org/10.1177/00018392251322430" target="_blank" rel="noopener noreferrer">internal practices can create spillover effects</a> beyond the workplace. Promoting <a href="https://sloanreview.mit.edu/article/when-employees-speak-up-companies-win/">employee voice</a> and participation in the workplace can enhance morale while also helping to cultivate habits and norms that <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5357023" target="_blank" rel="noopener noreferrer">reinforce employees’ civic engagement</a> as citizens. </p>
<p>Adopting a participatory form of governance can also strengthen the societal foundations for innovation and long-term prosperity. This was highlighted in a <a href="https://reportondemocracyatwork.org/en/the-report/" target="_blank" rel="noopener noreferrer">February 2026 report</a> by the International High-Level Expert Committee on Democracy at Work, a group (of which Julie is a member) that was tasked with advising the Spanish government on how to implement an article of Spain’s constitution. That article calls for public authorities to promote worker participation in their employers’ operational and strategic decisions, and to facilitate workers’ access to company ownership. Empowering workers in this way is especially important today because AI systems need to be developed and deployed in ways that benefit not just companies but their workers and society overall. </p>
<p></p>
<p></p>
<p>Democracy is both a moral cause and a strategic imperative. Without the democratic rule of law, checks on power, and independent institutions, the business environment becomes unpredictable and precarious. Companies cannot afford to build their futures on such an unstable foundation.</p>
<p>The time to act is now. The choices business leaders make today will determine not only the future of their companies but also that of democracy itself. At a time when democracy is under threat, business leaders across the political spectrum have an opportunity to act collectively to protect and strengthen the democratic guardrails that underpin both democracy and long-term prosperity. As central players in the economy, these leaders must recognize both their responsibility and their stake in stopping democratic decline, and work closely with partners across sectors to champion democracy with conviction and courage.</p>
<p></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/why-business-leaders-need-to-champion-democracy/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Industrial AI for the Physical World: Siemens’s Peter Koerte</title>
				<link>https://sloanreview.mit.edu/audio/industrial-ai-for-the-physical-world-siemenss-peter-koerte/</link>
				<comments>https://sloanreview.mit.edu/audio/industrial-ai-for-the-physical-world-siemenss-peter-koerte/#respond</comments>
				<pubDate>Tue, 21 Apr 2026 11:00:40 +0000</pubDate>
				<dc:creator><![CDATA[Sam Ransbotham. <p><cite>Me, Myself, and AI</cite> is a podcast produced by <cite>MIT Sloan Management Review</cite> and hosted by Sam Ransbotham. It is engineered by David Lishansky and produced by Allison Ryder.</p>
<p><a href="https://sloanreview.mit.edu/sam-ransbotham/">Sam Ransbotham</a> is a professor in the information systems department at the Carroll School of Management at Boston College, as well as guest editor for <cite>MIT Sloan Management Review</cite>’s Artificial Intelligence and Business Strategy Big Ideas initiative.</p>
]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Data Strategy]]></category>
		<category><![CDATA[Labor]]></category>
		<category><![CDATA[Rail Transportation Systems]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Quality & Service]]></category>
		<category><![CDATA[Skills & Learning]]></category>
		<category><![CDATA[Technology Implementation]]></category>

				<description><![CDATA[In this episode of the Me, Myself, and AI podcast, host Sam Ransbotham talks with Peter Koerte, a member of the managing board and chief strategy and technology officer of Siemens, about how industrial AI is quietly transforming the infrastructure that powers everyday life. While consumer AI grabs headlines, Peter explains how artificial intelligence is [&#8230;]]]></description>
								<content:encoded><![CDATA[<p></p>
<p>In this episode of the <cite>Me, Myself, and AI</cite> podcast, host Sam Ransbotham talks with Peter Koerte, a member of the managing board and chief strategy and technology officer of Siemens, about how industrial AI is quietly transforming the infrastructure that powers everyday life. While consumer AI grabs headlines, Peter explains how artificial intelligence is improving factories, transportation systems, energy grids, and buildings behind the scenes. The conversation explores what makes industrial AI different — from the need for near-perfect accuracy to the challenge of working with proprietary, domain-specific data.</p>
<p>Peter shares examples like predicting train door failures days in advance, optimizing building energy use, and accelerating complex engineering simulations. Peter and Sam also discuss the importance of domain expertise, the value of data-sharing partnerships across companies, and why transformation is as much about people and workflows as it is about technology.</p>
<aside class="callout-info">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/MMAI-S13-E4-Koerte-Siemens-headshot-600.jpg" alt="Peter Koerte"/>
<h4>Peter Koerte, Siemens</h4>
<p>As a member of the managing board, chief strategy officer, and chief technology officer of Siemens, Peter Koerte is responsible for developing the company’s strategy and leading its worldwide research and development activities. His current priorities include accelerating development of innovative sustainable technologies and continuing development of the Siemens Xcelerator business platform.</p>
<p>Koerte previously headed Digital Health, a Siemens Healthineers unit that develops AI-supported diagnostic procedures for health care. He joined the corporate strategy side of the company in 2007 after working for the Boston Consulting Group. Koerte holds a master’s degree in business and engineering from the Karlsruhe Institute of Technology and a doctorate in strategy and international management from the WHU-Otto Beisheim School of Management. He also completed the General Management Program at Harvard Business School.</p>
</aside>
<p>Subscribe to <cite>Me, Myself, and AI</cite> on <a href="https://podcasts.apple.com/us/podcast/me-myself-and-ai/id1533115958" target="_blank" rel="noopener">Apple Podcasts</a> or <a href="https://open.spotify.com/show/7ysPBcYtOPVgI6W5an6lup" target="_blank" rel="noopener">Spotify</a>.</p>
<h4>Transcript</h4>
<p><strong>Allison Ryder:</strong> Consumer AI makes headlines daily, but industrial AI increasingly enhances and enables nearly everything we do. Learn how one multinational company approaches data management and deployments at scale on today’s episode.</p>
<p><strong>Peter Koerte:</strong> I’m Peter Koerte from Siemens, and you’re listening to <cite>Me, Myself, and AI</cite>.</p>
<p><strong>Sam Ransbotham:</strong> Welcome to <cite>Me, Myself, and AI</cite>, a podcast from <cite>MIT Sloan Management Review</cite> exploring the future of artificial intelligence. I’m Sam Ransbotham, professor of analytics at Boston College. I’ve been researching data, analytics, and AI at <cite>MIT SMR</cite> since 2014, with research articles, annual industry reports, case studies, and now 13 seasons of podcast episodes. In each episode, corporate leaders, cutting-edge researchers, and AI policy makers join us to break down what separates AI hype from AI success.</p>
<p>Today we’re talking with Peter Koerte, chief technology officer at Siemens. Siemens is a German multinational technology company focused on industrial automation, smart infrastructure, and mobility systems, all increasingly important topics. We’ll discuss industrial AI, what it means for the workforce, and what the implications are for data sharing across industry. Peter, welcome.</p>
<p><strong>Peter Koerte:</strong> Thank you, Sam, for having me.</p>
<p><strong>Sam Ransbotham:</strong> Let’s start at a high level. Some of our listeners may not be familiar with Siemens. Can you give us a brief overview?</p>
<p><strong>Peter Koerte:</strong> Sure. Siemens [has been] out there [for almost] 180 years. What we say is, “We transform the every day of everyone.” What that means is if you think about the chair right now that you’re sitting on, the clothes that you’re wearing, the water that you’re drinking, the electricity that you’re using, the transportation systems such as trains that you’re using every day, all of that was enabled by Siemens. When it [comes] down to the way we design these things and produce them, how we actually make sure electricity is safe and distributed, how transportation runs smoothly and safely, all of that comes through Siemens. </p>
<p>As a consumer, usually you don’t see us, but in the industrial world, Siemens is a very, very big brand name, and we are well recognized for high quality but also for the great solutions we bring and the simplicity to our customers. </p>
<p><strong>Sam Ransbotham:</strong> I think that’s a great example. Because so much of the world we rely on, we just don’t pay attention to. We don’t notice it unless it isn’t working for some reason. You talked about industrial AI. What exactly is the difference between industrial AI and consumer AI that most people would be familiar with? </p>
<p><strong>Peter Koerte:</strong> The big difference is today, of course, consumer AI is making the headlines, while we think industrial AI is quietly but profoundly changing the physical infrastructure, the physical world that we know of. </p>
<p>So think about, for example, the building that you’re sitting in right now. That building has, of course, some climate control. About 30% to 40% of all the electricity that we’re using today goes into buildings. What we’re saying is, “What if we can actually take all the sensors that we have in these buildings, then develop an AI that automatically learns every minute — or 15 minutes, in that case — and then automatically adjusts all the temperature settings, all the lighting settings, and everything in order to cut costs and energy?” That’s exactly what we’re doing. </p>
<p>We just launched an application that saves 30% of your energy bill and therefore reduces greenhouse gases by 30% just by doing that. It runs autonomously in the background. This is what we do for grids. We do this for factories. We do this for machines. We do this for, of course, buildings, and we do this for trains. So everything in the real world, we are making it more efficient, simply by what we say: “Connecting the real world and the digital world.” We try to optimize and make things better. </p>
<p><strong>Sam Ransbotham:</strong> That makes a lot of sense. I mean, I’m sitting here on a university campus. It’s spring break. I guess we are probably heating this place about the same as we would be if it was full of people. I don’t even want to ask. I don’t want to know. </p>
<p><strong>Peter Koerte:</strong> That’s it. </p>
<p><strong>Sam Ransbotham:</strong> Well, I think we’re all familiar with consumer applications, and I think the failures of AI in consumer applications get a lot of attention, you know, with the hallucinations and these sorts of things. Somehow that seems very different if you’re connecting this to the physical world. It’s not just a funny anecdote that goes across the internet when AI screws up. It could have some real-world consequences when you make that connection. How is Siemens thinking about that? </p>
<p><strong>Peter Koerte:</strong> You’re absolutely right. Sam, thank you for saying that. When we compare consumer AI to industrial AI, there [are] three things at the very least that are profoundly different, and the first one you already alluded to is the level of precision and accuracy of those models. Obviously, you don’t need any hallucination when you make recommendations for an engineer to design the next part for, let’s say, your smartphone. Or you certainly don’t need an AI mistake when you think about how to optimize an electricity grid, because that’s critical infrastructure. </p>
<p>So what we need to ensure is the highest level of quality of those models, which, as you can imagine, is where we get into 99, 99.9, and so on [for the] percentage of accuracy of the models. A lot of work goes into making sure that these are reliable, safe, and trustworthy. That’s the first part. </p>
<p>The second part is, actually, how do you train these models? Because all of us, we are very familiar with what we call large language models. Now in industry, we’re not necessarily talking about large language models. We’re usually talking about specific data when it comes to — going back to the building example — temperature settings. So we have a lot of time series data. We have construction data. We have engineering data. We have simulation data. This is very different. These are geometries, pictures, vectors, what have you. We have to make these models available in a very, very different way. </p>
<p>The third difference is how do we get that data? Because when we build these models for the physical world, we cannot go on the internet and just download a bunch of data from sensors for your buildings or CAD data or whatever. This is very often proprietary data. Customers are only willing to share that data if we are able to demonstrate an incremental benefit when they use our model; then, in return, they [will] share the data with us. So, of course, in your case, [there will be] better energy savings in the building, but, also, for designers, [they’ll experience a] faster time to market because we can get them designed faster and so on. That process of actually getting to the data is very different. So the language you’re speaking, the accuracy that we need, how we get the data: All of this is quite different in the industrial world than, of course, what we use in consumer AI every day.</p>
<p><strong>Sam Ransbotham:</strong> That’s pretty fascinating. My naive reaction when you first started talking was, “Oh, what you’re describing is much more structured data,” so I was pretty excited when you [said] a lot of this data is temperature data or structured data. But the idiosyncratic nature, how it applies only to your building or only to your machine and your setting, seems very difficult. Tell us a little bit about how you’re getting people to give that data to train machines and how that transfer works between organizations. </p>
<p><strong>Peter Koerte:</strong> It’s a very good question. And you’re absolutely right. Because if you think about it, if I say, “It’s a great day” or “The day is great,” the LLM does understand that the meaning is actually that it’s a great day. In engineering terms, it’s very different, so we need to adjust and cater for that. The way this works in industrial settings is you go, of course, after the industries, step by step, and say, “OK, what are the semantics in there?” I alluded to buildings, and in buildings there [are] certain standards, and there [are] certain data formats and what we call ontologies. It’s the semantics. </p>
<p>There we try to get an understanding of what that data actually is. It is more structured, as you say, but as you can imagine, right now you’re sitting in a room with Fahrenheit, and I’m sitting in a room with Celsius. So even if you say, “Well, this is a temperature setting,” and it is, it’s quite different if I’m talking 20 [degrees] and you’re talking 20, right? For me it’s warm, and for you it’s actually really freezing cold. And that’s something to adjust for. </p>
<p>So it’s not a slam dunk, but understanding these use cases industry by industry is really key. In buildings it’s all about energy consumption. But as I said in engineering, very often it’s time to market. It’s in production. It’s usually quality and throughput. Understanding the data and the key variables that drive that is important, which brings us to a keyword that I want to mention, and that is called <em>domain know-how</em>. Because you can argue, “Well, any data scientist can do that.” It’s true. However, you really need to understand the domain that you’re operating in and the key parameters. </p>
<p>I’ll give you just one very simple example, but I find it fascinating. I’m not sure when you last used a train, but maybe the next time you use a train and I ask you, “What is the most critical component of a train?” probably you would say, “Well, probably the brakes.” That’s true; it’s safety critical. But it turns out it’s the doors. </p>
<p>And why is that? Because if you think of the job to be done of a train, [it] is to move people from A to B. That means it stops. It gets people on and off. You go from station to station to station. So the whole day, indeed, yes, the doors of the train open and shut, and thereby they break down. So the most critical part in that regard for the operations is the door. This is the domain knowledge; you need to understand that part. </p>
<p>Once you understand that, then it’s fascinating, because then what you can do is you can say, “Give me the voltage reading of that motor that drives the door. Look at, of course, the profile of how that motor operates.” Today our models can predict a door failure 10 days in advance, so we can get the train into the depot, and you can fix it, which means higher uptime, higher reliability, all of it, and better passenger comfort. So these are the examples where we have to combine the domain know-how together with the technical know-how, meaning AI, and that’s how you create customer value, industry by industry. </p>
<p><strong>Sam Ransbotham:</strong> I like that because I can get my mind around that example. Some of the things that I was reading about Siemens were complicated to understand, but that makes a lot of sense. I think everyone has some sort of application where they would like to know ahead of time that something is going to break before it breaks. Because when it does, it’s a mess. </p>
<p>Siemens doesn’t necessarily own trains though. So how do you get that data about those voltages into your systems from your customer who has purchased that train? They have to have some sort of way to send that data. They’ve got to share information with you somehow. Weirdly enough, they would benefit from someone else’s train data for a train they don’t own. How do you manage that infrastructure? </p>
<p><strong>Peter Koerte:</strong> It’s a great question. That’s why I said it’s very different [from] the way you collect data in the industrial world. Let’s stay in the train example. Truth be told, those customers, they simply don’t. They say, “Give me the train and I’m fine, and then I’ll build my own model.” So we have operators like that. Usually, however, they are not the ones that are most successful. </p>
<p>Usually, the most successful ones consider this: If you look at the total cost of ownership across the entire life cycle of a train, which is, let’s say, 30 years, the CapEx investment is about 10% of the TCO, and operations is 90%. So what if I go to you as the OEM? You know your system best. I share the data with you, and you help me to optimize. So you help me to optimize with regards to reliability. That’s the door example. You help me on the efficiency. This really goes down to, of course, the way you operate the train. </p>
<p>Believe it or not, we have AI that helps you to think about how to accelerate and decelerate or brake that train in order to save energy. Energy is one of the biggest operating costs that you have on the train. This is where we then take that data. It’s connected. All of these devices are then connected, of course, reliably and encrypted. And then we have the data, and then we make use out of this data, and we build our own models in that regard. And we do this customer by customer, and very often we do have a data-sharing agreement, so we can use that data. We don’t own the data. That’s important. It’s still our customers’ data, but we can use it and train our models for their purposes.</p>
<p>Then, as you said, we can combine it with other data so everybody gets better in that regard. And that’s exactly what’s happening not just in, let’s say, trains, but you see this in many machines. But it turns out it’s not enough data to build your own models because you need to have much more data across different settings. And this is where Siemens comes into play, because usually we don’t build machines, and we don’t build all the trains. Usually, we build components that go into it. So we work with car manufacturers. We work with aerospace manufacturers. We’ve worked with life sciences companies. We work with food and beverage companies, and so on, in order to help enable them. And so they come to Siemens and naturally say, “You know what, how can you help our specific industry to become better?”</p>
<p><strong>Sam Ransbotham:</strong> I hadn’t quite thought about it that way: If one person has insufficient data to train a model by themselves and another person has insufficient data to train a model, but together they do, then connecting those people creates value that neither of them could create alone. We had <a href="https://sloanreview.mit.edu/audio/big-data-in-agriculture-land-olakes-teddy-bekele/">a guest from Land O’Lakes on a prior episode</a>. They’re sharing information with farmers. Farmers grow things, and they have a lot of data about their crops, but how do they share that data? I feel like there’s a lot of that going on, where we are recognizing that idiosyncratic data is more valuable when combined with other data. At the same time, I’m not naive. People don’t want to share stuff. How do you encourage people to do this? </p>
<p><strong>Peter Koerte:</strong> There’s a simple — not an easy but a simple — answer to this, and that is the value. So if I’m not able to translate that and say, “You know what, share the data with me, and thereby you’re going to improve the availability of your train, to stay with that example, or I [will] improve the efficiency of your building,” then they will not share the data. It’s as simple as that. But if you do, then that’s great. Then they say that’s fine. </p>
<p>Sometimes it’s built into your solution. It’s built into the contract where they say, “Well, we don’t care. It’s fine; you can just use it.” Others are saying, “Hey, I want to also have a negotiated discount,” which is also possible. But the simple answer is you only share your data if you get some value in return. So that’s a little bit like the model. Depending on the industry, it’s slightly different in terms of the kind of value we’re creating, but still there’s some value in return. </p>
<p><strong>Sam Ransbotham:</strong> You’re describing largely a partnership but sort of between customers or with customers, but you’ve also done some recent connections with industry, like your partnership with Nvidia. Can you describe what you’re thinking there? I think the goal there is an industrial operating system. How does that work? What’s the plan there? What’s the thinking? </p>
<p><strong>Peter Koerte:</strong> With Nvidia we have a very, very close relationship for many reasons. One, of course, is that you use a lot of GPUs in order to train some of our models. Second, [there are the] tools that we’re providing today: Siemens is the leader in industrial software. We [have] about 10 billion euros of digital sales. People forget about that. We’re among the top 20 software companies in the world, so we have a lot of simulation software, where you can simulate cars, trains, rockets in the digital world. </p>
<p>Of course, all these simulations take an awfully long time when you think about computational fluid dynamics, which is very complex. But [it] turns out you really can accelerate them. So what we’re doing together with Nvidia is to say, “What if, instead of waiting eight hours for a complex computational fluid dynamics simulation — let’s say, of the air drag on a car — we could reduce that to minutes?” And that’s exactly what we’re looking at. </p>
<p>So it’s accelerating simulation, accelerating design. When it comes to chip design, which is really interesting as we get to lower nanometers — two nanometers and less — the complexity of verifying those chip designs is enormous. It really rises exponentially. So instead of having human engineers going through every circuit and really testing it down to every gate array, you can start to have an AI go through this and do this over and over and over again. So chip design verification is one. </p>
<p>Then, lastly, the design transfer to manufacturing is a key issue because these really hold you up in how fast you can get these chips out there. There again, as you are the designer, we can have the AI in the background verify whether what you’ve designed is correct and whether it can be manufactured. </p>
<p>These are examples that we have announced also at [the Consumer Electronics Show] earlier this year with Nvidia. We are really excited about [them] because we think we can further <em>accelerate</em> — and this is always the keyword: acceleration of design, acceleration of manufacturing, acceleration of operations. That’s why we are so excited about it.</p>
<p><strong>Sam Ransbotham:</strong> I get the appeal of going from eight hours to eight minutes. It doesn’t take much quantification; we can do that in Fahrenheit or in Celsius, either way that works. But the other thing it makes me think about is that you probably have a lot of processes designed around the idea that it was going to take eight hours to do that. And when it takes eight minutes, it feels like, sure, it compresses it, but it also might change the types of things you do, the order that you do them in. It seems like it could just have this ripple of upheaval. How do you manage that? Or maybe am I extrapolating too much? It feels like it could be a mess. </p>
<p><strong>Peter Koerte:</strong> That is very true. That’s why I tend to say, always, AI is about 20% technology and 80% is actually transformation. What that means is, we talked a lot about data, that’s one thing, but then it is really changing the processes of how you do things. And, usually, what the AI is now doing is it really changes workflows. So instead of thinking sequentially, where I do one task, let’s say I do the design. The next one is doing the verification. Then the next one is looking at how do I design to transfer, and transfer it to manufacturing. It’s very sequential.</p>
<p>Now what if you could do this all in one step because the AI is doing it? Obviously you’re disrupting a very well-established workflow process. The first question that comes is, who is doing this? Is that the designer from the very end [or] from the beginning? Is it somebody else entirely? Who’s the persona that you’re actually talking to? Some very interesting questions. </p>
<p>Second, how is that process then going to go? And who is verifying that whatever the AI is doing is really correct? Then a third question is, where do I do this? Where is the AI sitting? Is that a new application? Is that embedded into an existing application? Is it talking to all applications? All of these interesting questions arise, and they are usually not all technical. Very often, we find this is very much about the people [who] use it every day: involving them, and then starting to think — rethink — what wasn’t possible before, and thereby also addressing some anxieties, because many would then argue, “The AI is going to take my job away.” So then you have a lot of resistance. Then all of a sudden a technology conversation becomes a cultural-change transformation conversation. We find this time and again. </p>
<p><strong>Sam Ransbotham:</strong> Now, the natural follow-up is for me to ask about workflow and these types of issues. They’re all important, and I don’t want to discount them, but you’re pretty fired up about smart glasses and workers wearing smart glasses. What’s next for them? How do you see them in the industrial world? </p>
<p><strong>Peter Koerte:</strong> I’m very excited about smart glasses. If you think about, in particular, U.S. manufacturing: I just spoke to a major new electric vehicle manufacturer, and they told me that in their manufacturing, their churn rate — the attrition of their blue-collar workers — is 35%. What that means is you constantly have to retrain your employees. And it’s not just retraining them; the other question is, “How do you capture that knowledge?” What if you can take your glasses, you have that camera, and, let’s say, you are a specialist in operations and you are a maintenance engineer for a specific machine. </p>
<p>That camera and that AI are [looking over] your shoulder, literally, and really checking off what you’re doing. Maybe you’re even narrating it. You record this. You do this over and over again, and thereby you’re democratizing that knowledge. You can capture it for future people coming in. But it’s even better for the new worker on the night shift: At 2 a.m., a machine breaks down, and usually people are just tinkering around with no idea what to do. But what if you had those glasses on now, and those glasses are saying, “This is a CNC machine. Usually the failure code of E345 means it is a Jam 2. Check that lid and open this one, two, three, four, five,” and off you are. How amazing is that? </p>
<p>I really think in terms of the keyword <em>augmentation</em>. So augmenting the workers, the blue-collar workers, but also white-collar workers on the shop floor and, of course, capturing that knowledge as they are exiting. Isn’t that amazing? I think it’s going to make us all much more productive and make work much more enjoyable, because you get faster time to results, and thereby you get the factory running, and so on. And you reduce a lot of anxiety and fear, because very often people don’t know what to do. Now all of a sudden they have a companion. They have a copilot, colleague, whatever you want to call it, that helps them and that is there for them 24-7, as opposed to calling somebody who’s probably home sleeping. </p>
<p><strong>Sam Ransbotham:</strong> That makes a lot of sense. I want to draw a little contrast though. Earlier we were talking about data, and you were talking about a need for deep expertise and deep domain knowledge. But it sounds like this is maybe a push against, or you’re not needing to know that the E345 error code means this, that, or the other. Is it deeper? Is it more specialized? Those seem in conflict to me in some ways.</p>
<p><strong>Peter Koerte:</strong> Obviously, we need both. But, actually, the example is pretty comparable if you think of it. So yes, I can tell you the door is going to break down, and this is preventive maintenance. The other case was more of a reaction. But in both cases it’s maintenance. Preventive maintenance means that a worker still has to go out there and replace the motor. Now, on the other hand, in our case here, it’s the same thing. It just gives you the intelligence of what to do. And the doing itself still has to be done by somebody who’s operating that machine. So I think it’s pretty comparable. </p>
<p>The interesting thing is, because this still requires humans, could we at some point automate that? That’s the whole conversation about robotics and humanoids and everything, and it’s certainly a big push right now that we’re seeing in the market. Whether this is going to come soon or not, we don’t know, but for sure we’re missing at least 2 million people in the workforce in the United States already today … on the shop floor. So the only way to stay productive is by automation. This is where Siemens helps many companies to automate their processes in the factories. </p>
<p><strong>Sam Ransbotham:</strong> Maybe I’m reading too much into it, but I read something you’d written about humanoid robots and some skepticism about the actual humanoid shape, and you were kind of hinting at that right there. For one, I’m totally with you. The human shape is not anything magical, and there are a lot better shapes for industrial machinery in particular. Are things going to look like humans, or are they going to look like machines, or different? </p>
<p><strong>Peter Koerte:</strong> Well, that’s the big debate. To be honest, it’s too early to tell. I’ve seen both. As a matter of fact, today I just had two conversations of that sort. One of them [was] going in the direction of we need to have humanoids, the other one [was] saying “No, no, no.” I think in the end it comes down to the ROI and the value, again, that we’re creating. </p>
<p>Let’s take a very simple example. Let’s say material handling is a big one in a factory. You have to always make sure that there’s an ample supply of material. Let’s say, in particular, if you’re in a stamping plant, it’s metal sheets, and so it’s heavy. Taking a humanoid is probably not a good idea, although there [are] use cases; I’ve seen them. And there [are] many reasons. One, the payload is very, very, very limited. Number two, humanoids are quite slow if you look at them, at least today. The question is, can you accelerate them? But today they are slow. And then lastly, up to 30% of the energy consumed in a humanoid is just to make sure that you’re standing upright. What if you actually had different form factors that would give you higher payload, faster speed, less energy consumed, and then it becomes an ROI conversation? It depends. It’s very hard to generalize. </p>
<p>In this case, though, I would almost bet that a form factor different from a humanoid is the better one. But there [are] others where you could argue a humanoid could do a better job, for example, wiring harnesses, clipping them together, where you need to learn dexterity and versatility and all of it. Maybe, but that’s exactly why it’s a fascinating field. For anybody who claims [to] know, I think it’s premature. </p>
<p><strong>Sam Ransbotham:</strong> Actually, I like that because I think so many things are increasingly “it depends,” because we don’t have these one-size-fits-all models that are going to work. And you know that defeats our ability to make some sort of prognostications here. </p>
<p>Thanks for taking the time to talk with us and sharing your insights about industrial AI, which is probably a different idea for some people, and also data sharing in the future of work. And listeners, thanks for joining us on <cite>Me, Myself, and AI</cite>. </p>
<p><strong>Peter Koerte:</strong> Thank you, Sam. It was great. </p>
<p><strong>Sam Ransbotham:</strong> Thanks again for listening today. Next time, Vineet Khosla, CTO at <cite>The Washington Post</cite>, joins us for a conversation about AI innovation in publishing. Please join us then.</p>
<p><strong>Allison Ryder:</strong> Thanks for listening to <cite>Me, Myself, and AI</cite>. Our show is able to continue, in large part, due to listener support. Your streams and downloads make a big difference. If you have a moment, please consider leaving us an Apple Podcasts review or a rating on Spotify. And share our show with others you think might find it interesting and helpful.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/audio/industrial-ai-for-the-physical-world-siemenss-peter-koerte/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Beyond the Model — Why Responsible AI Must Address Workforce Impact</title>
				<link>https://sloanreview.mit.edu/article/beyond-the-model-why-responsible-ai-must-address-workforce-impact/</link>
				<comments>https://sloanreview.mit.edu/article/beyond-the-model-why-responsible-ai-must-address-workforce-impact/#comments</comments>
				<pubDate>Tue, 21 Apr 2026 11:00:29 +0000</pubDate>
				<dc:creator><![CDATA[Elizabeth M. Renieris, David Kiron, Steven Mills, and Anne Kleppe. ]]></dc:creator>

						<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Employee Safety]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Human-Machine Collaboration]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[IT Governance & Leadership]]></category>
		<category><![CDATA[Managing Technology]]></category>
		<category><![CDATA[Technology Implementation]]></category>
		<category><![CDATA[Workplace, Teams, & Culture]]></category>
		<category><![CDATA[Responsible AI]]></category>

				<description><![CDATA[For the fifth year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us understand how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. In prior years, we examined organizational RAI maturity; third-party, generative, and [&#8230;]]]></description>
<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/BCG-RAI_2026_ExpertPanel01-1290x860-1.jpg" alt="" />
</figure>
<p>For the fifth year in a row, <cite>MIT Sloan Management Review</cite> and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us understand how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. In prior years, we examined organizational <a href="https://sloanreview.mit.edu/article/mature-rai-programs-can-help-minimize-ai-system-failures/">RAI maturity</a>; <a href="https://sloanreview.mit.edu/article/responsible-ai-at-risk-understanding-and-overcoming-the-risks-of-third-party-ai/">third-party, generative, and agentic AI risks</a>; and <a href="https://sloanreview.mit.edu/article/a-fragmented-landscape-is-no-excuse-for-global-companies-serious-about-responsible-ai/">core AI governance pillars</a>, including accountability, explainability, and oversight. Since our project began, AI use has rapidly spread among organizations of every size, sector, and geography. At the same time, early fears have begun to materialize related to its impact on the workforce, with several companies announcing <a href="https://www.wsj.com/tech/ai/the-week-the-dreaded-ai-jobs-wipeout-got-real-3ba5057b" target="_blank" rel="noopener">substantial layoffs</a> while citing AI-enabled efficiency gains.                  </p>
<p>Given the growing concerns over how much human workers will be affected by AI, we asked our panel to react to the following provocation: <em>Responsible AI practice should address workforce impact, not just AI system risk</em>. Nearly 80% of our panelists agree or strongly agree with the statement. Our panel previously highlighted that sound AI governance asks not only <em>how</em> a technology is designed or deployed but <em>whether</em> it should be used at all. This year’s panel extended that logic, stressing that responsible AI must look beyond safe systems to the real-world consequences for workers and economic stability. Below, we share our panelists’ insights and offer our practical recommendations for organizations seeking to address workforce impact as part of their responsible AI governance.</p>
<div class="callout-highlight callout-highlight--transparent">
<aside class="l-content-wrap">
<article>
<h4>Responsible AI programs should include addressing the technology’s displacement of human workers.</h4>
<p class="caption mb30">Eighty percent of panelists agree or strongly agree that responsible AI should include considering the technology’s impact on human workers.</p>
<p><img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/RAI2026-Human-Article1.png" alt="Bar Chart: Strongly disagree: 3%; Disagree: 7%; Neither agree nor disagree: 10%; Agree: 20%; Strongly agree: 60%"/></p>
<p class="attribution">Source: Panel of 31 experts on artificial intelligence strategy.</p>
</article>
</aside>
</div>
<p><strong>Responsible AI must be sociotechnical, not just technical.</strong> Our experts believe that AI will change the future of work. Katia Walsh, AI lead at Apollo Global Management, argues that “we are on the precipice of a societal revolution that will profoundly alter ways of working,” and MIT professor Sanjay Sarma agrees that “implications on jobs will be significant.” In fact, Mike Linksvayer, vice president of developer policy at GitHub, points out that “as AI is rapidly incorporated into day-to-day work, it is already reshaping how judgment is exercised, how quickly people learn, and what individuals can reasonably attempt,” citing software development as a clear example. Because AI reorganizes workflows, fragments tasks, and redistributes power between workers and organizations, our experts argue that RAI cannot be defined in solely technical terms.</p>
<p>As senior AI executive David Hardoon explains, “Far too often, AI is mistaken for a mere technology when in reality it is a much broader ecosystem involving people, processes, governance, and society at large.” Simon Chesterman, National University of Singapore’s vice provost, says that “if responsible AI only means making the model safe, accurate, and compliant, we’ve defined the problem too narrowly,” adding, “If we don’t address the human consequences, responsible AI becomes a technical checklist with a moral halo.” Ranier Hoffmann, chief data officer of EnBW, puts it another way: “Responsible AI is ultimately about governing sociotechnical systems, not just compliant algorithms.” For Jai Ganesh, Ph.D., vice president of technology, connected services, engineering, at Wipro Ltd, “responsible AI is about ensuring innovation benefits society as a whole, including the people whose work it transforms.” In other words, responsible AI is not just about what a system does but about what it does to people; overlooking this distinction carries real socioeconomic risks.</p>
<p><strong>The current RAI discourse has not kept pace.</strong> Renato Leite Monteiro, vice president of privacy, data protection, AI, and intellectual property at e&, regrets that the “conversation has been dominated by system-level concerns like bias, explainability, and safety.” While these considerations are important, he says, they are “incomplete” because AI “reshapes how people work, what skills matter, who gets opportunities, and who gets left behind.” Bruno Bioni, founder and director of Data Privacy Brasil, agrees, cautioning that by focusing on narrow technical and model-centric risks like bias mitigation, privacy, robustness, or model safety, “governance frameworks risk collapsing into a narrowly technocratic approach.” Naomi Lariviere, ADP’s chief product owner, expands on that, saying, “If we only focus on guardrails, we miss how AI reshapes accountability, advantage, and day-to-day experience.”  </p>
<p><strong>Workforce impact is a core AI risk to social and economic stability.</strong> Although proponents of rapid AI adoption frequently cite efficiency and productivity as core motivations, our experts warn that a failure to address workforce impact could undermine these goals and exacerbate economic issues. OdiseIA president Idoia Salazar illustrates the scope of the problem, noting that “AI can reshape tasks and roles, intensify monitoring and productivity pressure, shift decision-making power away from workers, and produce uneven impacts across different groups.” As Yan Chow of Automation Anywhere puts it, “If AI maximizes efficiency but decimates consumer purchasing power or sparks unrest, it fails as a sustainable business tool.” Hoffmann goes further, arguing that “workforce impact is not a ‘soft’ concern but rather a core system design parameter” and cautioning that organizations that “deploy AI where it adds little value but creates organizational strain ... risk weaker oversight and poorer outcomes.”   </p>
<p>The business case for taking workforce impacts seriously may already be playing out in practice. Alyssa Lefaivre Škopac, director of trust and safety at Alberta Machine Intelligence Institute, raises the issue of companies declaring themselves “AI first” as they cut workers only to “rehire when the capabilities don’t match the hype.” She says this “fundamental misunderstanding of AI capabilities and human talent” comes with “real economic and human cost.” She adds, “Thoughtfully navigating workforce impact may be foundational to whether AI actually delivers the positive impact we’re all hoping for.” Pierre-Yves Calloc’h agrees that “workforce integration thinking is a critical factor in the long-term success of any AI initiative,” while Stanford CodeX fellow Riyanka Roy Choudhury cautions that “ignoring the impact on jobs may eventually contribute to broader economic instability.”</p>
<p>In response to that concern, many experts emphasize that reskilling and upskilling workers is crucial to mitigating AI’s potentially negative workforce effects. Ganesh recommends implementing a two-pronged strategy that focuses on bias, safety, privacy, and security issues along with the workforce impact by “upskilling, educating employees to work confidently alongside intelligent systems, and being transparent about how AI is used in decision-making.” University of Helsinki professor Teemu Roos similarly emphasizes that “the primary concern is ensuring sufficient support for upskilling and reskilling among the workforce to address rapid change and increasing complexity.” Not all experts are optimistic about this approach, however. Chow observes that “technological progress is exponential, while human reskilling remains linear,” warning that “unless responsible AI explicitly mandates accelerating workforce readiness to match this velocity, the skills gap will become an unbridgeable chasm, rendering upskilling a hollow promise.”</p>
<p><strong>Responsibility for workforce impact should be distributed.</strong> Given the substantial challenges that AI poses to the future of work, Kirtan Padh, scientific collaborator at AI Transparency Institute, asks, “Who is responsible for any negative impacts on the workforce?” Is it businesses, governments, or both? IMD Business School professor Öykü Işik believes that addressing AI’s workforce impact “is a matter of formal corporate governance” that “undoubtedly rests with the board and executive leadership.” GovLab cofounder and chief research and development officer Stefaan Verhulst agrees that “companies must improve corporate policies that protect and nurture their employees.” Yet Nasdaq’s head of AI research and engineering Douglas Hamilton calls for a division of responsibilities, arguing that AI-related job displacement should be the primary concern of “governments, universities, and nonprofits,” whereas “responsible companies need to fully capture its value in unequivocal ways.” </p>
<p>Several experts argue that companies cannot be expected to bear this burden alone, while pointing to the role of policy and lawmakers. Wharton School professor Kartik Hosanagar argues that “policy makers hold the primary responsibility” for the workforce impacts of AI. At the policy level, Ganesh calls for “preparing the labor market for collaborating with AI by identifying future skills, adapting curricula, and supporting transitions,” while Sarma argues that this preparation requires “everything from completely rethinking our educational paradigms to reskilling, unemployment support, and fundamental questions about the future of the economy.” Hardoon says, “A truly responsible approach demands holistic governance, AI literacy training, and policies that protect workers and preserve human agency.”    </p>
<p>Several experts also caution that the stakes of inaction are potentially high. ForHumanity founder Ryan Carrier warns that failure to address workforce impact “will result in increased economic inequality as the wealth created by AI would be increasingly concentrated.” He believes that “a legislative policy response and consumer choice have a role to play in signaling whether we want corporations to continue to employ humans, and to what degree.” Bioni adds that “labor unions and worker associations can play a critical role through collective bargaining agreements [including] provisions on prior consultations before AI deployment, access to information about automated decision-making systems, and limits on algorithmic surveillance.”</p>
<h3>Recommendations</h3>
<p>In summary, we offer the following recommendations for organizations seeking to address workforce impact as part of their responsible AI efforts:</p>
<p><strong>1. Increase the scope of RAI practices beyond models.</strong> Expand the definition of responsible AI to encompass not just model performance but the full ecosystem of people, processes, and institutions that shape how AI is built, deployed, and experienced. Workforce impact is a core organizational design parameter that should be proactively embedded in AI governance frameworks from the outset. Governance frameworks that focus exclusively on technical performance miss the deeper question of what AI does to workers, organizations, and economic life. Workforce impact must be evaluated at the board level alongside business outcomes.</p>
<p><strong>2. Include workforce impact as part of your AI strategy.</strong> Organizations are racing to create strategies for deploying AI tools and upskilling staff on their use. Plans for AI that change the nature of work should be accompanied by strategies for human reskilling, redeployment, and transition. However, as Chow suggests, reskilling can’t or won’t keep pace with technological advances, so companies need to consider other options for addressing workforce impact. Include workforce metrics, such as displacement rates and reskilling completion, alongside technical performance and value measures when tracking implementation. Companies should ensure their strategy accounts for the hidden costs of large-scale workforce impact, including reputational damage, reduced consumer trust, and growing regulatory risk. These potential downsides may ultimately outweigh the short-term efficiency gains.</p>
<p><strong>3. Evaluate worker impact alongside other product-level risks.</strong> Product evaluations must move beyond technical performance to include workforce effects, including overreliance, skills atrophy, disempowerment, “AI brain fry,” and work intensification. These factors should be part of risk identification and mitigation development. Transparency about how AI is used in decision-making, what tasks it will reshape or eliminate, and mitigation plans (e.g., transition support) should be built into deployment plans and considered as part of the business case for using AI. Workforce impacts must be explicitly considered as part of go/no-go decisions before pursuing specific AI tools.</p>
<p><strong>4. Make employees part of the conversations about workforce impact.</strong> Organizations have an obligation to communicate openly with workers who may be affected by AI — not as a courtesy but as a core governance responsibility. Workforce impact statements should be part of organizational AI strategies, alongside business value statements. Otherwise, responsible AI remains a conversation that happens above workers rather than with them. And in some jurisdictions, this engagement may not be optional. Workers’ councils are increasingly important to shaping AI strategy, especially in cases where worker displacement may occur.</p>
<p><strong>5. Assign clear leadership accountability for workforce impact.</strong> Addressing workforce impact cannot be treated as a shared responsibility that belongs to everyone — and therefore no one. While it requires coordinated effort across human resources, operations, legal, technical, and business leadership, cross-functional collaboration without named ownership is how consequential issues fall through the cracks.</p>
<p>Organizations must designate a specific leader, with real authority and board-level visibility, who is accountable for developing and executing a workforce impact strategy. To address externalities, they’ll need to proactively engage with policy makers, industry bodies, and labor organizations. This leader should be prepared to make the case, to shareholders and executives alike, that the hidden costs of large-scale displacement — the erosion of in-house expertise needed to verify AI outputs, reputational damage, eroded consumer trust, and mounting regulatory exposure — will outweigh the short-term efficiency gains that drove the cuts in the first place. If no single leader owns workforce impact, it will remain a talking point in governance documents rather than a genuine organizational commitment.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/beyond-the-model-why-responsible-ai-must-address-workforce-impact/feed/</wfw:commentRss>
				<slash:comments>1</slash:comments>
							</item>
					<item>
				<title>How AI Helps the Best and Hurts the Rest</title>
				<link>https://sloanreview.mit.edu/article/how-ai-helps-the-best-and-hurts-the-rest/</link>
				<comments>https://sloanreview.mit.edu/article/how-ai-helps-the-best-and-hurts-the-rest/#comments</comments>
				<pubDate>Mon, 20 Apr 2026 11:00:24 +0000</pubDate>
				<dc:creator><![CDATA[Nicholas Otis, Rowan Clarke, Solène Delecourt, David Holtz, and Rembrand Koning. <p>Nicholas Otis is a Ph.D. candidate at the University of California, Berkeley’s Haas School of Business. Rowan Clarke is a Ph.D. candidate at Harvard Business School. Solène Delecourt is an assistant professor in the Management of Organizations group at the Haas School of Business. David Holtz is an assistant professor in the Decisions, Risk, and Operations division at Columbia Business School, affiliated faculty at the Columbia University Data Science Institute, and a research affiliate at the MIT Initiative on the Digital Economy. Rembrand Koning is the Mary V. and Mark A. Stevens Associate Professor at Harvard Business School and codirector of the Tech for All Lab at the Digital Data Design (D³) Institute at Harvard.</p>
]]></dc:creator>

						<category><![CDATA[AI Strategy]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Business Development]]></category>
		<category><![CDATA[Decision-Making]]></category>
		<category><![CDATA[Entrepreneurship]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[Narrated Article]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leading Change]]></category>
		<category><![CDATA[Managing Technology]]></category>
		<category><![CDATA[Technology Implementation]]></category>
		<category><![CDATA[Frontiers]]></category>

				<description><![CDATA[Mark Shaver/theispot.com Can generative AI serve as an effective adviser for business owners and entrepreneurs? Intuitive chat-based natural language interfaces mean that anyone who can read and write can use GenAI tools for a wide range of tasks, even if they lack technical skills. This has obvious appeal for entrepreneurs and small business owners, many [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/2026SUMMER_Delecourt-1290x860-1.jpg" alt="" class="wp-image-126678"/><figcaption>
<p class="attribution">Mark Shaver/theispot.com</p>
</figcaption></figure>
<p><span class="smr-leadin">Can generative AI serve</span> as an effective adviser for business owners and entrepreneurs? Intuitive chat-based natural language interfaces mean that anyone who can read and write can use GenAI tools for a wide range of tasks, even if they lack technical skills. This has obvious appeal for entrepreneurs and small business owners, many of whom could benefit from an on-demand adviser able to help with marketing, pricing, operations, and strategy.</p>
<p>Improving the performance of entrepreneurs at scale has proved to be <a href="https://doi.org/10.1093/oxrep/grab002" target="_blank">challenging</a>. The most effective interventions tend to be high touch, such as <a href="https://doi.org/10.1093/qje/qjs044" target="_blank">hands-on consulting</a>, <a href="https://doi.org/10.1257/app.20170042" target="_blank">individualized mentorship</a>, and <a href="https://doi.org/10.1002/smj.2987" target="_blank">in-person networking</a>. However, they are expensive to deliver and difficult to scale. In emerging markets specifically, this constraint is often even tighter: High-quality business support can be scarce, and its cost can be prohibitive relative to organizational resources. A low-cost and always-available AI mentor could potentially deliver, at scale, the type of business guidance that has historically been limited by the availability and cost of human experts.</p>
<p>To test whether accessing generative AI can actually help small businesses, we ran a field experiment with hundreds of small business owners in Kenya. We randomly gave half of them access to a WhatsApp contact that connected them to a version of OpenAI’s GPT-4 that we had prompted to act as a Kenyan business adviser, and then we tracked business performance over time. The key factor driving either an increase or decrease in profits and revenues? Whether an entrepreneur had the judgment to distinguish good AI advice from bad.</p>
<h3>Testing AI Advice in the Real World</h3>
<p>Many previous studies of generative AI have focused on narrow, well-defined tasks, such as <a href="https://doi.org/10.1126/science.adh2586" target="_blank">drafting emails</a>, <a href="http://dx.doi.org/10.2139/ssrn.4573321" target="_blank">developing business strategy</a>, or <a href="https://doi.org/10.1287/mnsc.2023.03014" target="_blank">generating marketing ads</a>. For such tasks, the tool’s output can often be used with little modification, allowing even less-skilled users to benefit from AI assistance. Consistent with this idea, <a href="https://doi.org/10.1126/science.adh2586" target="_blank">studies have found</a> that the workers who were struggling the most before using AI benefited the most from using such tools.</p>
<p>Managing a business is not a narrow or well-defined task, though. Entrepreneurs often face vague and ambiguous problems. They do not just need help with writing an email; they need help deciding what problem to tackle, what strategy to pursue, and which advice applies to their specific context and then choosing what to implement under real constraints. On its own, AI does not typically handle those kinds of problems well. When Anthropic gave its Claude Sonnet 3.7 large language model total control of a small vending business in its San Francisco office, the LLM sold items at a loss, gave away free products, and quickly <a href="https://www.anthropic.com/research/project-vend-1" target="_blank">ran the shop into the red</a>. But what happens when, instead of leaving AI to run a business on its own, it advises a human entrepreneur who can then decide when to implement or ignore its ideas?</p>
<p>To test how AI impacts a broad task like running a business, we designed a study to evaluate it in the messy reality that entrepreneurs face. We recruited 640 small business owners in Kenya from a range of sectors — including food and beverage, agriculture, and car-wash services — and ran a randomized controlled trial from May to November 2023. Because most of the country’s population communicates via mobile phone, we delivered the intervention through WhatsApp, the dominant messaging platform in Kenya: Half of the participants were given access to a GPT-4-powered AI business adviser there. Eighty percent had never used ChatGPT or any other generative AI tool. Both groups received brief onboarding training, but the control group received an online business training guide instead of AI access.</p>
<p>Business owners in the experimental group could ask any business-related question of their choosing and use the assistant as much or as little as they wanted. We tracked sales and profits over time, comparing entrepreneurs who got the AI assistant against the control group, who did not. On average, the difference between the control group’s and the experimental group’s business performance was close to zero and not statistically significant. But the average for the experimental group masked a striking split: Having access to generative AI boosted revenues and profits by 15% among business owners who had already been doing well (that is, they were in the top 50% of performance before the experiment), but among those in the bottom 50%, AI use led to a nearly 10% decline in revenues and profits.</p>
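<p>This pattern — a near-zero average that conceals opposite-sign subgroup effects — is easy to see in a toy calculation. The sketch below uses invented numbers, not the study’s data, to show how an overall treatment effect can wash out while baseline-performance subgroups move in opposite directions:</p>

```python
# Hypothetical toy data: (treated?, baseline tier, outcome) per business.
# All numbers are invented to illustrate the pattern, not the study's results.
records = [
    (1, "top", 15), (1, "top", 17), (1, "bottom", -15), (1, "bottom", -17),
    (0, "top", 1),  (0, "top", -1), (0, "bottom", 1),   (0, "bottom", -1),
]

def mean(xs):
    return sum(xs) / len(xs)

def effect(rows):
    """Difference in mean outcome, treated minus control."""
    treated = [o for t, _, o in rows if t == 1]
    control = [o for t, _, o in rows if t == 0]
    return mean(treated) - mean(control)

# Average treatment effect across everyone: washes out to zero.
ate = effect(records)

# Conditional effects within each baseline-performance tier.
by_tier = {tier: effect([r for r in records if r[1] == tier])
           for tier in ("top", "bottom")}

print(ate)      # 0.0 -- opposite-sign effects cancel in the average
print(by_tier)  # {'top': 16.0, 'bottom': -16.0}
```

<p>Evaluating only `ate` would suggest the tool did nothing; splitting by baseline tier reveals that it helped one group and hurt the other.</p>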
<h3>Same Advice, Different Choices</h3>
<p>Why would a tool capable of producing high-quality business suggestions harm the entrepreneurs it was supposed to help? We found that both high- and low-performing entrepreneurs asked a similar number of questions, asked similar types of questions, and even received similar advice from the AI tool. The difference was in what they chose to act on.</p>
<p>In our data, we saw that every entrepreneur, regardless of baseline performance, received generic suggestions like “lower your prices” or “invest in advertising” alongside more tailored, context-specific ideas. Low performers disproportionately acted on the generic advice, cutting prices and increasing spending on advertising. These one-size-fits-all moves often eroded margins and raised costs without generating enough new business to offset them.</p>
<p>High performers, in contrast, used GenAI to discover and implement changes specific to their situation: A cybercafe owner started renting out gaming accessories to customers; a car-wash owner introduced a new in-demand detergent and started selling cold sodas to waiting customers; and another entrepreneur found alternative power sources to withstand electricity blackouts. Both groups had access to the same quality of AI advice. The difference was whether the entrepreneurs had the judgment to sift through AI-generated suggestions, pick the ideas that fit their business, and ignore the rest.</p>
<p>Our takeaway from the study is that in contexts where problems are broad and fuzzy, generative AI amplifies the role of human judgment. The value created by an open-ended AI adviser is critically dependent on the human judgment that guides its use and application. In open-ended contexts, a positive effect of AI on performance relies on <a href="https://mitsloan.mit.edu/ideas-made-to-matter/study-generative-ai-results-depend-user-prompts-much-models" target="_blank" rel="noopener noreferrer">asking good questions</a>, interpreting suggestions, and choosing which actions to implement. For users with strong judgment, the tool helps them surface new ideas and think through trade-offs. Users with weak judgment can end up following plausible-sounding but misleading advice that leads to worse outcomes.</p>
<p>For managers and policy makers, recognizing this nuance is essential. Without it, well-intentioned AI deployments risk widening performance gaps, because the people who often need the most help are also the least equipped to filter and apply advice.</p>
<h3>How Leaders Should Implement AI Advice for Open-Ended Problems</h3>
<p>Our experience prototyping and launching a WhatsApp-based AI adviser shows how quickly and cheaply generative AI tools can be rolled out and made widely accessible. But a fast implementation of a GenAI tool may also raise the risk that organizations roll out open-ended AI tools without strong guardrails or evaluation. As the cost of deployment falls, AI is being applied to an <a href="https://aleximas.substack.com/p/what-is-the-impact-of-ai-on-productivity" target="_blank">ever-wider range of open-ended tasks</a>. For example, engineers at Google now use AI coding tools in their day-to-day work, and there is evidence that the most experienced developers <a href="https://doi.org/10.48550/arXiv.2410.12944" target="_blank">benefit the most</a> from these tools. In book publishing, <a href="https://www.nber.org/papers/w34777" target="_blank">established authors</a> have been able to increase their output with AI while AI-assisted entrants have flooded the market with lackluster prose. For leaders managing AI within their organizations, these findings reinforce the importance of careful design and rigorous measurement to ensure that AI does not inadvertently lead to worse performance.</p>
<p>What can leaders do? First, cultivate awareness. Leaders should not assume that AI will boost performance for everyone. Evaluations that focus only on average effects can be misleading, because the mean can conceal meaningful harms for specific groups.</p>
<p>Next, leaders can design for heterogeneity. For workers with experience and judgment, open-ended AI tools can have real returns. Junior or weaker performers might need tighter guardrails to avoid following harmful suggestions. One promising direction is feeding the AI tool more context about the user’s specific situation — their business data, financials, or competitive environment — so that it can better filter out generic advice that doesn’t fit. Building that kind of contextual awareness into AI tools remains an open challenge that GenAI vendors are actively exploring.</p>
<p>In the meantime, it is likely that most people will find generative AI more useful for specific, narrow tasks — such as summarizing documents, writing more clearly, or reviewing code for efficiency — than for tasks that require a great deal of contextual knowledge to determine the applicability of its output and skill to implement well.</p>
<p>Organizations should also invest in human judgment and scaffolding around AI use. For high-stakes decisions, escalation to human support is a critical safeguard, especially when advice is open-ended, context-dependent, or difficult to evaluate in advance. Organizations can build supports that make these tools safer, such as structured onboarding that elicits context, decision checklists, or warnings about margin-destroying tactics.</p>
<p>The third step is to audit for uneven effects by asking questions in three areas:</p>
<ul>
<li><strong>Adoption:</strong> Are some groups avoiding the tool entirely or using it far less than others?</li>
<li><strong>The interactions themselves:</strong> Are different users asking different kinds of questions, providing different amounts of context, or receiving meaningfully different outputs?</li>
<li><strong>What happens next:</strong> Is the tool changing real-world decisions, and are those decisions producing better results for some users than others?</li>
</ul>
<p>Asking those questions can help leaders pinpoint where inequality may emerge, which allows for intervention through targeted training, workflow redesign, or tighter controls.</p>
<p>AI shows real potential to increase business performance at scale, but the benefits are not guaranteed. Our research results suggest that GenAI can inadvertently increase inequality in business performance by helping stronger performers more than others and, potentially, actively harming lower performers. When deploying AI tools at scale, a central design challenge is not merely to make AI available but to make its use effective so that scaling AI does not scale inequality.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/how-ai-helps-the-best-and-hurts-the-rest/feed/</wfw:commentRss>
				<slash:comments>2</slash:comments>
							</item>
					<item>
				<title>Lessons From Innovation Pioneer Florence Nightingale</title>
				<link>https://sloanreview.mit.edu/article/lessons-from-innovation-pioneer-florence-nightingale/</link>
				<comments>https://sloanreview.mit.edu/article/lessons-from-innovation-pioneer-florence-nightingale/#respond</comments>
				<pubDate>Thu, 16 Apr 2026 11:00:42 +0000</pubDate>
				<dc:creator><![CDATA[Scott D. Anthony. <p><a href="https://www.linkedin.com/in/scottdanthony/" target="_blank">Scott D. Anthony</a> is a clinical professor at the Tuck School of Business at Dartmouth College and a senior adviser and managing partner emeritus at growth strategy consultancy Innosight. He is the author of <cite><a href="https://epicdisruptions.com/" target="_blank">Epic Disruptions</a></cite> (Harvard Business Review Press, 2025).</p>
]]></dc:creator>

						<category><![CDATA[Communication]]></category>
		<category><![CDATA[Data & Analytics]]></category>
		<category><![CDATA[Disruptive Innovation]]></category>
		<category><![CDATA[Health Care]]></category>
		<category><![CDATA[Data & Data Culture]]></category>
		<category><![CDATA[Disruption]]></category>
		<category><![CDATA[Innovation]]></category>
		<category><![CDATA[Leadership]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Wellcome Collection Florence Nightingale may be best remembered as the epitome of a kind, caring nurse, but she was also a force for disruptive innovation in health care. Three distinct elements of her work — communicating data compellingly, publicizing clear and simple instructions, and expanding professionalized training — carry timeless lessons [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Anthony-1290x860-1.jpg" alt="" class="wp-image-126611"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Wellcome Collection</p>
</figcaption></figure>
<p><span class="smr-leadin">Florence Nightingale may be best remembered</span> as the epitome of a kind, caring nurse, but she was also a force for disruptive innovation in health care. Three distinct elements of her work — communicating data compellingly, publicizing clear and simple instructions, and expanding professionalized training — carry timeless lessons for today’s leaders.</p>
<p>Born in 1820 in Florence, Italy, Nightingale announced in the 1840s that she intended to become a nurse. Her well-to-do parents protested; at the time, nursing was a lower-class profession. Nightingale persisted, ultimately receiving tutelage in nursing and related topics from Theodor Fliedner, a pastor, in what is now Germany.</p>
<p>In 1854, as the Crimean War raged, Nightingale and a brigade of 38 nurses arrived at the war hospital in Scutari (now Üsküdar) in Türkiye. During the conflict, the first since the advent of the telegraph, newspaper reporters provided updates in close to real time. In 1855, John MacDonald of the <cite>London Times</cite> reported on Nightingale, describing her as “a ‘ministering angel’ without any exaggeration in these hospitals. … When all the medical officers have retired for the night, and silence and darkness have settled down upon these miles of prostrate sick, she may be observed alone, with a little lamp in her hand, making her solitary rounds.”</p>
<p>Thus, Nightingale became “The Lady With the Lamp” — and, perhaps, the world’s first social media star. In 1854, 5,000 babies were named Florence. In 1855, after MacDonald’s article was published, 20,000 were.</p>
<h3>A Three-Front Strategy of Influence</h3>
<p>Nightingale’s impact far exceeded her influence on baby names, of course. She and her fellow nurses encountered dire, squalid conditions and infectious diseases that ran rampant in military hospitals. The prime minister of Britain sent a sanitary commission to clean up the hospital after Nightingale telegraphed him for support, and she would continue to champion cleanliness in medical settings after the war. When she returned to England in 1856, she met with Queen Victoria to help spur the creation of a royal commission for hygiene in military hospitals. </p>
<p>Thus commenced Nightingale’s three-front disruptive battle in nursing and sanitation, using the tactics of data-driven communication, clear and accessible instruction, and standardized professional training.</p>
<h4>Compelling Communication</h4>
<p>Nightingale’s experience convinced her of the importance of following proper hygiene and sanitation practices in hospitals. But how to make people viscerally feel that importance when germ theory hadn’t yet been widely accepted? The answer: through data, visuals, and stories. (“Whenever I am infuriated, I revenge myself with a new diagram,” Nightingale wrote.) </p>
<p>She collaborated with physician William Farr, one of the founders of the Statistical Society of London, crunching numbers to show the obvious impact of poor sanitation policies. Critically, they created powerful ways to communicate their findings. </p>
<p>Their most compelling diagram was an 1858 <a href="https://www.nam.ac.uk/explore/florence-nightingale-lady-lamp" target="_blank" rel="noopener noreferrer">polar area chart</a> titled “Diagram of the Causes of Mortality in the Army in the East.” It clearly illustrated that in 1854, soldiers were more likely to die of an infectious disease in a hospital than on the battlefield. After the sanitary commission helped improve conditions, deaths by infectious diseases at the hospital dramatically declined. The chart made a stunning impact, with one reporter remarking, “Terrible do the death ‘wedges’ swell out.”</p>
<p>Nightingale also developed persuasive metaphors to illustrate the extent of the problems caused by poor sanitation in military hospitals. “It is as criminal to have a mortality of 17, 19 & 20 per 1000 in the Line Artillery & Guards in England … as it would be to take 11000 Men per annum out upon Salisbury plain & shoot them,” she wrote.</p>
<h4>Clear and Accessible Instruction</h4>
<p>In 1859, Nightingale released a groundbreaking book titled <cite>Notes on Nursing: What It Is, and What It Is Not</cite>. The first print run of 15,000 copies in England sold out within months. The book was quickly translated into multiple languages, and an American version was published in 1860.</p>
<p>In <cite>Notes on Nursing</cite>, Nightingale provided clear, practical guidance about how to care for patients. It wasn’t meant for someone seeking a career in nursing; rather, it targeted laypeople who might have to provide caretaking and similar services. Chapter titles like “Taking Food,” “Light,” “Personal Cleanliness,” and “Bed and Bedding” show the book’s practical bent, expressed clearly and plainly. </p>
<p>As usual, Nightingale stressed sanitation and prevention. “One duty of every nurse is prevention,” Nightingale wrote. “The surgical nurse must be ever on the watch, ever on her guard, against want of cleanliness, foul air, want of light, and of warmth.”</p>
<p>Her book enabled a broader population to learn to provide proper hygiene and ward off infectious diseases — classic disruptive innovation. In parallel, Nightingale turned her focus to increasing the number of skilled nurses.</p>
<h4>Standardized Professional Training</h4>
<p>In 1857, the Nightingale Fund was established to oversee the donations that had poured in to support Nightingale’s work, which had become widely known. She used a portion of the funds to help open the world’s first formal nursing school at St Thomas’s Hospital in London.</p>
<p>Prior to Nightingale’s efforts, training was disorganized and nursing was inconsistently practiced. Before her book was released, “there were no schools for nurses and therefore no trained nurses,” wrote Virginia Dunbar, former dean of the Cornell University School of Nursing.</p>
<p>The first students arrived in 1860. The curriculum blended formal knowledge of areas such as biology and physiology along with practical skills. Would-be nurses worked side by side with experienced ones. Nightingale handpicked the staff and helped to shape the curriculum. The graduates from that program, known as “Nightingales,” spread their wings throughout the world.</p>
<p>A key driver of disruption is allowing a broader population to do what once required specialized expertise. Nightingale herself had to receive one-on-one teaching to learn the art of being a skilled nurse. Her school played a pivotal role in turning such lessons from art to science, enabling more people to effectively provide nursing services.</p>
<h3>Timely Lessons From a Timeless Story</h3>
<p>Compelling communications. Comprehensive instructions. Standardized training. Nightingale’s contributions drove societal improvements we take for granted today, like washing hands to help prevent the spread of infectious diseases, circulating the air in places where sick people are gathered, and removing and treating wastewater. </p>
<p>In 1875, Britain passed the Public Health Act, which called for well-built sewers, clean running water, and regulated building codes. Life expectancy, which had stagnated at about age 40 in the United Kingdom for centuries, increased by 38% over the next 50 years. </p>
<p>Nightingale’s story has three timely lessons for modern leaders.</p>
<p>First, one of the powers of disruptive innovation is doing things differently, not just better. By educating a broader population about hygiene and nursing practices — which had previously been poorly understood — Nightingale enabled more decentralized and accessible health care. </p>
<p>Second, sophisticated technology is not required for significant impact. Nightingale and Farr used early adding machines for their groundbreaking analysis, but what’s striking about the story of their compelling “death wedge” diagram is how little technology was involved. </p>
<p>Third, disruption doesn’t require superpowers or a larger-than-life leadership presence. Nightingale demonstrated timeless qualities and behaviors that fuel disruptive success, such as curiosity, collaboration, and persistence. </p>
<p>You likely have Nightingales inside your organization. Give them space and support, and watch them kindle their own lamps to spread light.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/lessons-from-innovation-pioneer-florence-nightingale/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>The Human Side of AI Adoption: Lessons From the Field</title>
				<link>https://sloanreview.mit.edu/article/the-human-side-of-ai-adoption-lessons-from-the-field/</link>
				<comments>https://sloanreview.mit.edu/article/the-human-side-of-ai-adoption-lessons-from-the-field/#respond</comments>
				<pubDate>Tue, 14 Apr 2026 11:00:06 +0000</pubDate>
				<dc:creator><![CDATA[Ganes Kesari. <p><a href="https://www.linkedin.com/in/gkesari/" target="_blank">Ganes Kesari</a> is founder and CEO at <a href="https://tensorplanet.com/" target="_blank">Tensor Planet</a>, a software product company focused on predictive maintenance for commercial vehicle fleets.</p>
]]></dc:creator>

						<category><![CDATA[AI Strategy]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Change Management]]></category>
		<category><![CDATA[Employee Psychology]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[AI & Machine Learning]]></category>
		<category><![CDATA[Data, AI, & Machine Learning]]></category>
		<category><![CDATA[Disruption]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leading Change]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR Not a day goes by without another article being published about how AI could disrupt yet another aspect of our business or personal lives. In recent years, AI adoption has indeed taken off. However, if you pay close attention, you’ll notice a dichotomy. Many examples of successful early adoption of artificial intelligence [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Kesari-1290x860-1.jpg" alt="" class="wp-image-126585"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR</p>
</figcaption></figure>
<p><span class="smr-leadin">Not a day goes by without</span> another article being published about how AI could disrupt yet another aspect of our business or personal lives. In recent years, AI adoption has indeed taken off. However, if you pay close attention, you’ll notice a dichotomy. </p>
<p>Many examples of successful early adoption of artificial intelligence tend to come from a small cluster of industries that are heavily digitized or are pro-technology. The usual suspects include banking, financial services, e-commerce retailers, and the like. However, some other industrial sectors, many of which are big contributors to our economy, don’t show the same level of progress or enthusiasm when it comes to AI adoption. </p>
<p>Take the example of specialty and essential services industries such as construction, mining, or waste management. These sectors are vital contributors to the economy, yet many still run largely on legacy software from decades ago, with some processes still handled with pen and paper. While AI has made nascent inroads here, adoption levels leave much room for growth.</p>
<p>Leaders in these industries often reason that their processes are stable and have served them well for decades. Yes, things might break once in a while, leading to customer service disruptions, rework for the team, and internal process upheaval. But they have always recovered. People in these industries may view AI as gimmicky, burdensome, or untrustworthy.</p>
<p>Having spent more than 15 years helping dozens of industries embrace AI, I’ve been curious to study what distinguishes the two sets of leaders and the quite different levels of AI adoption they achieve. And, importantly, I’ve spent years in the trenches experimenting with techniques that help address adoption challenges.</p>
<p>Here, I’ll share what’s at the root of the leadership challenge and how leaders in industries that have been conservative about AI can orchestrate meaningful change. Let’s examine some grounded examples and no-nonsense tips for AI adoption.</p>
<h3>Why AI Adoption Lags in Some Industries</h3>
<p>My experience in the field points to three prevalent factors holding back some industries from moving forward with artificial intelligence.</p>
<h4>1. AI feels inaccessible and scary.</h4>
<p>When you can’t comprehend something, you start developing a fear of it. When everyone around you seems to talk about it and you feel left behind, the fear only grows. When the technology feels intrusive and uncomfortable, you draw back into your shell.</p>
<p>This is exactly what’s happening with AI when it comes to a majority of late adopters in both private and public sectors. The hype around AI and the seemingly irrational excitement of tech pundits only alienates people in cautious companies. To make matters worse, anytime there’s news about an uninformed AI investment backfiring or machine learning algorithms going rogue, it solidifies the narrative that AI is inaccessible and not ready for the masses yet.</p>
<p>Driver-facing AI-enabled cameras in freight vehicles are a case in point. For truck drivers, a camera inside the cab feels intrusive and disciplinary long before it’s perceived as a safety or performance-aiding tool. A <a href="https://truckingresearch.org/2023/04/new-atri-research-identifies-strategies-for-improving-driver-facing-camera-approval-and-utilization/" target="_blank" rel="noopener noreferrer">report by the American Transportation Research Institute</a> shows that truck drivers’ approval of driver-facing cameras tends to be low: just 2.24, on average, on a 0-to-10 scale among 650 current users from across the industry.</p>
<h4>2. AI looks like a lot of avoidable work.</h4>
<p>AI is often touted as a savior that automates drudgery. But people on the ground who are tasked with making the AI tools work and integrating them into workflows may perceive AI as creating <em>extra</em> work, not relieving them of it. </p>
<p>With front-line teams in labor-intensive industries often feeling overstretched and under-supported, the need for more training or changes to existing workflows just adds friction before adding any value. In many late-adopting industries, AI is immediately associated with capital-heavy hardware and forced operational change. </p>
<p>It doesn’t help that organizational memories are often clouded by many failed or painfully drawn-out technology rollouts — think enterprise resource planning systems, safety tools, telematics systems, and so on. People wonder whether this wave of AI tools is another fad worth waiting out. Look deeper, and you’ll realize that change fatigue, not an aversion to technology, is the real blocker.</p>
<h4>3. AI benefits don’t really seem worth the pain.</h4>
<p>Most technology evangelists and leaders commit the blunder of communicating AI value in the wrong currency. Improved accuracy or productivity boosts mean little to front-line operators, who care more about customer escalations, rework, or operating costs.</p>
<p>In a 2025 <a href="https://www.deloitte.com/se/sv/Industries/technology/perspectives/ai-roi-the-paradox-of-rising-investment-and-elusive-returns.html" target="_blank" rel="noopener noreferrer">executive survey by Deloitte</a>, although 65% of leaders said that AI is part of their corporate strategy, many also acknowledged that the ROI is neither immediate nor purely financial. From a front-line worker perspective, the cost of learning and adopting an intimidating technology like AI feels personal, but the benefits feel abstract and impersonal. </p>
<p>When it’s difficult to articulate tangible business outcomes from AI for the next quarter, such initiatives struggle to secure or sustain sponsorship and are easily deprioritized. And every time an AI implementation fails to deliver on vague goals, as often happens, the trust deficit grows.</p>
<h3>Three Pillars for Successful AI Adoption</h3>
<p>How can you, as a leader, address those challenges and set your organization up for success? Consider these three essential strategies.</p>
<h4>1. Use everyday analogies to make AI less threatening.</h4>
<p>Education is a prerequisite for meaningful AI adoption. When your end users don’t understand why they should use or trust AI, the initiative is dead on arrival. How can you make AI accessible to an audience that’s not digital-native?</p>
<p>AI is no longer a technology with only a few notable uses. Some people don’t realize that they already use AI dozens of times every single day. Don’t we unlock phones with facial recognition? Aren’t even unbranded smartwatches good at detecting workout activities or flagging an irregular heart rhythm? Don’t some people delight at discovering long-lost school buddies through Facebook or Instagram friend recommendations?</p>
<p>Each of these examples is an instance of AI at work. In conversations with leaders, when I share these as examples of sophisticated AI use by the general public, it surprises them every single time. Once the technology is reframed this way, conversations can begin to shift from fear of AI to a curiosity around where else it might be at play. You make real progress when you demystify AI through familiar experiences rather than technical lectures.</p>
<p>This framing also enables a more honest discussion about the potential of AI and the threat to jobs. In many professions, people then begin to appreciate that they are more likely to lose opportunities not to the AI itself but to other humans who know how to use AI better. This strengthens AI’s positioning as assistive and AI tool use as another skill to acquire.</p>
<p>Take the case of AI platform Hey Bubba, designed for trucking owner-operators and small trucking companies. Instead of using dashboards or complex workflows, the system operates entirely through voice. Drivers can search and book freight, negotiate with brokers, find parking, and book hotels through natural conversations, with the help of AI. This service works because it builds on familiar uses of AI assistants, such as Siri and Alexa, and thus feels natural.</p>
<h4>2. Integrate AI into systems people already use.</h4>
<p>Is it easier to renovate a house or to ask people to move into a brand-new one with unfamiliar rooms, rules, and routines? With AI adoption, you want to take the renovation approach. It’s a blunder to roll AI into an organization with a big-bang approach.</p>
<p>Always start with incremental changes to existing workflows and software. Remember that your teams already use dozens of software tools. These are the best starting points where leaders can inject AI and gently nudge user adoption.</p>
<p>For example, most front-line teams already live inside software, such as billing systems, customer relationship management systems, dispatch tools, maintenance software, or safety logs. Some of these systems may be clunky, but they are heavily used and largely unavoidable. The pain points within these systems can be perfect entry points for introducing AI — places where users could see the value and welcome the initiative with open arms. When AI meets people where they already work, curiosity replaces resistance.</p>
<p>Take the case of fleet maintenance. Most technicians and supervisors already spend their days inside a computerized maintenance management system. Work orders are logged there. Inspections are recorded there. Breakdowns are investigated there. </p>
<p>An effective approach to introducing AI that can predict vehicle failures, for example, is to embed AI directly into the maintenance systems users already trust. AI can flag recurring fault codes, highlight assets with rising failure risk, or suggest prioritizing certain work orders before a breakdown occurs. </p>
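<p>To make this concrete, here is a minimal sketch of the kind of rule that could run inside an existing maintenance system to surface assets with recurring fault codes. The data shape, field names, and thresholds below are hypothetical assumptions for illustration, not a real CMMS API:</p>

```python
# Hypothetical sketch: flag rising-failure-risk assets inside an existing
# maintenance workflow rather than in a separate AI dashboard.
# Field names (asset_id, fault_code, logged_on) and thresholds are
# illustrative assumptions, not a real CMMS schema.

from collections import Counter
from datetime import date, timedelta

def flag_recurring_faults(work_orders, window_days=90, min_repeats=3, today=None):
    """Return asset IDs whose same fault code recurred at least
    min_repeats times within the trailing window -- a simple proxy
    for rising failure risk."""
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    counts = Counter(
        (wo["asset_id"], wo["fault_code"])
        for wo in work_orders
        if wo["logged_on"] >= cutoff
    )
    return sorted({asset for (asset, code), n in counts.items() if n >= min_repeats})

# Illustrative work-order log, as a maintenance system might store it.
work_orders = [
    {"asset_id": "TRUCK-12", "fault_code": "P0420", "logged_on": date(2026, 3, 1)},
    {"asset_id": "TRUCK-12", "fault_code": "P0420", "logged_on": date(2026, 3, 20)},
    {"asset_id": "TRUCK-12", "fault_code": "P0420", "logged_on": date(2026, 4, 2)},
    {"asset_id": "TRUCK-07", "fault_code": "P0300", "logged_on": date(2026, 4, 1)},
]
print(flag_recurring_faults(work_orders, today=date(2026, 4, 10)))  # prints ['TRUCK-12']
```

<p>The point of such a rule is not sophistication but placement: because the flag appears next to the work orders technicians already manage, it reads as a feature of a familiar tool rather than a new system to learn.</p>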
<h4>3. Quantify AI’s impact using metrics people already track.</h4>
<p>Once you make AI accessible and identify familiar avenues to inject it, the quickest way to earn buy-in is to lead with the business result it unlocks. </p>
<p>Start by anchoring AI value to outcomes that stakeholders really care about and are judged on. Usually, there are two perspectives: creating upside (growth or throughput) or preventing downside (lost revenue or risk reduction). Examples of upside metrics are win rates or asset utilization, while downside metrics include cost leakage or service disruptions. Remember: New KPIs always trigger debate and delay action, whereas familiar metrics accelerate alignment.</p>
<p>Next, pick a combination of short-term impact and long-horizon projections. Sticking just to lag metrics could disillusion stakeholders, who need to see quicker momentum to retain confidence and excitement for AI. For example, reduction in customer complaints is an example of a lead metric to validate short-term progress, while incremental revenue from repeat customers is a lag metric that might need a few quarters to start materializing.</p>
<p>Consider the <a href="https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/unlocking-profitable-b2b-growth-through-gen-ai" target="_blank" rel="noopener noreferrer">example of an industrial materials distributor</a> focused on accelerating growth. The company struggled to systematically identify and act on new business opportunities. Field sellers relied on manual, time-intensive methods, such as driving through cities to visually spot new construction projects. The process was inconsistent, slow, and difficult to scale.</p>
<p>The company built an AI engine that combined internal sales data with external signals to score and prioritize potential opportunities and recommend relevant products. Generative AI was then applied to extract insights from unstructured public data, such as construction permits, to identify upcoming capital projects.</p>
<p>These insights were embedded into existing sales workflows to personalize outreach at scale. The approach unlocked new opportunities in the first year, significantly expanding the sales pipeline and improving success rates for email outreach — both of which were existing sales metrics that stakeholders already cared about.</p>
<h3>Where AI Adoption Is Really Won or Lost</h3>
<p>In late-adopting industries, AI doesn’t fail because the technology falls short. AI often fails because leaders underestimate the human and operational context in which AI tools are introduced. We must remember that front-line skepticism is not resistance to progress — it’s a rational human response that can be shifted when addressed strategically.</p>
<p>The organizations that move fastest follow a clear progression. They demystify AI by promoting understanding among people; embed AI into existing workflows before forcing new ones; and prove AI’s value using metrics that are already being used to reward or penalize people. When these conditions are met, adoption becomes a pull factor as opposed to a hard push.</p>
<p>The way forward for late-adopter industries is not to imitate tech-first sectors but to adopt AI on their own terms. Successful leaders treat AI as a capability to be woven incrementally into daily work rather than a system to be rolled out abruptly. In these environments, user comfort and trust, not algorithms, ultimately determine whether AI delivers on its promise.</p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/the-human-side-of-ai-adoption-lessons-from-the-field/feed/</wfw:commentRss>
				<slash:comments>0</slash:comments>
							</item>
					<item>
				<title>Managing Up: A Skill Set That Matters Now</title>
				<link>https://sloanreview.mit.edu/article/managing-up-a-skill-set-that-matters-now/</link>
				<comments>https://sloanreview.mit.edu/article/managing-up-a-skill-set-that-matters-now/#comments</comments>
				<pubDate>Mon, 13 Apr 2026 11:00:31 +0000</pubDate>
				<dc:creator><![CDATA[Phillip G. Clampitt and Bob DeKoch. <p>Phillip G. Clampitt is the Blair Endowed Chair in Communication at the University of Wisconsin-Green Bay. Bob DeKoch is the founder of the leadership consulting firm Limitless and a former president of The Boldt Company. They are the coauthors of <cite>Leading With Care in a Tough World: Beyond Servant Leadership</cite> (Rodin Books, 2022).</p>
]]></dc:creator>

						<category><![CDATA[Communication]]></category>
		<category><![CDATA[Human Psychology]]></category>
		<category><![CDATA[Leadership Advice]]></category>
		<category><![CDATA[Leadership Development]]></category>
		<category><![CDATA[Leadership Style]]></category>
		<category><![CDATA[Leadership]]></category>
		<category><![CDATA[Leadership Skills]]></category>
		<category><![CDATA[Managing Your Career]]></category>

				<description><![CDATA[Carolyn Geason-Beissel/MIT SMR &#124; Getty Images Are you skilled at managing up? If your talents are lacking when it comes to managing and dealing with the people above you in the organizational hierarchy, you can find yourself mired in some unpleasant and career-harming situations. Maybe you’re frustrated by a micromanaging supervisor or feeling marginalized by [&#8230;]]]></description>
								<content:encoded><![CDATA[
<figure class="article-inline">
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Clampitt-1290x860-1.jpg" alt="" class="wp-image-126588"/><figcaption>
<p class="attribution">Carolyn Geason-Beissel/MIT SMR | Getty Images</p>
</figcaption></figure>
<p><span class="smr-leadin">Are you skilled at managing up?</span> If your talents are lacking when it comes to managing and dealing with the people above you in the organizational hierarchy, you can find yourself mired in some unpleasant and career-harming situations. Maybe you’re frustrated by a micromanaging supervisor or feeling marginalized by them. Maybe you feel constantly in the dark about your manager’s expectations, or you’re tired of absorbing an outsize number of shocks for your team. Any of these can be a warning signal that you need to work on effective upward communication and leadership. </p>
<p>It’s an important set of skills right now. With some organizations using artificial intelligence to eliminate middle layers of management, the ability to manage up has become even more vital to your career — and your organization’s success. Leaders above are often unaware of what they don’t know, and they might be misled by AI.</p>
<p>If you want to strengthen your ability to lead up, you need to know how to assess your skills — and bolster them.</p>
<p>We define effective managing up, or upward leadership, as “listening to those higher in rank and influencing them to assist you and your team to better embody the organization’s values and fulfill its mission, strategy, and goals.”<a id="reflink1" class="reflink" href="#ref1">1</a> Successful upward leaders create sustainable wins for the boss, team, and organization.</p>
<p>Notice that this definition starts with listening. Just because someone wrote down the organization’s values, mission, strategy, and goals on ever-available, wallet-sized notecards or displayed them in a flashy PowerPoint graphic does not ensure that everyone will interpret the ideas in a similar and synergistic fashion. The written word is not enough. Understanding the nuances of interpretation requires active listening for unstated sentiments. </p>
<p>Leading up also, of course, involves influencing. Effective upward leaders establish connections, circumvent problems, and convince those in power to embrace opportunities, innovations, and novel insights. But assisting is equally important. Think of an NBA assist wizard like LeBron James who knows when and where to deliver the ball to other players so they can score. Assisting requires proper alignment between team members, knowledge of who is in position to score, and a willingness to let others shine.</p>
<h3>Three Roles You Play While Managing Up</h3>
<p>Based on surveys of thousands of employees and hundreds of interviews with midlevel managers, we discerned that people leading up assume three interrelated roles: </p>
<p><strong>Buffer.</strong> The buffer dampens frustrations from above (and below), absorbing complaints, gripes, annoyances, and, potentially, offensive remarks. Successful buffers actively listen for underlying (often unstated) sentiments and seek understanding of key (but often vague) goals to protect others from irrelevant or unintended messages.</p>
<p><strong>Translator.</strong> The translator receives information, directives, and perspectives from above (and below). Then they convey the meaning in the language of the audiences at those levels, minimizing potential misunderstanding while respecting the sensibilities of the audience. </p>
<p><strong>Advocate.</strong> The advocate seeks to persuade or dissuade others in positions above (or below) their own. This could mean sharing differing opinions, arguing for a new direction, or pushing back on a new idea or policy.<a id="reflink2" class="reflink" href="#ref2">2</a></p>
<p>It’s not enough to be skilled at one of these roles. Artfully leading upward requires an integration of all three. For example, advocates must translate a pushback comment into a language understood by others while buffering away minor issues. Likewise, a buffer must act as a translator when anticipating how pushback language might be misinterpreted by people above. The translation may, in turn, result in advocating for a change in the directive’s wording to increase the odds of acceptance. </p>
<p>There is no magic formula to determine the right balance, because it will vary with each situation. However, leaning too heavily into one role usually signals problems. If you, as a leader, spend most of your time buffering employees from verbal storms from on high, then it might be time to augment your role as an advocate. </p>
<p>Leading upward does not come naturally to most people. In fact, in his 2001 book, <cite>Leading Up: How to Lead Your Boss So You Both Win</cite>, Wharton professor Michael Useem suggested that just one-third of managerial employees had the necessary skills and desire to do so.<a id="reflink3" class="reflink" href="#ref3">3</a> But you can rewrite your own story by properly assessing your upward leadership talents and then strategically applying them. </p>
<h3>Assess Your Ability to Manage Up</h3>
<p>The best way to improve your upward leadership acumen starts with assessing your current talent level. These three questions can help you judge.  </p>
<p><strong>What role do you primarily perform when you are most frustrated?</strong> Aggravation, frustration, and irritation go with any job but can also signal role imbalance. For example, if you feel micromanaged, you may be overplaying the buffer role and not voicing concerns (the advocate role) about optimizing your own working environment.</p>
<p><strong>What role do you primarily perform when you are in a state of flow?</strong> In his seminal 2008 book, <cite>Flow: The Psychology of Optimal Experience</cite>, Mihaly Csikszentmihalyi describes flow as “a sense that one’s skills are adequate to cope with challenges at hand. … Concentration is so intense that there is no attention left over to think about anything irrelevant.”<a id="reflink4" class="reflink" href="#ref4">4</a> Ideally, your state of flow involves the skillful and seamless fulfillment of all three roles. But that mastery rarely happens, because we all have a tendency to lean too heavily on a role or skill that comes naturally to us. For example, selling or advocating may be your “happy place,” but leaning on that ability alone will not allow you to excel at upward leadership. For that, you’ll need to master the skills of buffering and translating.</p>
<p><strong>Are you equally comfortable performing these roles in both directions (upward and downward)?</strong> Many people selectively employ their buffering, advocating, and translating skills when communicating with people at higher authority levels. This might be healthy in some cases, but it could also be a red flag, revealing that you lack a healthy relationship with those in power and are unwilling to engage in candid, if sometimes difficult, conversations.</p>
<h3>Build Three Key Skills to Manage Up Better</h3>
<p>Once you’ve thought through your role tendencies, it is time to build your buffering, translating, and advocating skills. </p>
<h4>Buffering</h4>
<p>Buffering skills and sensibilities are largely self-taught. Take cues from politicians, coaches, or leaders you admire. Watch successful leaders during press conferences. Some of them ignore the passion of the critic, others deflect unpleasant issues, and some selectively listen for words that they can turn to their advantage. Building up this emotional thick skin takes time and perspective. </p>
<p>Alida Al-Saadi, a former senior executive at Korn Ferry and Accenture, shared this incident: “A manager repeatedly pushed me to be ‘more concise,’ despite being famously long-winded himself. At first it felt unfair. Eventually I understood that thick skin isn’t arguing the irony; it’s hearing what someone needs from you and deciding, deliberately, how to strategically adjust.”<a id="reflink5" class="reflink" href="#ref5">5</a> In short, buffering her reactions and deferring the debate about the accuracy of his critique enhanced their working relationship. </p>
<p>However, buffering does not mean just passively absorbing blows. After all, a shock absorber can only absorb so many shocks before the source of the trouble has to be addressed. Good buffers learn to have productive conversations with their superiors by identifying key issues and rephrasing concerns that might be red flags for their team. Skilled buffers actively listen to engage in productive conversations that support team motivation and performance. This means tuning your antenna to what’s not being said and homing in on ideas that need further development.</p>
<h4>Translating</h4>
<p>Turning your own or your team’s reactions, concerns, or feelings into words that a superior can understand may be all it takes to shift that leader’s position, tweak an idea, or change a disagreeable behavior; it’s one step short of advocacy. This requires an underappreciated ability to convey emotional reactions in a respectful manner. </p>
<p>For example, sometimes employees who first hear about a major organizational change react with colorful and offensive language.<a id="reflink6" class="reflink" href="#ref6">6</a> In those cases, effective leaders accurately relay those sentiments to the higher-ups without sharing personal invectives. A descriptive statement like, “They weren’t very happy” or “They expressed their displeasure in strong language” allows for further discussion that focuses on the substantive issues driving the reactions. </p>
<p>Building your translating skills sometimes means learning new vocabulary. That’s because you should shift your reporting from a direct to an indirect approach for more contentious issues. Directly pushing back with a comment like “I disagree” isn’t always the best option. An indirect and often more effective approach could be to say, “If someone were to play devil’s advocate, they might say …” or “Is there another way to look at this issue?” These phrases distance the pushback in a manner that does not directly challenge the egos of the people above.</p>
<h4>Advocating</h4>
<p>Speaking up for your team, say, by nudging superiors in a different direction, represents the most challenging role. What are the best ways to do it? For starters, link to the superior’s underlying motivations, sensibilities, and mental framework. Successful upward leaders frame their team’s reaction to an idea or policy change by first acknowledging the positive intentions of the idea or policy before sharing the team’s suggested tweaks. </p>
<p>They also provide evidence that their superiors find credible. Different supervisors value different kinds of evidence to arrive at conclusions. Some put more faith in statistics, AI projections, or models, while others trust case studies, expert advice, personal testimonies, or historical analogies. </p>
<p>Finally, sense when to back off. Some leaders mistakenly expect quick or even instantaneous agreement from their superiors after proposing initiatives, program tweaks, personnel changes, or innovative suggestions. However, persuasion often requires patience and a willingness to back off at the right time to allow others time to shift the tumblers in their minds before locking something new in place. Pushing too hard or too soon can close the door on any new ideas.</p>
<h3>Habits of Successful Upward Leaders</h3>
<p>Skill-building sets the stage, but successful upward leaders also use the following strategies regularly to maximize their performance and help their organizations thrive.</p>
<h4>Actively build a relationship of candor and trust with people above you in the hierarchy.</h4>
<p>Do you reflexively assume that you are fully trusted by those above? A misreading of interpersonal dynamics can prove frustrating and befuddling, and can introduce relationship troubles: You might excessively buffer the superior from challenges you face in your department (unwarranted buffering), be overly candid about your own reactions or your employees’ outbursts (unedited translating), or offer unwelcome advice (inappropriate advocating). Instead, consider taking the following actions to establish an empowering relationship of trust.</p>
<p><strong>Take the first step.</strong> Ideally, superiors would seek out and build robust, healthy relationships with direct reports. But in our research, we’ve found that to be more the exception than the rule. Consequently, leaders in subordinate positions must often take active steps to build strong, candid relationships.<a id="reflink7" class="reflink" href="#ref7">7</a> Sometimes that requires the assertiveness and subtlety of a mixed martial arts fighter like Ronda Rousey. Yes, <em>subtlety</em>: Rousey was able to persuade the CEO of the Ultimate Fighting Championship, Dana White, to create a women’s division — even though he had publicly declared that he’d never do it. She took the first step by requesting a 15-minute meeting with White, seeking career advice, and then effectively advocated for her idea. The meeting morphed into a 45-minute discussion and resulted in the new UFC women’s division.<a id="reflink8" class="reflink" href="#ref8">8</a>  </p>
<p><strong>Mind the cadence and robustness of meetings with your supervisors.</strong> Your investment in establishing a relationship with superiors can dwindle away without routine and robust communications. The communication cadence needs to keep pace with the fast-changing organizational dynamics. And discussions need to be robust enough to allow the relationship to emerge beyond a position-to-position discussion to more of a person-to-person dialogue. Ideally, that means regularly scheduled face-to-face discussions with your boss, plus skip-level meetings with other people above you in the hierarchy. Advocating for such a time commitment may require some lobbying, but it will spawn benefits by minimizing disconnects and maximizing organizational alignment.<a id="reflink9" class="reflink" href="#ref9">9</a></p>
<p><strong>Avoid assuming that what worked with one supervisor will work with another.</strong> Just because a previous supervisor trusted you to be a great buffer, translator, or advocate, it doesn’t mean a different person in the organization will. While working with various people in the hierarchy above you, you must seek out signals about what problems you can handle on your own without reporting above (buffering). Additionally, you need to search for cues about what issues are off-limits when considering offering unsolicited advice (buffering and advocacy). Your supervisor might welcome tweaks to organizational strategy, but those higher up may not be as open to the pushback.</p>
<h4>Adopt an educational mindset.</h4>
<p>George Reed served as a dean at the University of Colorado Colorado Springs and an instructor at the U.S. Army War College. He smilingly reminded us, “I’ve had to educate more than a few new chancellors and commanders in my career.”<a id="reflink10" class="reflink" href="#ref10">10</a> When someone new assumed command, Reed started from zero by providing background about his department or division and then sought to earn trust with the newcomer to buffer, advocate, and translate as he saw fit. </p>
<p>Emotionally, this may seem like going backward, but it is essential to establishing a productive working relationship. Sometimes a well-selected list of “10 things everybody should know about our department” does the trick and starts an illuminating educational discussion.<a id="reflink11" class="reflink" href="#ref11">11</a> </p>
<p>Take the following actions to bolster your educational mindset. </p>
<p><strong>Assess the risks of advocacy.</strong> Deciding how and when to advocate revolves around the question “How open will my superior be to my influence attempt?” Correcting a client’s misspelled name on a pending document typically would be zero risk. On the other hand, drawing your supervisor’s attention to an annoying personal habit of theirs, such as always being late to meetings, would be a higher risk (as outlined in the table below).</p>
<div class="callout-highlight">
<aside class="l-content-wrap">
<article>
<h4>Common Conversation Points: Mind the Risk Level</h4>
<p class="caption">
<table id="Chart1" class="chart-vertical-stripes no-mobile">
<thead>
<tr>
<th><strong>Higher-Risk Issues</strong></th>
<th><strong>Lower-Risk Issues</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>
<ul>
<li>Annoying personal qualities (such as interrupting others or pettiness)</li>
<li>Character flaws (such as arrogance or impulsiveness)</li>
<li>Competency concerns</li>
<li>Ethical issues (such as dishonesty)</li>
<li>Personal-life concerns</li>
<li>Policy disagreements</li>
<li>Poor performance (such as missed goals)</li>
<li>Unsolicited pushback</li>
</ul>
</td>
<td>
<ul>
<li>Positive operational results</li>
<li>Minor policy tweaks</li>
<li>Differing technical interpretations</li>
<li>Praise</li>
<li>Differing data interpretations</li>
<li>Solicited pushback</li>
<li>Recognition of personal/professional accomplishments</li>
<li>Small changes on documents/presentations</li>
<li>Fresh insights on challenges</li>
<li>Requests for career advice</li>
</ul>
</td>
</tr>
</tbody>
</table>
<p><!--IMAGE FALLBACK FOR MOBILE BELOW --><br />
<img src="https://sloanreview.mit.edu/wp-content/uploads/2026/04/Clampitt_Upward_Essay_Table_REV.jpg" alt="A two-column table comparing higher-risk and lower-risk issues. Higher-risk issues include: annoying personal qualities (such as interrupting others or pettiness), character flaws (such as arrogance or impulsiveness), competency concerns, ethical issues (such as dishonesty), personal-life concerns, policy disagreements, poor performance (such as missed goals), and unsolicited pushback. Lower-risk issues include: positive operational results, minor policy tweaks, differing technical interpretations, praise, differing data interpretations, solicited pushback, recognition of personal/professional accomplishments, small changes on documents/presentations, fresh insights on challenges, and requests for career advice." class="no-desktop">
</p>
</article>
</aside>
</div>
<p>Issues can shift from one column to the other, depending on the particular supervisor-report relationship and the organizational culture. Your goal over time, of course, is to move as many issues as possible to the second column.</p>
<p>As a relationship matures, people learn to better identify others’ touchy subjects and anticipate their likely responses to a direct style of advocacy. A high-quality relationship between leaders allows a high degree of candor and a high volume of advocacy.</p>
<p>But lower-quality relationships or newer ones often improve with the deft use of more indirect advocacy and thoughtful translation. </p>
<p>Regardless of relational quality, a strong mutual commitment to shared values allows for more direct advocacy. For example, on a construction site or factory floor that has a strong safety culture, candid advocacy about potential safety concerns can be successful regardless of rank or relationship status. </p>
<p><strong>Reserve private conversations for more delicate matters.</strong> Unfortunately, not all leaders welcome pushback in public forums. Advocating for a shift or a tweak to a superior’s pet project in front of a group will often shut down further discussion because it may threaten the leader’s ego.</p>
<p>For example, consider a supervisor who occasionally launches into an annoying behavior like overselling initiatives to others and not allowing time for further discourse. Enlightening the supervisor about this off-putting tendency should usually be reserved for private, one-on-one, ego-protecting conversations. Discussions like these are particularly tricky because selling may be the supervisor’s forte. Often, someone’s greatest ability has an unrecognized downside that needs to be throttled back in certain situations or offset with other skills. </p>
<h4>Routinely rebalance your upward leadership role profile.</h4>
<p>Your upward leadership role profile should not be static. Ideally, relationships between leaders at different levels improve, and their mutual commitment to shared values evolves. Consequently, the amount of energy devoted to the roles of buffer, translator, and advocate will become more balanced and shift away from more dysfunctional allocations, like excessive advocacy or heavy buffering. Consider the following tactics when periodically rebalancing your profile: </p>
<p><strong>Reflect on how your allocation maximizes both your professional fulfillment and organizational contribution.</strong> The ideal allocation of the roles you play depends on your specific situation, goals, and the managerial style of your supervisor. Ask yourself, “What is the optimal percentage of my energy that should be devoted to buffering, translating, and advocating to optimize my growth and organizational performance?” </p>
<p>As a general rule, aim to build relational trust so that the percentage of your time devoted to buffering decreases to 10%-20% while advocating and translating (40%-45% each) assume more predominant roles. This type of allocation maximizes professional development and organizational growth but leaves enough time for you to serve as a proper shock absorber for the inevitable miscues, frustrations, and rumors that occur.</p>
<p><strong>Test and recalibrate.</strong> Shifting your role balance requires courage, particularly when everything seems to be going well. And, as with any new skill, both mastering it and feeling comfortable with it will require some practice. For example, making the conscious effort to advocate more or to throttle back can be unsettling; monitoring results allows you to tweak both the skills and the balance among the three key roles. Other people on your team may notice your behavior change as well. If questioned, you could say, “I’m experimenting with a different approach to exert influence.”</p>
<p><strong>Entertain other opportunities.</strong> Our multiyear research consistently revealed that employees’ relationships with their direct supervisor greatly influence their level of job satisfaction, engagement, and productivity.<a id="reflink12" class="reflink" href="#ref12">12</a> So, assuming that you’ve tried the strategies above and your role profile as a buffer, translator, and advocate continues to be unfulfilling, it may be time to look for other job opportunities that will allow you to flourish. After all, successful upward leadership requires superiors who are also willing to change. </p>
<p></p>
<p>Leading upward represents one of the most significant and least appreciated talents you can master. It requires courage tempered with discretion, thoughtful advocacy coupled with inquisitive listening, and an eagerness to debate peppered with a zeal to engage in calculated silences. </p>
<p>Practicing when and how to use these polarized aptitudes allows leaders to seamlessly integrate the roles of buffer, translator, and advocate. Learning to do so may not bring many accolades or trophies attesting to your “upward leadership excellence.” But mastering upward leadership will, at the very least, ensure career fulfillment and, at the very best, organizational excellence. Think of midlevel leaders you know who rose through the ranks or ensured great outcomes for their teams: Most have mastered the difficult art form of respectfully and resolutely leading up. And perhaps improving your own upward leadership acumen will spur you to further cultivate a climate within your own team that encourages upward leadership, improving employee engagement and work outcomes.<a id="reflink13" class="reflink" href="#ref13">13</a></p>
]]></content:encoded>
				<wfw:commentRss>https://sloanreview.mit.edu/article/managing-up-a-skill-set-that-matters-now/feed/</wfw:commentRss>
				<slash:comments>1</slash:comments>
							</item>
			</channel>
</rss>