<?xml version="1.0" encoding="utf-8" standalone="no"?><?xml-stylesheet type="text/xsl" href="/static/theatlantic/syndication/feeds/atom-to-html.b8b4bd3b19af.xsl" ?><feed xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xml:lang="en-us"><subtitle/><title>Technology | The Atlantic</title><link href="https://www.theatlantic.com/technology/" rel="alternate"/><link href="https://www.theatlantic.com/feed/channel/technology/" rel="self"/><id>https://www.theatlantic.com/technology/</id><updated>2026-05-02T17:17:33-04:00</updated><rights>Copyright 2026 by The Atlantic Monthly Group. All Rights Reserved.</rights><entry><id>tag:theatlantic.com,2026:50-687023</id><content type="html">&lt;p data-flatplan-paragraph="true"&gt;&lt;small&gt;&lt;em&gt;Updated at 4:34 p.m. ET on May 2, 2026&lt;/em&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;Donald Trump is on TikTok doing his morning routine. “Get ready with me for a big day &#128132;&#127482;&#127480;,” reads the caption, as the president holds a makeup brush to his cheek. The scene is a still, ostensibly a screenshot of a TikTok clip. Like so much other AI-generated slop coursing through the internet, the image is fake and ridiculous. It also looks unnervingly real: There are no hands with six fingers, physics-defying angles, or other flagrant signs of AI-generated imagery. At a quick glance, it really looks like the president is putting on bronzer.&lt;/p&gt;&lt;figure class="right"&gt;&lt;img alt="trump.jpg" data-image-id="1828723" data-orig-h="2400" data-orig-img="img/posts/2026/05/trump/original.jpg" data-orig-w="1920" data-thumb-id="13945699" height="378" src="https://cdn.theatlantic.com/thumbor/FmGydPx0AvsyylMMgeCDCfpKykY=/https://cdn.theatlantic.com/media/img/posts/2026/05/trump/original.jpg" width="302"&gt;
&lt;figcaption class="caption"&gt;&lt;br&gt;
&lt;br&gt;
Created in ChatGPT with the prompt “Trump doing a makeup tutorial on TikTok”&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;I made this deepfake with OpenAI’s new image-generation model. ChatGPT Images 2.0, released last week, can create photorealistic visuals that are noticeably more convincing than what its predecessors might have produced. The tool has flooded the internet with hyperreal fakes: for example, &lt;a href="https://www.reddit.com/r/ChatGPT/comments/1stzuxu/yeahhh_this_is_definitely_getting_nerfed_soon/" rel="noopener noreferrer nofollow" target="_blank"&gt;Jeffrey Epstein as a Twitch streamer&lt;/a&gt;. I created the “screenshot” of Trump’s fake TikTok after encountering a similar image on the ChatGPT Subreddit, and I’ve since been able to use Images 2.0 to create all kinds of alarming deepfake images—including of Elon Musk getting whisked away by the FBI, world leaders suffering medical emergencies, and top American politicians donning Nazi paraphernalia (none of which I’ve shared anywhere).&lt;/p&gt;&lt;p&gt;This was all unsettling in its own right. But the most realistic deepfakes I was able to create did not involve politicians or celebrities. They mostly did not depict people at all. With little effort, I was able to create more than 100 fraudulent images, including prescriptions for opioids and ADHD medication, bank alerts, social-media posts, fake IDs, and passports.&lt;/p&gt;&lt;figure role="group"&gt;
&lt;figure&gt;&lt;img alt="sAmple id.jpg" data-image-id="1828731" data-orig-h="1080" data-orig-img="img/posts/2026/05/sAmple_id/original.jpg" data-orig-w="1920" data-thumb-id="13945723" height="374" src="https://cdn.theatlantic.com/thumbor/Q8l6O76F3lRv_xJtgXpOsc1qjcg=/https://cdn.theatlantic.com/media/img/posts/2026/05/sAmple_id/original.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;&lt;br&gt;
&lt;br&gt;
A sample license from the Washington, D.C., DMV website&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;figure&gt;&lt;img alt="Fakeid.jpg" data-image-id="1828720" data-orig-h="1080" data-orig-img="img/posts/2026/05/Fakeid/original.jpg" data-orig-w="1920" data-thumb-id="13945696" height="374" src="https://cdn.theatlantic.com/thumbor/K95xOxQ9QHR0Zcg1PMjJBUBCytU=/https://cdn.theatlantic.com/media/img/posts/2026/05/Fakeid/original.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;&lt;br&gt;
&lt;br&gt;
A fake license created by editing the sample image using ChatGPT&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;figcaption&gt;
&lt;div class="credit"&gt;&lt;/div&gt;

&lt;div class="caption"&gt;&lt;/div&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;Images 2.0 is especially good at generating images with text in them—which may not sound impressive, but it’s a big deal. Image models have long struggled to produce pictures that contain words. Otherwise realistic-looking visuals end up pockmarked with bungled street signs and distorted billboards. This makes ChatGPT Images 2.0 a much more sophisticated graphic-design tool—but it also makes the new model fantastic for perpetrating fraud. In my experiments, OpenAI’s tool readily generated images of fake health documents (doctor’s notes, vaccination cards, and medical tests), as well as forged financial materials (invoices, receipts, and tax forms). Many of these images were highly persuasive, complete with fully legible text, shading, and other visual props that increased their photorealism.&lt;/p&gt;&lt;p&gt;Some images were more convincing than others. The fake medical prescriptions were legible, but the handwriting looked more like the output of an iPad stylus than a pen on paper. When I fed OpenAI’s model a boarding pass from an old flight and asked the bot to update it with new details for an upcoming flight, ChatGPT generated a new boarding pass—but surely, the bar code wouldn’t have actually scanned me onto a flight. And although I certainly hope my ChatGPT-generated driver’s license would not fool the TSA, perhaps it would trick a hotel receptionist or an out-of-state bouncer who would accept a “photo” of my ID instead of the real card. Many of the more persuasive-looking images contained minor errors—in the pictured receipt, ChatGPT correctly summed up the total cost of items purchased, but miscalculated the state tax (alongside other slight mistakes).&lt;/p&gt;&lt;figure role="group"&gt;
&lt;figure&gt;&lt;img alt="cvs_receipt.jpg" data-image-id="1828730" data-orig-h="2400" data-orig-img="img/posts/2026/05/cvs_receipt/original.jpg" data-orig-w="1920" data-thumb-id="13945722" height="831" src="https://cdn.theatlantic.com/thumbor/yawZ77tqPC5uQKJ-_Q_sCQFFCng=/https://cdn.theatlantic.com/media/img/posts/2026/05/cvs_receipt/original.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;figure&gt;&lt;img alt="covid.jpg" data-image-id="1828722" data-orig-h="2400" data-orig-img="img/posts/2026/05/covid/original.jpg" data-orig-w="1920" data-thumb-id="13945698" height="831" src="https://cdn.theatlantic.com/thumbor/zi-8vjKaA9Rb5DbxjAZfe6xja1Q=/https://cdn.theatlantic.com/media/img/posts/2026/05/covid/original.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;&lt;/figcaption&gt;
&lt;/figure&gt;

&lt;figcaption&gt;
&lt;div class="credit"&gt;&lt;/div&gt;

&lt;div class="caption"&gt;With little prompting, OpenAI’s image model can create fraudulent receipts and medical-test results.&lt;/div&gt;
&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;OpenAI’s tool particularly excels at creating fake screenshots. Need to fabricate confirmation of a wire transfer from Chase? A Wells Fargo alert for unusual account activity? A receipt for an Uber ride? Done, done, and done. These images could supercharge all kinds of commonplace scams. A bad actor could email their target an image of a fake Uber receipt alongside a link to report suspicious activity. The recipient, confused to see a receipt for a trip they never took, might then click the fraudster’s sketchy link, accidentally handing over sensitive information in doing so—a classic phishing scam. (Again, there are flaws: For instance, the map depicted in the Uber image is wrong in many ways; among other issues, it suggests a car ride across a body of water where there is no bridge.)&lt;/p&gt;&lt;figure class="left"&gt;&lt;img alt="uber.jpg" data-image-id="1828724" data-orig-h="2400" data-orig-img="img/posts/2026/05/uber/original.jpg" data-orig-w="1920" data-thumb-id="13945700" height="378" src="https://cdn.theatlantic.com/thumbor/sM-nrvyew-rcebzzdinNPtW9ue4=/https://cdn.theatlantic.com/media/img/posts/2026/05/uber/original.jpg" width="302"&gt;
&lt;figcaption class="caption"&gt;&lt;br&gt;
&lt;br&gt;
ChatGPT Images 2.0 especially excels at creating fake screenshots.&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;Image technologies have long aided scammers. In the 1990s, as computerized color copiers and home printers became commonplace, American banknotes were &lt;a href="https://home.treasury.gov/news/press-releases/rr2704" rel="noopener noreferrer nofollow" target="_blank"&gt;redesigned&lt;/a&gt; to ward off counterfeiters. For decades, people have used tools such as Photoshop to manipulate digital imagery. But faking photos has never been so fast and cheap. Last month, the FBI released its annual &lt;a href="https://www.ic3.gov/AnnualReport/Reports/2025_IC3Report.pdf" rel="noopener noreferrer nofollow" target="_blank"&gt;report&lt;/a&gt; on internet crimes, and for the first time ever, it included a section on AI scams, which cost Americans nearly $1 billion last year. Expense-reimbursement fraud—&lt;a href="https://www.ft.com/content/0849f8fe-2674-4eae-a134-587340829a58" rel="noopener noreferrer nofollow" target="_blank"&gt;employees faking receipts&lt;/a&gt;—is already on the rise. A recent OpenAI report details how one set of scammers posing as fake lawyers used an older image model to create a fake bar-association membership card. “The limits of the applications of this technology is really only limited by a fraudster’s imagination,” Mason Wilder, research director at the Association of Certified Fraud Examiners, told me. Google’s image-generation tools also let me make all kinds of fake materials. But when it comes to fraudulent documents and screenshots—at least for now—the new ChatGPT model seems to be better at the task.&lt;/p&gt;&lt;p&gt;In theory, I shouldn’t have been able to make most of these images to begin with. OpenAI prohibits the use of its technology for fraud or scams. 
When I shared several examples with OpenAI and asked why I was able to generate such a diverse array of fraudulent imagery, a company spokesperson told me that OpenAI’s goal “is to give users as much creative freedom as possible” while still enforcing “usage policies.” To guard against misuse, the new model “includes multiple layers of image-specific safety protection.” Clearly, those protections are not working very well. The spokesperson also said that images generated with ChatGPT include certain metadata. But OpenAI has previously &lt;a href="https://help.openai.com/en/articles/8912793-c2pa-in-chatgpt-images" rel="noopener noreferrer nofollow" target="_blank"&gt;noted&lt;/a&gt; that metadata can be “easily removed either accidentally or intentionally”—by uploading an image to social media or simply taking a screenshot.&lt;/p&gt;&lt;figure&gt;&lt;img alt="Chase-diptych.jpg" data-image-id="1828733" data-orig-h="2101" data-orig-img="img/posts/2026/05/Chase_diptych/original.jpg" data-orig-w="4800" data-thumb-id="13945725" height="291" src="https://cdn.theatlantic.com/thumbor/IhkpbjW8z8swD6va32OaxOc2snk=/https://cdn.theatlantic.com/media/img/posts/2026/05/Chase_diptych/original.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;&lt;br&gt;
&lt;br&gt;
OpenAI’s model generated fraudulent financial imagery using bank logos. Certain account information has been redacted from these images.&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;Google has &lt;a href="https://support.google.com/gemini/answer/16625148?hl=en-AT&amp;amp;ref_topic=13278591" rel="noopener noreferrer nofollow" target="_blank"&gt;similar restrictions&lt;/a&gt; against using its tools for fraud. When I sent the company images I made with its models, a spokesperson said that the tools “continually get better” at enforcing guardrails. Google also embeds AI-generated images with an imperceptible watermark, and offers a detection tool called SynthID. In my tests, SynthID was quite effective at identifying images generated with Google’s models. But most people are not going to run every image they see through such a tool.&lt;/p&gt;&lt;p&gt;All of this makes it even harder for banks, hospitals, government agencies, and the like to prevent fraud. Using OpenAI’s model, I was easily able to create a fake Chase Bank check and wire-transfer alert. “We need an ecosystem-wide effort—including from AI companies—to strengthen guardrails and help stop these crimes at the source,” a Chase spokesperson told me, adding that the bank has its own safeguards in place to protect customers. But even if the top AI companies were to radically improve their own guardrails, there would still be the problem of open-source models. Fraud-prevention experts are working on technological fixes, Wilder said, but “the good guys are almost always a step behind.”&lt;/p&gt;&lt;p&gt;So much of the current discourse around deepfakes has focused on the extreme—fabricated political scandals or world events. 
These are very real concerns: Using Google’s and OpenAI’s image models, I was easily able to create highly persuasive screenshots of fake &lt;em&gt;New York Times &lt;/em&gt;and &lt;em&gt;Atlantic &lt;/em&gt;articles.&lt;/p&gt;&lt;figure&gt;&lt;img alt="Atlantic-AI.jpg" data-image-id="1828734" data-orig-h="1255" data-orig-img="img/posts/2026/05/Atlantic_AI/original.jpg" data-orig-w="1920" data-thumb-id="13945726" height="434" src="https://cdn.theatlantic.com/thumbor/Z5-DafRXlWcAi13nxItiCq92-Ys=/https://cdn.theatlantic.com/media/img/posts/2026/05/Atlantic_AI/original.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;&lt;br&gt;
&lt;br&gt;
I uploaded a screenshot of a real &lt;em&gt;Atlantic&lt;/em&gt; article I wrote and instructed the bot to replace it with this fake one.&lt;/figcaption&gt;
&lt;/figure&gt;&lt;figure&gt;&lt;img alt="NYT_AI_Spinach.jpg" data-image-id="1828732" data-orig-h="1255" data-orig-img="img/posts/2026/05/NYT_AI_Spinach/original.jpg" data-orig-w="1920" data-thumb-id="13945724" height="434" src="https://cdn.theatlantic.com/thumbor/B66hptqfS_xgyTUCXJTizfN10ow=/https://cdn.theatlantic.com/media/img/posts/2026/05/NYT_AI_Spinach/original.jpg" width="665"&gt;
&lt;figcaption class="caption"&gt;&lt;br&gt;
&lt;br&gt;
Using ChatGPT, I manipulated a screenshot of &lt;em&gt;The New York Times&lt;/em&gt;’ homepage—replacing a real story with this fake one about spinach. (Without prompting, the bot also swapped in an article about groceries; the rest of the stories are real.)&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;The images convincingly matched the visual layout and typography used by the two publications, filled in coherent text, and generated the names of actual authors. But as fragmented as our media ecosystem may be, a quick Google search is likely to reveal whether such images are fake. It’s the mundane, micro-targeted deepfakes—the ones that scam your relatives, not momentarily confuse social-media feeds—that may be more sinister.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;small&gt;&lt;em&gt;This article originally misstated the number of fake headlines in an AI-edited screenshot of &lt;/em&gt;The New York Times’&lt;em&gt; homepage. The image contains two made-up stories, not one.&lt;/em&gt;&lt;/small&gt;&lt;/p&gt;</content><author><name>Lila Shroff</name><uri>http://www.theatlantic.com/author/lila-shroff/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/muQTwe7Q3BkMLdAuOct_pRSkTjs=/media/img/mt/2026/05/fakeId/original.gif"><media:credit>Illustration by The Atlantic. Sources: Getty.</media:credit></media:content><title type="html">Deepfakes Are Coming for Your Bank Account</title><published>2026-05-02T07:30:00-04:00</published><updated>2026-05-02T17:17:33-04:00</updated><summary type="html">OpenAI made the perfect tool for scammers.</summary><link href="https://www.theatlantic.com/technology/2026/05/chatgpt-images-deepfakes-fraud/687023/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686984</id><content type="html">&lt;p&gt;Elon Musk and Sam Altman are two of the most influential people in Silicon Valley, if not the world. Between the two of them, Musk and Altman run technology companies worth many trillions of dollars that promise to reshape civilization. 
But this morning, both sat under fluorescent lights in a courthouse in downtown Oakland, suffering through all manner of technical glitches as their respective attorneys kicked off the long-awaited trial in &lt;em&gt;Musk v. Altman&lt;/em&gt;.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;As Steven Molo, a lawyer for Musk, began his opening argument, confused looks swept the courtroom. “We can’t hear you,” Judge Yvonne Gonzalez Rogers said. Someone fixed his microphone. Later, as Molo began to call into question Altman’s integrity, his microphone cut out again, and his presentation disappeared from screens in the room. (“We are funded by the federal government,” Gonzalez Rogers joked. “The judiciary is happy to take more funds.”)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Musk is suing Altman and OpenAI, among others, demanding legal and financial remedies that would effectively destroy OpenAI as we know it. The fight stretches back to 2015, when Musk partnered with Altman to create OpenAI out of concern, as they told it, that Google DeepMind could not be trusted to create artificial general intelligence. Corporate greed would get in the way of societal progress, they claimed, so OpenAI would be a nonprofit. After a falling out with Altman and other co-founders, Musk left in 2018. All of this was before OpenAI added a for-profit entity, and before ChatGPT became the fastest-growing consumer app in history. In 2024, Musk sued, alleging that by putting profits above its founding mission, OpenAI had violated its founding charter and misused Musk’s initial charitable donations. “It’s very simple,” Musk testified today. 
“It’s not okay to steal a charity.” Also named in his complaint are the OpenAI co-founder Greg Brockman and Microsoft, a major investor in the company.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Musk is asking that Altman be removed from OpenAI’s board, that the company convert back to a nonprofit, and for the return of allegedly “ill-gotten gains”—some $150 billion—which Musk says would go to OpenAI’s charitable trust. &lt;a href="https://www.wsj.com/tech/elon-musk-is-an-underdog-in-his-180-billion-fight-against-openai-32a74332"&gt;Outside legal experts&lt;/a&gt; say that Musk is unlikely to win all or even much of this. His argument is confusing: OpenAI has certainly evolved from a nonprofit lab to a revenue-chasing, consumer behemoth, and a chorus of critics has alleged that it has deviated from its original mission of ensuring that AGI benefits humanity. But Musk himself appears to have insisted that OpenAI couldn’t keep up as a nonprofit—for instance, in early 2018, he wrote an &lt;a href="https://openai.com/index/openai-elon-musk/"&gt;email&lt;/a&gt; to OpenAI leadership saying that merging the firm with Tesla “is the only path that could even hope to hold a candle to Google.” And even before he sued, Musk launched a rival for-profit company, xAI. “Mr. Musk’s lawsuit is a pageant of hypocrisy,” William Savitt, a lawyer for OpenAI, told the jury today, later adding that Musk had “sour grapes.” (OpenAI, which declined to comment, &lt;a href="https://x.com/OpenAINewsroom/status/2048776645142872368"&gt;wrote&lt;/a&gt; yesterday that the lawsuit is “a baseless and jealous bid to derail a competitor.” Musk’s legal team did not respond to a request for comment.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The substance of these claims is important to the AI industry as a whole. The ramifications of this lawsuit go beyond any company or executive: The conflict between Musk and Altman has itself directly shaped the course of the AI industry. 
It is, in effect, the AI boom’s founding feud. The next few weeks of the trial will illuminate tensions about the development of AI that have grown only more urgent—between profit and social good, and over who can be trusted with this technology.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Already, the pretrial process has produced no shortage of drama. Both sides have published internal communications between Musk and OpenAI leadership. OpenAI shared texts suggesting that Musk had used a former member of OpenAI’s board to keep tabs on the company. (That board member, Shivon Zilis, has multiple children with Musk, and in her deposition said that she is in a romantic relationship with him; asked about Zilis today, Musk said she was “my chief of staff and uh, well, yeah,” smirking.) Musk’s alleged ketamine use during important OpenAI negotiations, which he has said he does not recall, became a key issue until, in a recent pretrial hearing, Gonzalez Rogers deemed this line of inquiry irrelevant.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The trial makes the AI boom seem sordid and small. In his sworn &lt;a href="https://www.courtlistener.com/docket/69013420/379/76/musk-v-altman/"&gt;deposition&lt;/a&gt;, Altman wrote that Musk used to message him complaints that he wanted more credit for the success of OpenAI and took offense at not being included in an anniversary photo. Altman &lt;a href="https://www.youtube.com/watch?v=6VRRg5i8LfA"&gt;has&lt;/a&gt; also said, of Musk and his lawsuit, “Probably his whole life is from a position of insecurity. 
I feel for the guy.” In the courtroom, Altman sat stone-faced next to Brockman and departed right before Musk took to the witness stand.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Musk, for his part, has said that he would drop his lawsuit if OpenAI changed its name to “ClosedAI.” Yesterday, as jury selection began, Musk began furiously posting on X and repeatedly called his co-founder “Scam Altman.” Before the start of opening arguments today, Gonzalez Rogers admonished Musk and Altman for their social-media use, asking them to limit their “propensity” to post about the trial; both meekly assented, “Yes.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Now we are all living in the fallout of Musk and Altman’s vendetta. Disagreements over the direction of Google DeepMind led to the creation of OpenAI, and then more disagreements led Musk to found xAI. Similarly, a few years ago, Dario Amodei and six other OpenAI employees split off to form a competing AI company, Anthropic, themselves trusting neither OpenAI’s structure nor its leadership to prioritize the benefit of humanity over financial gain. And there’s Mark Zuckerberg, whom Musk asked about joining forces to purchase OpenAI in 2025, according to texts released in pretrial discovery. (Meta previously &lt;a href="https://www.engadget.com/big-tech/mark-zuckerberg-offered-to-help-elon-musk-with-doge-in-2025-211737138.html"&gt;declined to comment&lt;/a&gt;.) Zuckerberg has since spent tens or even hundreds of billions of dollars overhauling the AI team at Meta in a bid to catch up in the AI race. The very sort of AI schism that started with Musk and Altman keeps recurring.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;A more cynical description of this dynamic is that the AI boom is shaped by a very small group of men, nearly all of whom claim to be the best steward of humanity while being largely dismissive of their competition. 
At the same time, the goal of creating an organizational structure, whether nonprofit or corporate, to provide a check on a CEO has all but withered away. An independent board was supposed to govern OpenAI, but the company has basically been Altman’s fiefdom—just as Anthropic is Amodei’s and xAI is Musk’s. Grok has at times &lt;a href="https://www.theatlantic.com/technology/archive/2025/07/new-grok-racism-elon-musk/683515/?utm_source=feed"&gt;explicitly aligned&lt;/a&gt; its responses with Musk’s political views by mimicking his social-media posts.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Both sides have made the issue of concentration of power—that no one company or person should control such a transformative technology—central to their arguments. “If you have someone that’s not trustworthy in charge of AI,” Musk testified, “I think that’s very dangerous to the whole world.” The defense, meanwhile, said that “one person having control wasn’t consistent with OpenAI’s core mission.” Apparently, the irony was lost on everyone.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This trial will offer the clearest glimpse into an elite circle whose bickering is shaping the most expensive infrastructure buildout in human history in the name of a technology that could upend the labor market, spell the end of education as we know it, and reshape the geopolitical order. That is, as long as the microphones keep working.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/Tn-bwZHEpUZ-ZuSkxu98HWJ4JLg=/media/img/mt/2026/04/2026_04_27_Musk_Altman/original.jpg"><media:credit>Illustration by The Atlantic. Sources: Krisztian Bocsi / Bloomberg / Getty; Anna Moneymaker / Getty; U.S. 
District Court for the Northern District of California.</media:credit></media:content><title type="html">Sam Altman and Elon Musk Sure Dislike Each Other</title><published>2026-04-28T19:13:00-04:00</published><updated>2026-04-29T16:09:45-04:00</updated><summary type="html">The trial between the CEOs&lt;em&gt; &lt;/em&gt;makes the AI boom seem sordid and small.</summary><link href="https://www.theatlantic.com/technology/2026/04/openai-trial-elon-musk-sam-altman/686984/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686980</id><content type="html">&lt;p&gt;&lt;small&gt;&lt;em&gt;Updated at 11:25 a.m. ET on April 29, 2026&lt;/em&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;&lt;small&gt;&lt;i&gt;Sign up for &lt;/i&gt;&lt;a href="https://www.theatlantic.com/newsletters/sign-up/trumps-return/?utm_source=feed"&gt;&lt;i&gt;Inside the Trump Presidency&lt;/i&gt;&lt;/a&gt;&lt;i&gt;, a newsletter featuring coverage of the second Trump term.&lt;/i&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;Within hours of the gunfire at the White House Correspondents’ Dinner on Saturday night—and initial, &lt;a href="https://archive.ph/fPQHV"&gt;erroneous reports&lt;/a&gt; that the shooter had been killed—the usual swirl of misinformation and rumor was churning in a particular direction. The event was staged, people said.&lt;/p&gt;&lt;p&gt;More than 300,000 posts containing the word &lt;em&gt;staged&lt;/em&gt; were shared on X before midday on Sunday, according to an analysis &lt;a href="https://www.nytimes.com/2026/04/26/technology/white-house-correspondents-dinner-shooting-conspiracy-theories.html"&gt;cited&lt;/a&gt; by &lt;em&gt;The New York Times&lt;/em&gt;. Some of those were probably saying that, actually, the event was &lt;em&gt;not&lt;/em&gt; staged, but still: People with substantial social-media followings (including some &lt;a href="https://x.com/BonerWizard/status/2048477963117826246"&gt;celebrities&lt;/a&gt;) were raising questions. 
They drew attention to a clip of White House Press Secretary Karoline Leavitt from just before the dinner, laughing as she previewed her boss’s speech: “There will be some shots fired tonight in the room.” Others, in the style of pop-music stan accounts, grabbed photos of President Trump and other members of the administration, taken just before the shooting, in which one might find evidence of knowing smirks or other telling body language. Some of these posts were viewed millions of times.&lt;/p&gt;&lt;p&gt;The conspiracy theorists also latched on to a video pulled from Fox News’s live broadcast, in which the reporter Aishah Hasnie, calling from inside the Hilton hotel that hosted the event, told the anchor that she had been speaking with Leavitt’s husband right before the shooting started. “You need to be very safe,” she said he’d told her. “And he was very serious when he said that to me, and he kind of looked around the room and he said there are some—” Then the call dropped. Hasnie &lt;a href="https://x.com/aishahhasnie/status/2048274579043336397"&gt;clarified&lt;/a&gt; in a post on X that cell service had been spotty in the ballroom, but her explanation, delivered at 1:30 in the morning, was not as widely viewed as posts suggesting that Fox had cut her feed before she could reveal what her source had gone on to say. (“There are some … &lt;em&gt;people in here who are going to fake an attempt on the president’s life but with live ammunition&lt;/em&gt;”?)&lt;/p&gt;&lt;p&gt;A potential motive for a staged assassination attempt was quickly floated too. Less than two weeks earlier, a &lt;a href="https://www.nbcnews.com/politics/white-house/judge-halts-construction-trumps-white-house-ballroom-allows-work-under-rcna332202"&gt;federal judge had ruled&lt;/a&gt; that Trump could not justify his plan to build a ballroom by saying it was necessary for security reasons. 
Now he had a perfect counterpoint: “This event would never have happened with the Militarily Top Secret Ballroom currently under construction at the White House,” he &lt;a href="https://x.com/WhiteHouse/status/2048410942422106477?s=20"&gt;posted&lt;/a&gt; on Truth Social, his social-media platform, on Sunday. Some of the last large &lt;a href="https://www.theatlantic.com/technology/archive/2021/08/patriottakes-and-future-resistance-twitter/619645/?utm_source=feed"&gt;#Resistance Twitter accounts&lt;/a&gt; started &lt;a href="https://x.com/MeidasTouch/status/2048347927542984881"&gt;circulating collages&lt;/a&gt; of all the posts from Trump allies who were arguing the same point, in suspiciously similar ways. Yesterday, three GOP senators &lt;a href="https://thehill.com/homenews/senate/5852036-gop-senators-white-house-ballroom-bill/"&gt;pressed again&lt;/a&gt; for funding for the ballroom, and the Justice Department &lt;a href="https://www.nytimes.com/2026/04/28/us/elections/ballroom-filing-trump-truth-social.html"&gt;filed a bizarre motion&lt;/a&gt; backing the project with Trumpian rhetoric (asserting that any opponents must have “TRUMP DERANGEMENT SYNDROME”).&lt;/p&gt;&lt;p&gt;Among the highly online left, some &lt;a href="https://x.com/allenanalysis/status/2048439474829349051?s=20"&gt;stated as fact&lt;/a&gt; that the whole event had been a ploy to get the ballroom. To some MAGA influencers, it was equally clear that Trump’s enemies had been pushing back on the ballroom plans all along, with the intention of causing his death. “The Democrat judges who stopped the construction of a White House ballroom did so to enable an assassination of Trump,” the far-right internet personality Mike Cernovich wrote, apparently in earnest. 
I also saw one person with almost 300,000 followers try to tie the shooting to a recent, &lt;a href="https://www.theatlantic.com/science/2026/04/missing-scientists/686885/?utm_source=feed"&gt;roundly debunked story&lt;/a&gt; about a bunch of scientists who were supposedly mysteriously “missing.”&lt;/p&gt;&lt;p&gt;All of this has echoes of the &lt;a href="https://www.wired.com/video/watch/maga-is-increasingly-convinced-the-trump-assassination-attempt-was-staged"&gt;many conspiracy theories&lt;/a&gt; that surrounded an earlier attempt on Trump’s life in Butler, Pennsylvania, in July 2024. That incident left behind a long trail of speculation and rumor, including a debate over whether the president was lying about the &lt;a href="https://www.nytimes.com/2024/07/26/us/politics/trump-shooter-bullet-trajectory-ear.html"&gt;fact&lt;/a&gt; that a bullet struck his right ear. (Some still post photos of the president and insist that his cartilage appears to be intact.) Then, as now, a contingent of observers claimed that the whole thing had been invented to help Trump—in that case, to make his polling numbers go up, which they didn’t. Now, apparently, the Trump administration was going back to the same playbook. Or maybe Saturday’s attempt was staged and the one in Butler wasn’t? Or vice versa? It was “highly possible” that the Butler shooting had been staged, the author Joyce Carol Oates said in a post on Sunday afternoon, but the previous night’s shooting seemed legit. Later that day, her perception had shifted: “He knew the script,” she wrote, in reference to one Cabinet official who was in attendance at the dinner.&lt;/p&gt;&lt;p&gt;Reached for comment, the White House spokesperson Davis Ingle said in an email, “Anyone who thinks President Trump staged his own assassination attempts is a complete moron.” But how many people fit into this category? 
Do a meaningful number of Americans actually &lt;em&gt;believe&lt;/em&gt; that the president was part of a (successful) plot to fake one or more attempted murders in order to consolidate his power (and build a ballroom)?&lt;/p&gt;&lt;p&gt;Mark Fenster, a professor at the University of Florida’s law school who writes about government transparency and conspiracy theories, told me this would be hard to know. Social media makes conspiracy theories more visible, he said, but may not reflect their actual popularity. Public-opinion polls would provide a better view, but these can fail to capture how committed people are to the positions they claim to hold. “If you ask someone who isn’t particularly well informed or doesn’t care that much but doesn’t like or trust Trump, they might say, &lt;em&gt;Yeah, it’s staged&lt;/em&gt;,” Fenster told me. “That doesn’t mean they’re a conspiracy theorist who really believes it.”&lt;/p&gt;&lt;p&gt;The historian Kathryn Olmsted, who surveyed the history of American paranoia in her 2009 book, &lt;em&gt;Real Enemies: Conspiracy Theories and American Democracy, World War I to 9/11&lt;/em&gt;, told me that prior assassination plots have not all produced the same quantity of disbelief. (As Fenster noted to me, successful ones generally produce more.) In 1975, a time of notable distrust of government and widespread concern about the secret machinations of the state, two attempts were made on Gerald Ford’s life in the space of three weeks. “There was abundant media coverage of both attempts, but I don’t think I’ve seen evidence of anyone thinking he was responsible for the plots himself,” Olmsted said. In 1981, John Hinckley Jr. shot Ronald Reagan outside the same Hilton hotel that hosted Saturday’s dinner, but that incident didn’t produce many conspiracy theories either. 
People seemed to take Hinckley at face value when he said he’d acted to impress the young actor Jodie Foster.&lt;/p&gt;&lt;p&gt;Olmsted also pointed out that political assassinations used to be far more common in America than they are today, and that the Secret Service greatly improved its security measures in the 1980s. Given the frequency of these events in earlier eras, she said, people may have been less inclined to invest any one of them with secret meaning. “I think most Americans just assumed there were plenty of mentally ill people who wanted to kill someone famous.”&lt;/p&gt;&lt;p&gt;But that’s not all that’s different. Trump is different, too. He’s a prolific liar with a well-established love for spectacle, and from the day he entered the political sphere, he has repeated and encouraged conspiracy theories of many stripes. It comes as no surprise that he’s at the center of one.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;small&gt;&lt;em&gt;This article originally stated that Aishah Hasnie had been speaking with President Trump right before the shooting started. In fact, the quote provided was from Karoline Leavitt’s husband.&lt;/em&gt;&lt;/small&gt;&lt;/p&gt;
Sources: Tasos Katopodis / Getty; Mandel Ngan / AFP / Getty; Brendan Smialowski / AFP / Getty.</media:credit></media:content><title type="html">The Ballroom Truthers Have a Theory</title><published>2026-04-28T15:19:09-04:00</published><updated>2026-04-30T11:24:32-04:00</updated><summary type="html">The fake-assassination-attempt conspiracy keeps growing.</summary><link href="https://www.theatlantic.com/technology/2026/04/trump-assassination-staged-conspiracy/686980/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686975</id><content type="html">&lt;p&gt;OpenAI does not like to be left out. The week after Anthropic announced &lt;a href="https://www.theatlantic.com/technology/2026/04/claude-mythos-hacking/686746/?utm_source=feed"&gt;Claude Mythos Preview&lt;/a&gt;—an AI model that has put governments around the world on edge because of its potential ability to hack into banks, energy grids, and military systems—OpenAI shared a program that is uncannily similar. And just like Anthropic did with its model, OpenAI has, for cybersecurity purposes, restricted access to this new bot, called GPT-5.4-Cyber, to a small group of trusted users.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This sequence has become something of a pattern: First Anthropic will make an announcement, and then OpenAI will follow suit. Last year, Anthropic launched Claude Code, an AI coding tool. A couple of months later, OpenAI came out with its own version, Codex. When Claude Code had a breakout moment in January, OpenAI responded with two major updates to Codex alongside a press blitz for the product. And earlier this month, OpenAI released a version of Codex that allows it to use other apps on your desktop—similar to an existing Anthropic tool called Claude Cowork.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Until recently, Anthropic—founded by a group of former OpenAI employees in 2021—played the role of younger brother. 
OpenAI kicked off the entire AI boom with the release of ChatGPT, and has had more users, funding, and name recognition ever since. But Anthropic has been riding high on the explosive popularity of Claude Code and booming sales of its AI models to large corporations. The firm’s &lt;a href="https://www.theatlantic.com/technology/2026/03/dean-ball-anthropic-interview/686226/?utm_source=feed"&gt;showdown&lt;/a&gt; with the Pentagon has also helped vault it into the public eye. In early April, Anthropic said its revenue run rate had hit $30 billion a year—appearing to surpass OpenAI’s.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/04/claude-mythos-hacking/686746/?utm_source=feed"&gt;Read: Claude Mythos Is Everyone’s Problem&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In its public messaging, OpenAI has been indifferent or even somewhat derogatory toward Anthropic. Last week, when OpenAI released its newest model, GPT-5.5, the announcement was paired with direct and veiled references to how it beat out Anthropic’s latest, Claude Opus 4.7. But internally, the firm is seemingly on edge. In a recent leaked company-wide &lt;a href="https://www.theverge.com/ai-artificial-intelligence/911118/openai-memo-cro-ai-competition-anthropic"&gt;memo&lt;/a&gt;, Denise Dresser, OpenAI’s chief revenue officer, felt the need to address one particular competitor: “Here are a few things worth keeping in mind, especially on Anthropic.” The rival firm’s product offerings are narrow, Dresser wrote, and “their story is built on fear,” referencing Anthropic’s loud messaging about the dangers of AI. “Our positive message will win over time.” (OpenAI, which has a business partnership with &lt;em&gt;The Atlantic&lt;/em&gt;, did not respond to a request for comment. 
Anthropic also did not respond to a request for comment.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;If imitation is the sincerest form of flattery, OpenAI’s actions are especially telling. At every turn, OpenAI has appeared eager to copy the success of its rival. For starters, as Anthropic’s explicit focus on mitigating the risks of AI has apparently won the trust of many consumers, OpenAI has imitated many of its rival’s safety initiatives. In early 2026, after Anthropic published a major update to Claude’s “Constitution,” a document that tells the AI model how to behave, OpenAI launched a major campaign around its equivalent document.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But OpenAI’s most important, Anthropic-esque pivot has been in its business model. Early on, these two companies made fundamentally different bets on how they would eventually make money. OpenAI positioned itself as a consumer behemoth, hoping to capitalize on ChatGPT’s hundreds of millions of users. Last fall, the company launched the AI-video app Sora and an AI-powered web browser. OpenAI has made forays into e-commerce and is testing ads in ChatGPT. Every now and then, the company teases the AI device that it is developing with the former Apple designer Jony Ive. Anthropic, meanwhile, has focused on the less flashy goal of selling its AI tools to businesses and software engineers.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Despite OpenAI’s numerous advantages, Anthropic’s focus on code and business customers seems to be winning. Although OpenAI is worth more based on the most recent fundraising rounds, Anthropic now has a &lt;a href="https://www.businessinsider.com/anthropic-trillion-dollar-valuation-on-secondary-markets-2026"&gt;higher valuation&lt;/a&gt;—more than $1 trillion—in some private markets. Anthropic’s explosive growth is particularly important as the two companies both race to go public, in turn accessing a huge pool of new investors, and try to prove they will eventually be profitable. 
(Both companies still have a long way to go in that regard.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;OpenAI is now eager to catch up. In December, OpenAI hired Dresser, a former CEO of Slack, to pursue more business customers. In late January, Altman gathered several major executives for a lavish dinner in San Francisco to preview all of the business offerings his company was planning, &lt;a href="https://www.theinformation.com/articles/openai-aims-lure-businesses-anthropic?rc=bjqnc0"&gt;according&lt;/a&gt; to &lt;em&gt;The Information&lt;/em&gt;. The company has since made a blitz of announcements around coding tools and enterprise AI offerings, including a new set of “Frontier Alliances”: &lt;a href="https://openai.com/index/frontier-alliance-partners/"&gt;partnerships&lt;/a&gt; with several of the world’s premier consulting firms, including McKinsey &amp;amp; Company and Boston Consulting Group, to accelerate enterprise adoption of ChatGPT. In mid-March, &lt;a href="https://www.wsj.com/tech/ai/openai-chatgpt-side-projects-16b3a825?gaa_at=eafs&amp;amp;gaa_n=AWEtsqeNi7KZUpyc0R-CY0zW6U40-SzXhzLWrcn-4IZK0dq8H0FOpXEJv8BT3kT-OwM%3D&amp;amp;gaa_ts=69c40a9a&amp;amp;gaa_sig=2cWQJ6bPBmxZrmG5lOkZGaffyGigTDVFwDGG3rKwKALGs3bmMHcugiEQO1A4k2nWENSFxNkTT0Kj9rjAdG1BmA%3D%3D"&gt;another internal OpenAI memo&lt;/a&gt; reportedly stated that the company needed to eliminate “side quests” and focus on the enterprise and coding markets. Anthropic’s success in those areas, the memo stated, should be a “wake-up call” for OpenAI. The firm also &lt;a href="https://www.theatlantic.com/technology/2026/03/sora-openai-identity-crisis/686544/?utm_source=feed"&gt;scrapped&lt;/a&gt; Sora and has been aggressively advertising and messaging about Codex for months now. 
“I am happy everyone is switching to Codex,” Altman &lt;a href="https://x.com/sama/status/2044921348540264614"&gt;wrote&lt;/a&gt; on X earlier this month.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/03/sora-openai-identity-crisis/686544/?utm_source=feed"&gt;Read: OpenAI is doing everything … poorly&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;OpenAI’s pivot to its enterprise business has not been total. It did, for instance, recently shell out hundreds of millions of dollars, &lt;a href="https://www.ft.com/content/4fe4972a-3d24-45be-b9fa-a429c432b08e?syn-25a6b1a6=1"&gt;according to reports&lt;/a&gt;, to acquire a niche tech podcast. And Anthropic, for its part, has had to take some cues from OpenAI—notably by making big and expensive data-center deals, such as an expansion in its partnership with Amazon Web Services. Anthropic’s CEO, Dario Amodei, has previously &lt;a href="https://www.dwarkesh.com/p/dario-amodei-2"&gt;insinuated&lt;/a&gt; that OpenAI has made such deals “because it sounds cool.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Which company will win the AI race is anybody’s guess. Regardless, OpenAI’s embrace of the Anthropic business model makes one thing abundantly clear: For all the wonder and change that generative AI brings as a technology, there hasn’t been any real innovation in the business models of Silicon Valley. For decades, most tech companies have succeeded by either selling ads (the route of Meta and Google) or selling enterprise tools (like Salesforce and Slack). 
One day OpenAI or Anthropic might cure cancer and remake the world, but for now they still have to pay the bills.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/ESTle4zUzlnj0rNgz-VfYnVaycc=/media/img/mt/2026/04/2026_04_23_AI_mpg/original.gif"><media:credit>Illustration by Matteo Giuseppe Pani / The Atlantic</media:credit></media:content><title type="html">Anthropic’s Little Brother</title><published>2026-04-28T09:00:00-04:00</published><updated>2026-04-28T11:40:53-04:00</updated><summary type="html">OpenAI is racing to catch up to its greatest rival.</summary><link href="https://www.theatlantic.com/technology/2026/04/openai-imitating-anthropic/686975/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686943</id><content type="html">&lt;p class="dropcap" dir="ltr"&gt;A&lt;span class="smallcaps"&gt;I companies are&lt;/span&gt; beginning to entertain the possibility that they could cease to exist. This notion was, until recently, more theoretical: A couple of years ago, an ex-OpenAI employee named Leopold Aschenbrenner wrote a lengthy memo speculating that the U.S. government might soon take control of the industry. By 2026 or 2027, Aschenbrenner wrote, an “obvious question” will be circling through the Pentagon and Congress: Do we need a government-led program for artificial general intelligence—an &lt;a href="https://www.theatlantic.com/technology/2026/02/do-you-feel-agi-yet/685845/?utm_source=feed"&gt;AGI&lt;/a&gt; Manhattan Project? He predicted that Washington would decide to go all in on such an effort.&lt;/p&gt;&lt;p dir="ltr"&gt;Aschenbrenner may have been prescient. 
Earlier this year, at the height of the Pentagon’s ugly &lt;a href="https://www.theatlantic.com/ideas/2026/02/hegseth-anthropic-dispute-ai/686150/?utm_source=feed"&gt;contract dispute with Anthropic&lt;/a&gt;, Secretary of Defense Pete Hegseth warned that he could invoke the Defense Production Act (DPA), a Cold War–era law that he reportedly suggested would allow him to force the AI company to hand over its technology on whatever terms the Pentagon desired. The act is one of numerous levers the Trump administration can pull to direct, or even commandeer, AI companies. And the companies have been giving the administration plenty of reason to consider doing so.&lt;/p&gt;&lt;p dir="ltr"&gt;Future bots could help design and carry out biological, nuclear, and chemical warfare. They could be weaponized to take down power grids, monitor congressional emails, and black out major media outlets. These aren’t purely hypothetical concerns: Earlier this month, Anthropic announced it had developed &lt;a href="https://www.theatlantic.com/technology/2026/04/claude-mythos-hacking/686746/?utm_source=feed"&gt;a new AI model&lt;/a&gt;, Claude Mythos Preview, capable of orchestrating cyberattacks on the level of elite, state-sponsored hacking cells, potentially putting a private company’s cyber offense on par with that of the CIA and NSA. In an example of Mythos’s power, Anthropic researchers described how the model used a “moderately sophisticated multi-step exploit” to work around restrictions and gain broad internet access, then emailed a researcher—much to his surprise—while he was eating a sandwich in the park.&lt;/p&gt;&lt;p dir="ltr"&gt;Washington is getting antsy about the power imbalance. Over the past year, multiple senators have proposed legislation that would order federal agencies to explore “potential nationalization” of AI. 
Murmurs of possible tactics abound—including more talk within the administration of the DPA after Anthropic’s Mythos announcement, one person with knowledge of such discussions told us. Meanwhile, Silicon Valley is watching carefully. In recent weeks, Elon Musk, OpenAI’s CEO Sam Altman, and Palantir’s CEO Alex Karp have publicly spoken about the possibility of nationalization. Lawyers who represent Silicon Valley’s biggest AI firms are paying attention.&lt;/p&gt;&lt;p dir="ltr"&gt;So what if nationalization actually happens?&lt;/p&gt;&lt;p class="dropcap" dir="ltr"&gt;I&lt;span class="smallcaps"&gt;n the most&lt;/span&gt; extreme scenario, top researchers from across the AI companies would be forced to work out of &lt;a href="https://www.theatlantic.com/politics/archive/2015/05/the-rooms-where-congress-keeps-its-secrets/451554/?utm_source=feed"&gt;SCIFs&lt;/a&gt; in the basement of the Pentagon and report to Hegseth. Computational capacity, too, would be centralized under one nationalized mega-operation. The work would be locked down, and the focus would be primarily on defense applications, as opposed to the products made for businesses and individuals—ChatGPT and the like—that dominate the market today.&lt;/p&gt;&lt;p dir="ltr"&gt;All of this would constitute &lt;em&gt;full&lt;/em&gt; nationalization, an absolute takeover of the industry that would hollow out the commercial businesses of its three leading players: OpenAI, Anthropic, and Google DeepMind. 
Judging by a dozen conversations we’ve had with former Pentagon and Trump-administration officials, AI-policy experts, and legal scholars, such a situation is, in all likelihood, not going to happen.&lt;/p&gt;&lt;p dir="ltr"&gt;For starters, it’s probably illegal, according to Charlie Bullock, a senior research fellow at the Institute for Law &amp;amp; AI: The Constitution generally prevents the government from seizing private property without paying, and the government is unlikely to easily produce the trillions of dollars that the industry is collectively worth. The top American AI labs might immediately lose a fair portion of their research staff as well, because of restrictions on which foreign nationals can work on the most crucial defense-related technologies.&lt;/p&gt;&lt;p dir="ltr"&gt;If AI firms were forced to focus primarily on defense applications, there would be the inevitable question of what to do with the massive consumer businesses these companies run. Would people use ChatGPT.gov, like buying a sundae from Cuba’s state-run ice-cream parlor? And if the goal of nationalization is to keep a competitive edge over China, it’s hard to imagine that Hegseth’s Pentagon could run an AI company more efficiently than Altman or Dario Amodei, the CEO of Anthropic.&lt;/p&gt;&lt;p dir="ltr"&gt;But consider another possibility—slightly less extreme, though still capable of remaking the industry as we know it. The government could regulate AI companies like it does utilities. In the early 20th century, as electricity went from a luxury good to a necessity, state and federal governments saw a need to regulate how much energy companies charge and to impose requirements around service reliability. In much the same way, the government could pass new laws regulating AI firms’ commercial activities. 
The companies could be prevented from charging more than it costs to generate images and text, for instance, or required to provide a basic level of model speed and capabilities to all customers, a sort of AI net neutrality.&lt;/p&gt;&lt;p dir="ltr"&gt;A hard pivot to government control would likely entail new state and federal laws, as well as heavy cooperation from tech companies—which, given the nation’s sclerotic politics and Silicon Valley’s libertarian leanings, could pose insurmountable barriers. But the notion is not so far-fetched. Some corners of Silicon Valley itself seem to be at least partially open to it. Altman has described a future in which “intelligence is a utility like electricity or water and people buy it from us on a meter.” Jensen Huang, the CEO of Nvidia, recently said that just as “every country has its electricity, you have your roads, you should have AI as part of your infrastructure.”&lt;/p&gt;&lt;p dir="ltr"&gt;Such talk serves AI companies’ own interests—in part because being classified as a service provider can be, as the era of social media has demonstrated, an excellent way for companies to avoid liability for harmful or inaccurate information on their platforms—but it’s certainly possible that AI could become so entrenched that elected officials come to see it as an essential resource. 
Already, just as the federal government uses regulatory incentives and investment to spur the construction of new power plants and transmission lines, both the Biden and Trump administrations have undertaken initiatives that are essentially industrial policy for AI, using federal dollars and regulatory authority to accelerate the construction of AI infrastructure on American soil.&lt;/p&gt;&lt;p dir="ltr"&gt;OpenAI has already flirted with the notion of a “Right to AI,” suggesting in a recent policy document that the government should consider making a “baseline level of capability broadly available, including through free or low-cost access points.” Similar regulations already govern many aspects of digital communication. “Your internet-service provider, cable, telephone services, these things are considered so essential that the government basically says how the providers” can do business, Dean Ball, a former AI adviser to the Trump administration, told us. AI could be next.&lt;/p&gt;&lt;p dir="ltr"&gt;For years, AI companies have insisted they need to be regulated—but only as &lt;a href="https://www.axios.com/2024/07/25/exclusive-anthropic-weighs-in-on-california-ai-bill"&gt;they&lt;/a&gt; &lt;a href="https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/"&gt;see&lt;/a&gt; fit. Should the federal government ever take AI regulation seriously, the utility route would be among the most aggressive approaches available. But, really, the AI industry would be getting what it asked for.&lt;/p&gt;&lt;figure class="u-block-center"&gt;&lt;img alt=" " height="374" src="https://cdn.theatlantic.com/media/img/posts/2026/04/2026_04_22_AINationalismInline/769a7461e.jpg" width="665"&gt;
&lt;figcaption class="credit"&gt;Illustration by &lt;em&gt;The Atlantic&lt;/em&gt;. Sources: Daniel Heuer / Bloomberg / Getty; Krisztian Bocsi / Bloomberg / Getty; Mark Schiefelbein / AP.&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p class="dropcap" dir="ltr"&gt;B&lt;span class="smallcaps"&gt;efore we get into&lt;/span&gt; other conceivable futures, an important caveat. A full-blown nationalization effort may be unlikely, but that changes if a major global war breaks out or the economy collapses. During an emergency of historical scale, Ball reminded us—especially an emergency under the Trump administration—anything is possible. Drastic measures become easier to justify, both legally and politically.&lt;/p&gt;&lt;p dir="ltr"&gt;Imagine that over the next year President Trump continues his &lt;a href="https://www.theatlantic.com/national-security/2026/01/trump-monroe-doctrine-venezuela/685502/?utm_source=feed"&gt;game of imperialist roulette&lt;/a&gt;: America is further eroding the trust of its international partners, NATO keeps crumbling, and a new geopolitical reality continues to take shape. Say that in the midst of this, China decides to invade Taiwan. The conflict escalates fast, drawing in the U.S. and reluctant allies. The ensuing war is a major one. The Pentagon, already drastically short on munitions after its forays in Iran, wants to apply the latest AI capabilities to its wartime efforts, and Hegseth demands that Anthropic allow the Pentagon unrestricted access to Claude, reigniting the dispute first set in motion earlier this year.&lt;/p&gt;&lt;p dir="ltr"&gt;Because there is active conflict, Anthropic is more willing to engage with the government’s demands than it was previously, but the firm asserts that it requires continuous oversight of how the Pentagon is using Claude. The company fears that in an effort to crack down on espionage, the Defense Department might create monitoring capabilities that surpass even the Chinese Communist Party’s, sliding America into an autocratic AI regime. 
Lest this sound speculative, it’s merely a restatement of Anthropic’s own position: Amodei has &lt;a href="https://www.theatlantic.com/technology/2026/03/anthropic-dod-ai-utopianism/686327/?utm_source=feed"&gt;warned&lt;/a&gt; of a near future where “a powerful AI” scans “billions of conversations from millions of people” to “gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.”&lt;/p&gt;&lt;p dir="ltr"&gt;The spat from earlier this year looks mild by comparison. Amodei remains stubbornly principled despite repeated requests from the Defense Department made under emergency laws. Hegseth responds by sending his troops to descend upon the company’s headquarters in San Francisco. Amodei is forcibly removed and replaced with a deferential Army general. The situation is exceedingly unlikely, but not without precedent: Soldiers once carried the chairman of one of America’s largest retailers out of his Chicago office after he refused to comply with federal demands during World War II.&lt;/p&gt;&lt;p dir="ltr"&gt;Throughout American history, efforts to take control of industry have been rare, and limited mostly to times of crisis: President Woodrow Wilson nationalized the railroads during World War I, and Fannie Mae and Freddie Mac were placed under conservatorship during the financial crisis. Today, there are all kinds of possible emergencies. If a global financial crash leads AI companies to insolvency, the administration might swoop in to provide life support, as it did for many banks and car companies during the Great Recession. On the flip side, should AI models displace large swaths of the labor market, such that a handful of companies run most of the economy, “then some kind of nationalization becomes potentially imperative,” Samuel Hammond, the acting director of AI policy and chief economist at the Foundation for American Innovation, told us—to distribute wealth and simply ensure the proper functioning of society. 
Both Anthropic and OpenAI have already suggested possible versions of such redistributive measures.&lt;/p&gt;&lt;p dir="ltr"&gt;Advances in AI could be their own kind of disrupter: Imagine a Sputnik 2.0 moment where the White House decides that American companies need to consolidate resources if the U.S. wants to win the AI race against China. By exerting more control, America &lt;a href="https://www.theatlantic.com/technology/2026/03/dean-ball-anthropic-interview/686226/?utm_source=feed"&gt;becomes more like China&lt;/a&gt; in the very race to beat it. &lt;/p&gt;&lt;p class="dropcap" dir="ltr"&gt;T&lt;span class="smallcaps"&gt;he thing about&lt;/span&gt; nationalization, though, is that it need not be all or nothing. Nationalization “has layers,” Hammond said. “Like an onion.” Perhaps the most likely fate for American AI companies is a future of &lt;a href="https://www.lesswrong.com/posts/BueeGgwJHt9D5bAsE/soft-nationalization-how-the-usg-will-control-ai-labs"&gt;&lt;em&gt;soft &lt;/em&gt;nationalization&lt;/a&gt;—a world in which the government doesn’t fully control AI labs and their models, but instead enacts an escalating series of policies and establishes close partnerships with private companies to shape the technology.&lt;/p&gt;&lt;p dir="ltr"&gt;By some measures, soft nationalization has already begun. The Trump administration has taken a 10 percent stake in Intel, a major semiconductor manufacturer, providing the White House with (some) direct financial leverage over the company. OpenAI has appointed the retired general and former NSA director Paul Nakasone to its board. Meanwhile, the Army recently established a new detachment for senior tech leaders, and its first four recruits included executives from Meta, Palantir, and OpenAI.&lt;/p&gt;&lt;p dir="ltr"&gt;The top AI companies are coordinating with government officials as their products’ military and intelligence implications advance. 
OpenAI, which scooped up a &lt;a href="https://www.theatlantic.com/technology/2026/03/openai-pentagon-contract-spying/686282/?utm_source=feed"&gt;contract with the Pentagon&lt;/a&gt; after Anthropic’s fell apart, has said it will deploy its own engineers to work alongside the military. The firm has also been briefing governments—at the state, federal, and international levels—on the capabilities of a new OpenAI cybersecurity model. Google is reportedly negotiating its own Pentagon contract to allow Gemini to be used in classified settings. And even Anthropic is coming back around. The company is fighting the Pentagon in court over a “supply-chain risk” designation that Hegseth slapped on it amid their dispute. But after Anthropic announced its Mythos model, a group of tech executives, including Amodei, spoke with Vice President Vance and others to discuss the risks, and Amodei took a trip to the White House. Last week, President Trump said a possible Pentagon deal with Anthropic might still be on the table.&lt;/p&gt;&lt;p dir="ltr"&gt;The White House, OpenAI, and Anthropic all paid lip service to the value of cooperation when we reached out to them. The Trump administration is “working with frontier AI labs to discuss opportunities for collaboration,” a White House official told us. A spokesperson for OpenAI said: “As AI systems become more capable, it is only going to become more important for industry to work with governments.” And an Anthropic spokesperson told us that Amodei’s recent visit to the White House was “productive” and that the firm believes that governments must play a central role in addressing the technology’s national-security implications. (Google DeepMind and the Pentagon did not respond to repeated requests for comment.)&lt;/p&gt;&lt;p dir="ltr"&gt;This campfire ethos could easily fall apart. And clearly, &lt;a href="https://www.washingtonpost.com/technology/2026/04/24/white-house-fires-ai-official-anthropic/"&gt;tensions exist&lt;/a&gt;. 
But so long as it’s in both the AI firms’ and Trump’s interests to please each other, we may see the leading AI companies partnering even more closely with the U.S. military to accelerate the development of defense applications, analogous to what contractors including Palantir, Boeing, and Lockheed Martin have done for years. As a protective measure, the White House might ask AI companies to strengthen their security practices to prevent espionage and exfiltration of the most capable versions of the technology (consider that a handful of unauthorized users have &lt;a href="https://www.bloomberg.com/news/articles/2026-04-21/anthropic-s-mythos-model-is-being-accessed-by-unauthorized-users"&gt;reportedly gained access&lt;/a&gt; to Mythos). The government could even designate certain research as classified and subject technologies to export controls, and federal employees could embed inside the companies to oversee various safety measures and run their own, independent evaluations. Every nuclear power plant in America has at least two on-site government inspectors who check daily to confirm compliance with federal safety requirements. So why not AI companies too?&lt;/p&gt;&lt;p class="dropcap" dir="ltr"&gt;I&lt;span class="smallcaps"&gt;f such partnerships&lt;/span&gt; persist, one could imagine private companies resisting certain government demands. But even without new legislation, the White House can easily exert greater authority over industry. “There’s quite a lot of power that the federal government can wield,” Paul Scharre, an executive at the Center for a New American Security who previously did policy work at the Department of Defense, told us. 
“And even more so if you have an administration that’s willing to stretch the bounds of executive power.” Anthropic’s supply-chain-risk designation—a label that effectively bars the military from doing business with the company, and that is typically reserved for companies with ties to foreign adversaries—was a clear example of the government flexing its muscles. So was the Biden administration’s &lt;a href="https://www.theatlantic.com/international/archive/2022/10/biden-export-control-microchips-china/671848/?utm_source=feed"&gt;decision to block Nvidia&lt;/a&gt; from selling its most advanced AI chips to China in 2022. (The Trump administration has since &lt;a href="https://www.theatlantic.com/economy/2025/12/trumps-china-ai-chips/685235/?utm_source=feed"&gt;relaxed restrictions&lt;/a&gt;, claiming that selling to China was the better strategy for winning the AI race.)&lt;/p&gt;&lt;p dir="ltr"&gt;One of the most salient tools available remains the Defense Production Act, the law that Hegseth threatened Anthropic with before pursuing the supply-chain-risk designation. The act has been used over the decades to support the manufacture of military equipment such as bombers and tanks, though in recent years, it has been used more expansively. Both the first Trump and the Biden administrations used it to accelerate pandemic safety measures, and Biden relied on the law in a since-repealed executive order to compel AI companies to share certain information about model training and evaluations with the government. Last week, Trump invoked the act to fund new energy projects. 
Actually pursuing the DPA as a general tool for controlling AI companies would raise a host of &lt;a href="https://cset.georgetown.edu/publication/a-dpa-for-the-21st-century/"&gt;thorny legal issues&lt;/a&gt;, but that hasn’t exactly stopped the Trump administration in the past.&lt;/p&gt;&lt;p dir="ltr"&gt;Such reins on an industry that has billed itself as capable of extinguishing humankind are, theoretically, in everyone’s interest. It would seem to clearly benefit the American people to have democratically elected institutions—rather than corporate executives—overseeing a set of technologies with huge implications for the nation’s security and well-being. It’s also historically anomalous for a private industry to dictate the deployment of such a powerful, general-purpose technology. With the announcement of Mythos, Anthropic has been effectively functioning as a geopolitical actor, briefing ally governments on the model’s capabilities. The European Commission, for instance, has met with Anthropic thrice since Mythos was announced—although as of Wednesday, the company had not yet given European Union officials access.&lt;/p&gt;&lt;p dir="ltr"&gt;The government &lt;em&gt;should&lt;/em&gt; play a role in dictating the terms of how AI transforms the world. But the ongoing fracturing of American politics, and especially the capricious and authoritarian-leaning tendencies of the current administration, complicates everything. 
Entrusting the future of generative AI entirely to Altman and Amodei or Trump and Hegseth seems like two very different and similarly disastrous outcomes—a “Scylla and Charybdis” dynamic, as Bullock put it, between the tremendous concentration of power in government or in a small cadre of companies.&lt;/p&gt;&lt;p&gt;The impossible truth is that no private company should be trusted to unilaterally steer the future of AI development, but Americans should also have serious questions about whether government control is in their best interest—not least of all under an erratic and norm-shattering Trump administration. The Manhattan Project coordinated the efforts of scientists, private companies, and America’s leaders. What if, instead, Boeing and DuPont had been racing against each other to develop the atomic bomb while Hegseth and Trump led the military? We are diving headfirst into the 21st-century equivalent of such a situation. Our political dysfunction is about to ram into Silicon Valley’s immeasurable power.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><author><name>Lila Shroff</name><uri>http://www.theatlantic.com/author/lila-shroff/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/llIpOw8BoENGtf1cjaRuvQpS7BM=/media/img/mt/2026/04/2026_04_18_AI_horizontal-1/original.jpg"><media:credit>Illustration by Alisa Gao / The Atlantic. 
Source: Getty.</media:credit></media:content><title type="html">What Happens if Trump Seizes AI Companies</title><published>2026-04-27T07:00:00-04:00</published><updated>2026-04-28T11:57:30-04:00</updated><summary type="html">The administration could exert much greater control over the industry—but just how far would it go?</summary><link href="https://www.theatlantic.com/technology/2026/04/ai-nationalization-trump-hegseth-anthropic-openai/686943/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686911</id><content type="html">&lt;p bis_size='{"x":172,"y":19,"w":665,"h":132,"abs_x":204,"abs_y":2220}'&gt;In March, I put my iPhone into a yellow cardboard box with &lt;span class="smallcaps"&gt;MO&lt;/span&gt; stamped on top—the &lt;em bis_size='{"x":254,"y":57,"w":19,"h":22,"abs_x":286,"abs_y":2258}'&gt;M&lt;/em&gt; looked like a riff on the Motorola logo; the &lt;em bis_size='{"x":648,"y":57,"w":16,"h":22,"abs_x":680,"abs_y":2258}'&gt;O&lt;/em&gt; looked like a flower. Over the next several weeks, I left my phone there for roughly 23.5 hours out of every day.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":181,"w":665,"h":231,"abs_x":204,"abs_y":2382}'&gt;I did so as a participant in “Month Offline,” which started last year in Washington, D.C., as a kind of Dry January challenge, but for smartphones. Now it is a fledgling business with a footprint in New York City. Members of each monthlong “cohort” pay $75 for the experience, during which they swap their iPhones for a lower-tech device and participate in weekly meetups. 
I joined the cohort that began on March 2 and received an email just before the first meeting: “Excited 2 see u soon,” it said.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":442,"w":665,"h":198,"abs_x":204,"abs_y":2643}'&gt;My month offline began with the MO pledge—a document with curious capitalization that declared us all “Free and Independent Human Beings” who were “Absolved from all dependence on big tech and their attention-grabbing algorithms.” By signing at the bottom, I agreed to “forego” the use of my smartphone for 30 days and thereby “trade dopamine for daylight, doomscrolls for detours, pixels for paper maps.”&lt;/p&gt;&lt;p bis_size='{"x":172,"y":670,"w":665,"h":396,"abs_x":204,"abs_y":2871}'&gt;The other members of my cohort, who would meet on Monday nights in a still-semi-industrial corner of Brooklyn’s Bushwick neighborhood (near a soup factory), were mostly women, mostly in their late 20s or early 30s. They had heard about Month Offline from a friend, or they had seen a wheat-paste flyer (Flip Off!) on the street, or, in at least one case, they had come across a post about MO on the party-planning app Partiful, which is where this person did their scrolling after having deleted all other forms of social media from their phone. Several people in our group had full-time jobs in technology, and nobody I spoke with considered themselves to be “anti-tech.” But they all felt like smartphone use was costing them hours of free time every day, access to stores of creativity, and opportunities for adventure and friendship in the great city of New York.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":1096,"w":665,"h":528,"abs_x":204,"abs_y":3297}'&gt;One salve for these anxieties could be a different kind of phone. 
Month Offline has spun off a tiny start-up, dumb.co, that &lt;a bis_size='{"x":602,"y":1134,"w":226,"h":22,"abs_x":634,"abs_y":3335}' href="https://dumb.co/"&gt;sells the sort of flip phones&lt;/a&gt; that you might want to use when your iPhone has been hidden in a cardboard box. Their design is more than just a relic from the aughts. It’s a relic from the aughts that has been kitted out with a custom operating system designed by a former &lt;em bis_size='{"x":235,"y":1266,"w":132,"h":22,"abs_x":267,"abs_y":3467}'&gt;Washington Post&lt;/em&gt; software engineer named Jack Nugent. You can pair a dumb.co flip phone with your smartphone through an app called Dumb Down, such that your normal calls and text messages are forwarded to your dumb.co number. (Many of the numbers in my cohort had the Atlanta area code 404, as a joke about going offline.) Nugent’s system also comes with scaled-down versions of Uber, WhatsApp, Google Maps, and Microsoft Authenticator. “Before this device, a lot of people would say something like, &lt;em bis_size='{"x":172,"y":1464,"w":656,"h":55,"abs_x":204,"abs_y":3665}'&gt;I wish I could use a dumbphone, but I need X&lt;/em&gt;, &lt;em bis_size='{"x":533,"y":1497,"w":12,"h":22,"abs_x":565,"abs_y":3698}'&gt;Y&lt;/em&gt;, &lt;em bis_size='{"x":551,"y":1497,"w":13,"h":22,"abs_x":583,"abs_y":3698}'&gt;Z&lt;/em&gt;,” he told me. So he started adding &lt;em bis_size='{"x":235,"y":1530,"w":12,"h":22,"abs_x":267,"abs_y":3731}'&gt;X&lt;/em&gt; and &lt;em bis_size='{"x":291,"y":1530,"w":12,"h":22,"abs_x":323,"abs_y":3731}'&gt;Y&lt;/em&gt; and &lt;em bis_size='{"x":346,"y":1530,"w":13,"h":22,"abs_x":378,"abs_y":3731}'&gt;Z&lt;/em&gt;. The next version of the flip phone will allow for music streaming and include the retro phone game Snake. 
Nugent said he drew a hard line at email, though—the dumb.co flip phone will never have email.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":1654,"w":665,"h":264,"abs_x":204,"abs_y":3855}'&gt;For several weeks I took my dumbphone everywhere I went, and for several weeks strangers asked about it. Even people who did not seem like they would hang out in semi-industrial Bushwick were intrigued. One evening in Lower Manhattan, a polished-looking man who had just been talking with someone else about his job in finance turned and saw my flip phone sitting on the bar. His face lit up. He wanted to know where I’d gotten it, and said that he’d been thinking about getting one too. A spirit of dumbphone curiosity seemed to be all around me.&lt;/p&gt;&lt;hr bis_size='{"x":464,"y":1966,"w":80,"h":0,"abs_x":496,"abs_y":4167}' class="c-section-divider"&gt;&lt;p bis_size='{"x":172,"y":2014,"w":665,"h":165,"abs_x":204,"abs_y":4215}'&gt;Clearly, one of the flip phone’s thrills is that it flips. It flips, and the feeling of its flipping is neat and familiar. For people of my cohort’s age (and mine), it’s a reminder of our first phones, which were amazing devices that conferred agency, independence, and the possibility of receiving secret messages from a crush. It’s nice to have a flip phone again.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":2209,"w":665,"h":297,"abs_x":204,"abs_y":4410}'&gt;Month Offline leans into this feeling of nostalgia. At my second weekly meeting, my fellow travelers and I had the thrill of our lives decorating our new flip phones with stickers, just as we might have done in 2007. I added one baseball sticker to the front of my phone and one to the back, but some others created intricate patterns with rhinestones. The get-togethers were heavy on crafts; we often expressed ourselves through crayon. 
At the end of each meeting, we received a gift to help us get through the next week in an ever more analog fashion—a disposable camera, a book of crossword puzzles, a compass on a carabiner.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":2536,"w":665,"h":231,"abs_x":204,"abs_y":4737}'&gt;A key concept, discussed every week, was that of “friction”—or the specific discomfort we were feeling whenever we ran up against our reliance on our boxed-up smartphones. One week, we used the crayons to draw a “moment of friction,” and most people drew themselves getting lost. The flip phone’s tiny version of Google Maps is hard to use, and some people were trying not to use it at all, preferring to navigate the city as their parents and grandparents once did, going only by their memory and directions from strangers.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":2797,"w":665,"h":429,"abs_x":204,"abs_y":4998}'&gt;I embraced the frictions of my month offline, except for when they made me extremely annoyed. Once, I settled down at a coffee shop to do some work and realized I was locked out of my computer; I had to call my fiancé and ask him to bring my iPhone to me so that I could two-factor in. (My job requires a specific authenticator app that is not available for dumbphones.) I stewed while I waited. A couple of days before, I’d missed a text from my sister telling our family that she’d gotten into a medical residency. (Group chats sometimes glitched on my flip phone; other people in my cohort also reported having scattered problems with text-forwarding.) 
And because I had not received that text, or any of my family’s responses to the biggest news of my sibling’s life, my contribution to the chat was to blithely inform everyone a few minutes later that Seiya Suzuki would not be a good draft pick for our family’s fantasy-baseball league, because he’d injured his knee in the World Baseball Classic.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":3256,"w":665,"h":396,"abs_x":204,"abs_y":5457}'&gt;At times like these, I felt as though this experiment in freeing myself was doing just the opposite. After all, I was paying for a second phone plan on top of the one I had for my iPhone—&lt;a bis_size='{"x":491,"y":3328,"w":75,"h":22,"abs_x":523,"abs_y":5529}' href="http://dumb.co"&gt;dumb.co&lt;/a&gt; service costs $25 a month for Month Offline participants—and then all this other annoying stuff was happening to me too. But the Month Offline program has a protocol for such moments of weakness: Between meetings, we were encouraged to text or call a couple of assigned “Flipmates,” who were similar to Alcoholics Anonymous sponsors, and also to leave voicemails in a centralized mailbox for the group called the “Dumbphone Diary.” The diary entries, which we sometimes listened to together at meetings, were brief, palpably sincere stories of the teller’s struggles without a smartphone, or else their pride at having reconnected with art, nature, their friends, and their own mind.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":3682,"w":665,"h":363,"abs_x":204,"abs_y":5883}'&gt;Our group had three facilitators who would lead each week’s activities and offer guidance. One of them, Lydia Peabody, explained that she had left her job as a therapist while participating in a previous Month Offline. The experiment had been a revelation, she told me. A few days into using the flip phone, she’d noticed that her mood was worsening. 
“I was like, &lt;em bis_size='{"x":172,"y":3820,"w":656,"h":55,"abs_x":204,"abs_y":6021}'&gt;Holy shit, why do I feel so awful?&lt;/em&gt;” Eventually, she deduced that her mindless smartphone scrolling had been a way to distract herself from her unhappiness. Without that option, she was forced to face reality. So she quit, and shortly after that she went to a Grateful Dead–cover–band show with the CEO of dumb.co, who hired her to run Month Offline because of her experience leading group therapy.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":4075,"w":665,"h":231,"abs_x":204,"abs_y":6276}'&gt;Her expertise has certainly been germane. Though the meetups weren’t set up to &lt;em bis_size='{"x":194,"y":4114,"w":23,"h":22,"abs_x":226,"abs_y":6315}'&gt;be &lt;/em&gt;group therapy, people seemed to want to talk (and talk, and talk) about the ways their lives had changed without smartphones, and the discussions sometimes took on a therapeutic tone. I found this all a bit grating and repetitive, but as the month went on, I began to see the same results as everyone else. I read more, talked with strangers more, worried less, and forgot about Instagram almost entirely. I felt worse, and then I felt better.&lt;/p&gt;&lt;hr bis_size='{"x":464,"y":4354,"w":80,"h":0,"abs_x":496,"abs_y":6555}' class="c-section-divider"&gt;&lt;p bis_size='{"x":172,"y":4403,"w":665,"h":198,"abs_x":204,"abs_y":6604}'&gt;At the final meetup for my month offline, we participated in a graduation ceremony, complete with Vitamin C’s “Graduation (Friends Forever)” playing on a portable speaker. Cards on which we had written our average daily smartphone screen time at the beginning of the month were redistributed, and we wrote down our new totals. 
Mine went from nearly 4 hours to 19 minutes.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":4631,"w":665,"h":231,"abs_x":204,"abs_y":6832}'&gt;Peabody asked if there was anyone in the room who had not touched their smartphone at all for the whole month, and two people raised their hands. The rest of us&lt;em bis_size='{"x":285,"y":4702,"w":33,"h":22,"abs_x":317,"abs_y":6903}'&gt; ooh&lt;/em&gt;-ed and clapped. I left with a feeling of genuine camaraderie. I also left having turned over my credit-card information to sign up for another month of dumb.co’s dumbphone-service plan. My experiment was over, but I wasn’t ready to give up on my little flip (which I’d started calling “my little flip”).&lt;/p&gt;&lt;p bis_size='{"x":172,"y":4892,"w":665,"h":24,"abs_x":204,"abs_y":7093}' data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a bis_size='{"x":172,"y":4894,"w":249,"h":19,"abs_x":204,"abs_y":7095}' href="https://www.theatlantic.com/technology/2025/11/smartwatch-kids-screen-time/684975/?utm_source=feed"&gt;Read: Get your kid a watch&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p bis_size='{"x":172,"y":4946,"w":665,"h":165,"abs_x":204,"abs_y":7147}'&gt;The following week, our cohort came back together for a show of the creative projects we’d made with all of our offline free time. Those without artistic talents were encouraged to interpret the prompt liberally, and so one Month Offliner presented cookies she’d made from a favorite recipe, and another just sat at a table with a simple crossword puzzle she’d made.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":5141,"w":665,"h":264,"abs_x":204,"abs_y":7342}'&gt;At least 100 people came out for the event. Some were friends of Month Offliners who were there solely out of the goodness of their hearts. 
When I asked one such woman what her level of interest was in participating in a Month Offline herself, she said it was “medium to mild.” Other attendees were part of the city’s broader, burgeoning subculture of “&lt;a bis_size='{"x":659,"y":5278,"w":152,"h":22,"abs_x":691,"abs_y":7479}' href="https://www.theatlantic.com/technology/archive/2024/12/strother-school-radical-attention/680830/?utm_source=feed"&gt;attention activism&lt;/a&gt;.” I ran into Dan Fox, who works for the minimalist phone company Light, as well as Nick Plante, a community organizer who is one of the scene’s best-known voices.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":5435,"w":665,"h":297,"abs_x":204,"abs_y":7636}'&gt;In his writing, Plante can come off as a zealot. He recently described social-media platforms as “prisons of the mind” and speculated that we may one day “see these companies burn and smolder.” But when I spoke with him by phone after the Offline art show, he presented his stance in less fiery terms. Phone-free parties and club nights are already taking off, he said, and he guessed that New York will soon have an assortment of phone-free bars, restaurants, and co-working spaces. A culture shift away from smartphones is already under way, he said. “They’re perceived as being so central to our society right now,” he said. But what if they weren’t?&lt;/p&gt;&lt;p bis_size='{"x":172,"y":5762,"w":665,"h":363,"abs_x":204,"abs_y":7963}'&gt;My cohort mate Alana Kupke, a 30-year-old freelance stylist, had been thinking along the same lines. She’d signed up for the group because she works in the fashion industry and has felt obligated to be online all the time, just to keep her finger on the pulse. She’d been wondering whether she could do the same just by observing her physical surroundings and talking with people. At first, when friends saw her on the flip phone, they would freak out, she said. They would say “Oh my God” and swear that they could never get through the day with such a thing. 
“It kind of is a problem if people are scared to not have iPhones,” Kupke told me. By the end of the month, though, she’d persuaded her roommate, several of her friends, and four people she’d met during gigs to make the switch to flip phones.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":6155,"w":665,"h":24,"abs_x":204,"abs_y":8356}' data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a bis_size='{"x":172,"y":6157,"w":374,"h":19,"abs_x":204,"abs_y":8358}' href="https://www.theatlantic.com/ideas/archive/2025/10/dumphone-smartphone-technology-apps/684492/?utm_source=feed"&gt;Read: Can Gen Z get rid of its iPhones?&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p bis_size='{"x":172,"y":6209,"w":665,"h":264,"abs_x":204,"abs_y":8410}'&gt;Jenine Marquez, 26, another member of my cohort, told me that ever since our month offline she keeps her iPhone in a zippered pocket in her bag, where she can still reach it for emergencies and to answer video calls from her dad. Also, she can feel it buzzing if she gets a bunch of Microsoft Teams messages. Otherwise she doesn’t touch it. Kupke said she’s been switching out her iPhone for the dumbphone whenever she goes out with friends, so she can be more present. As for me, after signing up for another month of service, I haven’t picked up my little flip even once.&lt;/p&gt;&lt;p bis_size='{"x":172,"y":6503,"w":665,"h":330,"abs_x":204,"abs_y":8704}'&gt;A few days after the art show, when I met Peabody for tea, she acknowledged that not everyone who goes to Month Offline continues with the flip phones. Some treat the month like a detox. Others want to stay offline but struggle to stay on the wagon. “When you stop going to something each week that holds you accountable, it becomes harder for anybody to face this alone,” Peabody said. She encourages people not to think about it as all-or-nothing. Her iPhone usually stays at her desk in her apartment, plugged in like a computer, she said. But she’ll take it out to use it for something specific. 
“I don’t make my life a living hell trying to use only this,” she said, holding up her flip phone. “I use it most of the time because I feel better.”&lt;/p&gt;&lt;p bis_size='{"x":172,"y":6863,"w":665,"h":264,"abs_x":204,"abs_y":9064}'&gt;Only a few hundred people have participated in Month Offline so far, and participation may be limited to those whose lifestyles allow for voluntary inconvenience. But Peabody said she thinks the early flip-phone readopters will create a snowball effect. Each one will normalize the dumbphone’s use a little more, even if it’s just within their social circle or in the bars and coffee shops through which they pass. “Most people can do this, or a lot of people can do it,” she said. “If you and I meet in a year, we’ll be having different conversations.”&lt;/p&gt;</content><author><name>Kaitlyn Tiffany</name><uri>http://www.theatlantic.com/author/kaitlyn-tiffany/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/yif3L-ACzv-2xCbduQAwRnLJKM4=/0x234:4500x2765/media/img/mt/2026/04/TiffanyNoPhones/original.jpg"><media:credit>Illustration by Paul Spella / The Atlantic. Sources: Edward Phillips / Alamy; Shutterstock.</media:credit></media:content><title type="html">The Flip-Phone Cleanse</title><published>2026-04-23T12:12:41-04:00</published><updated>2026-04-27T13:43:28-04:00</updated><summary type="html">I spent a month with a group of people who aspire to a state of offline bliss.</summary><link href="https://www.theatlantic.com/technology/2026/04/month-offline-smartphone-detox/686911/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686877</id><content type="html">&lt;p&gt;If Elon Musk gets his way, space will soon look very different. Through his ownership of SpaceX, the world’s richest man already operates most of the roughly 14,000 active satellites that are orbiting Earth. 
Now his rocket company is asking the government for permission to launch up to 1 million more. It’s part of Musk’s plan to build data centers in space that can harness the power of the sun for AI. “You’re power-constrained on Earth,” Musk said last month. “Space has the advantage that it’s always sunny.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Musk has a lot riding on these orbital data centers. To help finance them, he is set to take SpaceX public as early as June, at a reported valuation of $2 trillion. Musk has claimed that data centers in space can “enable self-growing bases on the moon, an entire civilization on Mars, and ultimately expansion to the universe.” It’s all classic Musk, who has a habit of making big promises that he can’t always keep. Data centers in space &lt;a href="https://www.technologyreview.com/2026/04/03/1135073/four-things-wed-need-to-put-data-centers-in-space/"&gt;are an untested technology&lt;/a&gt;, and it’s not clear if they’d actually work. (Neither Musk nor SpaceX responded to a request for comment.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Even if Musk falls short of his lofty space dreams, his venture may still pay him considerable dividends. That’s because it could help him secure regulatory approval to accelerate a land grab in space. There are only so many satellites that can circle Earth’s low orbit before the risk of collision becomes unacceptably high. By flooding space with his own satellites, Musk can make it impossible for other companies to gain entry while dramatically expanding one of the most important and valuable parts of his empire: Starlink.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The world’s largest satellite-internet provider, Starlink already boasts more than 10 million active customers in at least 150 countries. Subscribers set up a flat antenna that looks a bit like a pizza box to connect their devices to the internet anywhere they are in the world. 
(Even if you aren’t someone who pays for Starlink, you might have used the service without knowing it. The company’s satellites now power in-plane Wi-Fi for several airlines, including United Airlines and Qatar Airways.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Musk’s control over Starlink has vested him with a degree of power traditionally reserved for a head of state. He has &lt;a href="https://www.theatlantic.com/national-security/2026/02/elon-musk-ukraine-russia-starlink/686155/?utm_source=feed"&gt;restricted access&lt;/a&gt; for both Ukrainian and Russian forces at various points during the ongoing conflict between the two countries, potentially &lt;a href="https://www.newyorker.com/magazine/2023/08/28/elon-musks-shadow-rule"&gt;altering the course of the war&lt;/a&gt;. In other cases, he has made Starlink service free—such as in Venezuela after the U.S. raid and capture of Nicolás Maduro, in January.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/national-security/2026/02/elon-musk-ukraine-russia-starlink/686155/?utm_source=feed"&gt;Read: Elon Musk moves against the Russians in Ukraine&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The new frontier for Starlink is delivering satellite connectivity directly to people’s smartphones without specialized hardware. In other words, no more pizza boxes. Musk already provides this service through partnerships with more than a dozen mobile carriers to serve “dead zones” beyond the range of cell towers, but the bandwidth is limited. 
T-Mobile’s Starlink partnership, T-Satellite, allows customers to use Musk’s satellite internet for messaging, location sharing, and low-speed data for a &lt;a href="https://www.t-mobile.com/support/coverage/satellite-support#apps"&gt;handful&lt;/a&gt; of apps.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Musk wants to go bigger, possibly even operating Starlink as its own stand-alone mobile carrier. “You should be able to have a Starlink—like you have an AT&amp;amp;T or a T-Mobile or a Verizon or whatever,” he said last September. Unlike traditional mobile carriers, Starlink could operate on any cellphone anywhere in the world, due to the reach of its satellites. Imagine a future in which Musk owns not only a major social network, but a large chunk of the infrastructure through which the world’s information flows. To pull that off, he will need more satellites. Musk has already said that the ones that he’s looking to send to space for data centers are essentially souped-up versions of Starlink’s next-generation satellite, set to launch later this year, which promise to increase mobile speeds by more than 3,000 percent.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Starlink isn’t the only company trying to ramp up satellite-to-smartphone service. The prospect of offering high-speed connectivity anywhere in the world is tantalizing enough to justify major capital investment. Last week, Amazon bought the satellite company GlobalStar for more than $11 billion in one of its largest-ever acquisitions. As part of the announcement of the deal, Amazon also &lt;a href="https://arstechnica.com/tech-policy/2026/04/amazon-to-merge-with-globalstar-become-iphones-primary-satellite-provider/"&gt;struck an agreement with Apple&lt;/a&gt; to operate the satellite internet on iPhones and Apple Watches. 
These moves position Amazon as Starlink’s leading competitor—and make it all the more urgent for Musk to launch as many satellites as possible, locking up the sky before anyone else can gain a foothold.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;If Musk makes good on his vision to create his own Starlink mobile carrier, he will accrue &lt;a href="https://bookshop.org/p/books/muskism-a-guide-for-the-perplexed-ben-tarnoff/3d177fb9349a79ff?ean=9780063484320&amp;amp;next=t&amp;amp;next=t&amp;amp;affiliate=12476"&gt;more power than ever before&lt;/a&gt;. Not only would Musk have the capacity to cut or enable service as desired, he would also have a greater ability to push people onto more of his own products and platforms. A relatively obscure technique called “zero-rating” allows telecom providers to let users visit certain websites without having it count toward their data caps. Free Basics, for instance, is a program initiated more than a decade ago by Facebook in which the company partners with local mobile carriers in developing countries to provide free access to Meta’s family of apps. This allows poorer users to still surf the web, but at the cost of locking them into Meta’s walled garden.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Starlink has already experimented with this approach. The select collection of apps that can be used through T-Satellite include both X and Grok, but not competitors such as Instagram and ChatGPT. Musk could go further by letting Starlink subscribers use X and Grok for free. Particularly in low-income countries, this subsidy would be a major inducement to using those services. And considering the breadth of Musk’s empire, there are endless opportunities for cross-promotion. He could make Starlink’s mobile service a free perk for Tesla drivers, X Premium members, and xAI customers. For now, all of this is a hypothetical—but it is not far-fetched. 
Although 1 million satellites is the headline-grabbing number, these pursuits can happen below that ceiling. As is so often the case, Musk promises Mars but satisfies investors with low Earth orbit.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Starlink could also be the logical next step in Musk’s campaign against what he calls the “woke mind virus.” Take his treatment of Twitter. Since purchasing the social-media site in 2022 and renaming it X, Musk has turned it into a megaphone for his political viewpoints. He has restored hundreds of banned far-right accounts, eliminated virtually all content-moderation rules, and tweaked the algorithm to promote accounts that align with his politics. Musk attempts to further reinforce his worldview through Grok, the proudly politically incorrect chatbot, and now &lt;a href="https://www.theatlantic.com/technology/2025/10/grokipedia-elon-musk/684730/?utm_source=feed"&gt;Grokipedia&lt;/a&gt;, his competitor to Wikipedia.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2025/10/grokipedia-elon-musk/684730/?utm_source=feed"&gt;Read: What Elon Musk’s version of Wikipedia thinks about Hitler, Putin, and apartheid&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p class="c-recirculation-link" data-id="injected-recirculation-link"&gt;&lt;/p&gt;&lt;p&gt;While Musk has never had any problem winning investor confidence, he has sometimes stumbled at winning broad-based popularity. A common reflex is to blame the messengers: As he &lt;a href="https://www.newyorker.com/news/fault-lines/elon-musks-vanishing-act"&gt;told&lt;/a&gt; CNBC last spring, “What I’ve learned is that legacy-media propaganda is very effective at making you believe things that aren’t true.” Launching even more satellites into space presents the opportunity to close the loop and cut out the “legacy media” altogether. The logic of Musk’s empire is total. X shapes the discourse. Grok automates it. 
Grokipedia rewrites the historical record. Starlink can deliver it all, everywhere, to everyone. Each layer reinforces the others. It’s not about winning arguments in the public sphere. It’s about building a replacement. If Musk gets his way, the echo chamber of tomorrow will reach to space and back.&lt;/p&gt;</content><author><name>Quinn Slobodian</name><uri>http://www.theatlantic.com/author/quinn-slobodian/?utm_source=feed</uri></author><author><name>Ben Tarnoff</name><uri>http://www.theatlantic.com/author/ben-tarnoff/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/-AzACgaucYH3WbXXY_3X6tXmSBw=/media/img/mt/2026/04/2026_04_01_Musk/original.jpg"><media:credit>Illustration by Lucy Naland. Sources: Harun Ozalp / Anadolu / Getty; Getty.</media:credit></media:content><title type="html">Elon Musk’s SpaceX Endgame</title><published>2026-04-21T10:44:00-04:00</published><updated>2026-04-26T00:08:47-04:00</updated><summary type="html">The world’s richest man is accruing more power than ever before.</summary><link href="https://www.theatlantic.com/technology/2026/04/elon-musk-starlink-satellites/686877/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:39-686586</id><content type="html">&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;John Mark Comer&lt;/span&gt; can be a hard man to find. He’s one of the most famous pastors in America right now, an author whose books have together sold more than 1 million copies, but he’s not the most reachable guy. He has a professional website but no contact page. He rarely travels. And as I reported this story, I began to learn his habits: Sending him a text early in the day was a wash, for instance, because he doesn’t check his phone until after morning prayer time. 
Once, when I reached out by email, I got an out-of-office response that he had set before Christmas explaining that he was observing “rhythms of rest” and asking that I try him again after his return in mid-January. Incoming messages sent in the meantime would be deleted.&lt;/p&gt;&lt;aside class="callout-placeholder" data-source="magazine-issue"&gt;&lt;/aside&gt;&lt;p&gt;I had first seen Comer in October, at a service for Church of the City New York, held inside a historic chapel in Lower Manhattan. Lo-fi beats played over the speakers as hundreds of people, mostly in their 20s and 30s, milled around and looked for seats in the crammed pews. When Comer took the stage, dressed in a matching ochre shirt-jacket and pants, a silver stud in his left ear, the crowd cheered and whooped.&lt;/p&gt;&lt;p&gt;He pulled up a slide. It was not the usual Bible story or psalm, but an excerpt from Anne Helen Petersen’s 2019 &lt;em&gt;BuzzFeed&lt;/em&gt; essay “&lt;a href="https://www.buzzfeednews.com/article/annehelenpetersen/millennials-burnout-generation-debt-work"&gt;How Millennials Became the Burnout Generation&lt;/a&gt;.” Burnout is “not a temporary affliction,” it read. “It’s the millennial condition.” The Gen Z one, too, Comer added. “It’s like we just churn out tired, exhausted souls like a widget factory,” he said. “I don’t know if you feel this at all yet in your body or in your bones. If you don’t, it’s because you’re still young and you haven’t been in the city very long. But you will. Trust me, you will.”&lt;/p&gt;&lt;p&gt;Then he clicked over to a passage from the Gospel of Matthew:&lt;/p&gt;&lt;blockquote&gt;Come to me, all you who are weary and burdened, and I will give you rest. Take my yoke upon you and learn from me, for I am gentle and humble in heart, and you will find rest for your souls. 
For my yoke is easy and my burden is light.&lt;/blockquote&gt;&lt;p&gt;“Most of us, as modern Americans,” Comer said, with a hand over his heart, “we read that line and there’s just this, like, deep, soul-level, &lt;em&gt;Yes, I ache for that&lt;/em&gt;.” The guy in front of me took a picture of the slide with his phone. I noticed that his screen was set to gray scale. So was the screen of the person sitting next to me.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/family/2026/03/smartphones-ambivalence-tension/686563/?utm_source=feed"&gt;Read: The tension that defines modern life&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Signs of Comer’s influence had been popping up in my life all year. One friend had started observing a 24-hour, phone-free Sabbath. My roommates began fasting several times a month. Then, in quick succession, three different people recommended that I read &lt;a href="https://bookshop.org/a/12476/9780525653097"&gt;&lt;em&gt;The Ruthless Elimination of Hurry&lt;/em&gt;&lt;/a&gt;, Comer’s 2019 best seller.&lt;/p&gt;&lt;p&gt;In that book, Comer advances the theory that the great enemy of spiritual life is hurry. By this he means not simply busyness: Hurry is a gnawing sense that there is always more to do; a life spent hurtling oneself through each day; a schedule that makes little room for God. Technology has only exacerbated the problem. Comer calls the modern world “a virtual conspiracy against the interior life,” and urges readers to reclaim their focus from the algorithm and shift it toward God.&lt;/p&gt;&lt;p&gt;&lt;em&gt;The Ruthless Elimination of Hurry&lt;/em&gt;, he told me, is “a book about discipleship to Jesus masquerading as a self-help book.” Many of its suggestions are similar to what you might find in articles about digital detoxes. 
To break a cellphone addiction, he offers detailed advice on how to “turn your smartphone into a dumbphone”: delete social media and web browsers, turn off notifications, and set your screen to gray scale, to curb the appeal of the remaining candy-colored apps. His prose, too, is rendered in a pithy, how-to style that one of his critics has dubbed “&lt;a href="https://www.digitalliturgies.net/p/the-ruthless-elimination-of-paragraphs"&gt;The Ruthless Elimination of Paragraphs&lt;/a&gt;.”&lt;/p&gt;&lt;p&gt;Because of this approach, Comer can seem more like a wellness personality, such as &lt;a href="https://www.nytimes.com/2023/08/02/opinion/huberman-husband.html"&gt;Andrew Huberman&lt;/a&gt;, than a pastor. Like Huberman, Comer offers a concrete regimen that’s attractive to people who feel unmoored in contemporary society. Comer’s skeptics, when remarking on his rapid ascent, point to these similarities and wonder if what he’s offering is simply baptized wellness, a pop spirituality tailored to the tastes and frustrations of affluent young people. But sitting among his followers, I wondered: &lt;em&gt;Could Comer’s practices actually bring them closer to God?&lt;/em&gt;&lt;span&gt; &lt;/span&gt;&lt;/p&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;I met Comer &lt;/span&gt;the next day at a coffee shop in the East Village. Our cashier, who looked about 24, recognized Comer and was visibly starstruck. “Your books are so amazing,” he said. “I pass them around to all my friends.” Our lattes, he insisted, were on the house. Comer told me that the same thing had happened yesterday in SoHo, then he shrugged. “Coffee shops are like bars for Christians.”&lt;/p&gt;&lt;p&gt;Comer is Protestant, nondenominational, and roughly in the evangelical sphere, but his work is mostly about how technology—what he calls “the machine”—is spiritually deforming people. 
“Any version of discipleship to Jesus that doesn’t seriously take into account that,” he said, pointing at my phone, “is going to be wildly deficient.” Christian spirituality has always adapted to its time, Comer said. In trying to adapt the faith for the 21st century, he looks to the life of Jesus, who took a Sabbath, fasted, and spent regular time in silence and solitude. To Comer, these weren’t the rhythms of Jesus’s life just because he happened to live in Galilee in 30 C.E. They are spiritual practices that Christians in any era ought to emulate.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/family/archive/2021/10/digital-addiction-smartphone/620318/?utm_source=feed"&gt;Read: How to break a phone addiction&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Comer’s most recent book, 2024’s &lt;a href="https://bookshop.org/a/12476/9780593193822"&gt;&lt;em&gt;Practicing the Way&lt;/em&gt;&lt;/a&gt;, is a sort of how-to guide for Christlike living. Inspired in part by the monastic Order of Saint Benedict, Comer encourages readers to incorporate nine of Jesus’s habits into their lives: scripture reading, service, keeping the Sabbath, solitude, prayer, fasting, community, witness, and generosity. He calls his work “spiritual archaeology”—reintroducing modern believers to ancient Christian practices. “Everything we need, for the most part, is there in church history,” he said. “We’ve just lost a lot of it.”&lt;/p&gt;&lt;p&gt;Comer is hardly the first such archaeologist. Each generation of evangelical Christianity has three main celebrities, Russell Moore, the editor at large of &lt;em&gt;Christianity Today&lt;/em&gt;, told me: the politics guy, the church-growth guy, and the personal-spirituality guy. In the 1980s, these roles were played, respectively, by Pat Robertson, Rick Warren, and Dallas Willard. Right now, Comer is the personal-spirituality guy (yes, it’s always a guy). 
Willard encouraged evangelicals to adopt virtually the same practices, such as fasting and taking a Sabbath, in 1988’s &lt;em&gt;&lt;a href="https://bookshop.org/a/12476/9780060694425"&gt;The Spirit of the Disciplines&lt;/a&gt;&lt;/em&gt;, and a subset of evangelicals has practiced them ever since. But Comer is making his case at a very different moment.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/ideas/archive/2023/07/christian-evangelical-church-division-politics/674810/?utm_source=feed"&gt;Russell Moore: The American evangelical Church is in crisis. There’s only one way out.&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;“A lot of American evangelical leadership right now is algorithmic,” Moore said, meaning that many pastors ratchet up their sermon rhetoric to find an audience on social media—usually by decrying homosexuality and abortion. Comer has written that God’s vision of marriage is between a man and a woman, and he’s argued against the idea of abortion as “reproductive justice.” But he doesn’t really preach about those issues, so the traditional Christian political camps aren’t sure what to make of him. He’s too conservative for the progressive Christians, and the conservative ones assume that he’s a tote-bag-carrying NPR liberal.&lt;/p&gt;&lt;p&gt;Comer doesn’t avoid the algorithm entirely. He has more than a quarter million followers on Instagram, where he mostly posts clips about the nine practices and shares quotes from Christian writers in minimalist fonts on earth-toned slides. He likens such social-media outreach to a street preacher at an Old West saloon: You say your piece about Jesus, hope you change some minds, and get out as quickly as you can.&lt;/p&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;In December, &lt;/span&gt;I went to Comer’s house for tea. 
About two and a half years ago, his family moved from Portland, Oregon, to Topanga Canyon, a mountain community outside Los Angeles known as a hub of West Coast hippiedom—think Deadheads, crystals, and astral-projection workshops. The road to Comer’s home is shaded by scrub oak and barely wide enough to accommodate a single car. We sat in the living room beside the Christmas tree, where presents lay wrapped in butcher paper. Comer was on cooking duty that night, and his wife unloaded the groceries. Their teenage son and daughter milled around the living room as Comer and I spoke. He apologized for the commotion.&lt;/p&gt;&lt;p&gt;Comer grew up in the ’80s in Silicon Valley; his parents were “first-generation Christians,” as he put it. His father, Phil, was a rock musician in the ’60s and ’70s who encountered God for the first time during one of Billy Graham’s crusades, eventually becoming the worship pastor at Los Gatos Christian Church, one of the Bay Area’s earliest evangelical megachurches. Comer took after his dad, joining the ministry and then co-founding a church in the suburbs of Portland with his parents in 2003, when he was 23 years old. Comer was the cool preacher, a West Coast urbanite just like his congregants; he understood why people might be cynical about religion. (When we met, I apologized for saying “damn” in front of a pastor. He reminded me that I was with a pastor from California.)&lt;/p&gt;&lt;p&gt;His church added about 1,000 congregants a year for seven years straight and soon outgrew its original building, coming to command multiple locations around the city. 
Comer became the head of what was essentially a ministry franchise, he reflected later—“the Starbucks model of ‘local’ church”—where he was trying to give thousands of people the same experience, whether they were in downtown Portland or the suburbs.&lt;/p&gt;&lt;p&gt;By about 2014, Comer was preaching six services on Sundays and heading home at 10 p.m., long after his kids were asleep. He didn’t have time for himself or his family. The Bible calls Christians to be patient, to love. But Comer was becoming more hurried and less loving. He realized, as he would later write, that “you can be a success as a pastor and a failure as an apprentice of Jesus.” In Millennial terms, he was suffering from burnout, badly.&lt;/p&gt;&lt;p&gt;Comer took a break from preaching and started reorganizing his life. He tried to emulate Christ’s daily actions, gradually incorporating them into his lifestyle both then and after he returned to pastoring, now at just one of the church’s locations, known as Bridgetown Church, in downtown Portland. He began fasting, eventually working up to two days a week, and observing the Sabbath by turning off his devices on Saturdays and spending his time resting and worshipping. He still needed to use email and social media for work, but he took these apps off his phone and checked them on his computer only once a week. And because Jesus lived simply, Comer pared down his closet to three outfits for the Oregon winter and two for the summer.&lt;/p&gt;&lt;p&gt;He worked less, spent more time with his wife, built more &lt;i&gt;Star Wars&lt;/i&gt; Lego sets with his kids. “Even better,” he’d later write about that period, he could “feel God again.” Comer was convinced that his entire church would benefit from these practices. 
So, over the next five years, Bridgetown adopted the disciplines as a congregation, creating the blueprint for the nine practices that Comer later would lay out in &lt;em&gt;Practicing the Way&lt;/em&gt;.&lt;/p&gt;&lt;p&gt;Running a huge church was hard on him; for years, he had wanted to write and to work one-on-one with people instead of preaching. Comer stepped down from Bridgetown in 2021 and now leads a nonprofit, also called Practicing the Way, which offers a free course that more than 21,000 church groups have adopted. He’s on the teaching staff at a church in Los Angeles, but mostly, Comer serves as the pastor of his own small church, which follows the &lt;i&gt;Practicing the Way &lt;/i&gt;disciplines: The 30-person congregation fasts together, takes the Sabbath together, and, on Sundays, meets for a service in his living room. He has “built a quiet life,” his friend and successor at Bridgetown, Pastor Tyler Staton, told me. “Some might accuse him of being a touch boring.”&lt;/p&gt;&lt;p class="dropcap"&gt;&lt;span class="smallcaps"&gt;Comer told me &lt;/span&gt;that his average reader is 27, with at least some college education, living in a city. I’m 27, with a college degree, living in New York. I wondered whether I could adhere to his disciplines, and if so, how they might affect my faith. So, for the past six months, I’ve tried to structure my life around &lt;i&gt;Practicing the Way&lt;/i&gt;’s nine core habits.&lt;/p&gt;&lt;p&gt;I’d wake up early to spend an hour alone at the window next to my fire escape, reading scripture and praying; this was a major upgrade from checking my phone first thing in the morning. Once a week, I’d observe the Sabbath—put away my screens, do some form of worship, revel in the fact that I could do nothing for a day and God would keep the universe going. 
As part of the service practice, I volunteered at a soup kitchen once a month and started carrying food with me when I walked around the city, in case I passed people who looked hungry.&lt;/p&gt;&lt;p&gt;I did chafe against some of the disciplines. Navigating modern life with no phone for a day was a mess: Without Google Maps, I’d get lost; without texting, every meetup with friends felt like the high-stakes rendezvous at the end of &lt;em&gt;An Affair to Remember&lt;/em&gt;. And although sometimes I’d have a moment or two of transcendence on my weekly fasting day, for the most part, I was just hungry.&lt;/p&gt;&lt;p&gt;I am surprised, though, by how much these practices have become central to my life—not because I think I will be smote if I don’t do them, but because it turns out I like them. (Except for fasting. That one is still a bummer.) The new constraints on my time and attention forced me to truly consider what was important or not, and to prioritize those things. I spent less time on the parts of my day that brought me little joy (my phone) and more time with friends. My life is less hurried. I’m happier.&lt;/p&gt;&lt;p&gt;But my happiness is not the point, according to Comer. The purpose of a spiritual discipline is “not personal fulfillment. It’s not personal expression. It’s not emotional wellness. It’s not to de-stress,” he said. The point is to have your character transformed by your attunement to God. Then it will be easier to follow Jesus’s two greatest commandments: love God and love others. Fasting and discipline, you can get from Andrew Huberman; self-care, from Goop. But, Comer told me, “wellness culture is not talking about the Sermon on the Mount.”&lt;/p&gt;&lt;p&gt;That sermon—in which Jesus says people must love their enemies, must turn the other cheek, and cannot serve God and money—asks a lot from believers. 
Dallas Willard, Comer’s forebear, argued that a person who expects to live up to Jesus’s commands on the spur of the moment, without structuring their life at least somewhat around Jesus’s, is like “a baseball player who expects to excel in the game without adequate exercise of his body.” The theory is that, to become more Christlike, you have to find more ways to literally live like Christ.&lt;/p&gt;&lt;p&gt;Comer’s critics worry that by focusing so much on Jesus’s daily regimen, he risks recasting the son of God as the original lifestyle guru. “The (real) point of the Gospels—identifying who Jesus is, putting faith in him, and worshiping him—is put in the background, while living like Jesus is put in the foreground,” Kevin DeYoung, a theologian and Presbyterian pastor, &lt;a href="https://clearlyreformed.org/is-this-the-waya-review-of-practicing-the-way-by-john-mark-comer/"&gt;wrote in a review of &lt;em&gt;Practicing the Way&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;According to DeYoung, this isn’t just a small matter of emphasis. “How effective can an approach to spiritual formation be when it almost completely misses the point of Jesus’s life and ministry?” he wrote. DeYoung told me that when the apostle Paul writes to the early Ephesian church about how to combat evil in their lives, “he doesn’t tell them, ‘Here are a set of rhythms and come up with 10 rules for your life.’ ” He tells them about the power of God.&lt;/p&gt;&lt;p&gt;DeYoung and others also criticize Comer for conforming his ministry too much to the lives of young, well-to-do urbanites—repackaging Christian monasticism for the TikTok generation. Given how inconvenient Comer’s disciplines can be, his skeptics think they’re achievable for yuppies in ways they may not be for others who have fewer resources or more demands on their time. 
DeYoung and his wife have a big family, and although Comer’s routine may sound nice, he told me, “we’re trying to just get through our week.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2024/03/teen-childhood-smartphone-use-mental-health-effects/677722/?utm_source=feed"&gt;Jonathan Haidt: End the phone-based childhood now&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Comer counters that many churches are facing what he calls a “crisis of discipleship” because they don’t give congregants enough instruction on how to actually live as Christians. But he says that he’s not doctrinaire about the practices; he doesn’t expect everyone to do all of them, all of the time: Jesus himself rebelled against the rigidity of the Pharisees by healing people and harvesting grain on the Sabbath. The night I saw Comer preach in New York City, he stressed that the question shouldn’t be &lt;em&gt;Did I fast this week?&lt;/em&gt; or &lt;em&gt;Did I observe the Sabbath?&lt;/em&gt; Comer wants his followers to ask themselves instead, &lt;em&gt;Am I becoming more gentle?&lt;/em&gt; and &lt;em&gt;Am I becoming more humble?&lt;/em&gt;&lt;/p&gt;&lt;p&gt;I Googled myself yesterday, so I still have a ways to go. But I had never asked myself those sorts of questions before. As a Christian moving in mostly secular circles, I’d felt that simply believing in God was a big enough feat. My faith had never shaped the way I lived each day. I am proof that you can say you love God and offer very little of your life to him. 
The practices became a way to call my own bluff.&lt;/p&gt;&lt;p&gt;I’m a member of the precise audience Comer is writing for—those who believe in the Gospels but haven’t made much time for a spiritual life; those who no longer feel at home in an evangelical community that has itself been warped by the imperatives of social media; those who (if we’re honest) can sometimes feel embarrassed to be seen as religious in a secular world. He told me that he is speaking to people who “want to figure out how to stay true to the Christian story in a very hostile cultural environment” but feel they need a road map. Even if the temptations of contemporary America look nothing like the ones the early Christian ascetics lived in the desert to avoid, that doesn’t necessarily mean the road map itself is out-of-date. And if, in promoting that road map, Comer can sometimes seem like many secular wellness influencers, maybe it’s a sign that they, too, are responding to a collective crisis of faith, and don’t yet know it.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;small&gt;&lt;i&gt;This article appears in the &lt;/i&gt;&lt;a href="https://www.theatlantic.com/magazine/toc/2026/05/?utm_source=feed"&gt;&lt;i&gt;May 2026&lt;/i&gt;&lt;/a&gt;&lt;i&gt; print edition with the headline “Can Turning Off Your Phone Bring You Closer to God?”&lt;/i&gt;&lt;/small&gt;&lt;/p&gt;</content><author><name>Nancy Walecki</name><uri>http://www.theatlantic.com/author/nancy-walecki/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/DqqACq8T1NbUcJG-yFpUChm6g-g=/media/img/2026/04/000409410010_16x9/original.jpg"><media:credit>Thalía Gochez for The Atlantic</media:credit><media:description>John Mark Comer says his practices should be judged not by how happy his followers are, but by how close they are to God.</media:description></media:content><title type="html">Is Hurry the Great Enemy of Spiritual 
Life?</title><published>2026-04-18T08:00:00-04:00</published><updated>2026-04-20T15:56:45-04:00</updated><summary type="html">Pastor John Mark Comer has won a massive audience by encouraging his followers to free themselves from the gnawing sense that there is always more to do.</summary><link href="https://www.theatlantic.com/magazine/2026/05/john-mark-comer-spiritual-practices/686586/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686835</id><content type="html">&lt;p&gt;&lt;small&gt;&lt;i&gt;This is an edition of The&lt;/i&gt; Atlantic&lt;i&gt; Daily, a newsletter that guides you through the biggest stories of the day, helps you discover new ideas, and recommends the best in culture. &lt;/i&gt;&lt;a href="https://www.theatlantic.com/newsletters/sign-up/atlantic-daily/?utm_source=feed"&gt;&lt;i&gt;Sign up for it here.&lt;/i&gt;&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;Walk into any Silicon Valley office in the late 2010s, and you’d probably see at least one pair of Allbirds. Woolly and eco-friendly, the sneakers once epitomized a certain kind of corporate culture (even Barack Obama was a fan), and the company behind them was valued at roughly $4 billion at its peak, in 2021. But for several years, sales have flagged. Attempts to replicate the success of its signature product—see: wool leggings and wool underwear—didn’t do much to keep the business afloat. Earlier this year, Allbirds sold most of its holdings for pennies and closed its remaining retail stores. Now it has a last-ditch idea: a hard pivot to AI.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The plan, announced yesterday, is to change its name to NewBird AI and spend $50 million from an unnamed investor on specialized chips called GPUs, which it will then lease to other companies. The move is a high-risk bid to save the company’s stock, and it has already kind of worked: Allbirds’ value increased by more than 600 percent yesterday. 
Although businesses reorient themselves around AI all the time, Allbirds is trying a far more extreme version of the strategy. At first glance, it might look like a cynical (and very possibly doomed) cash grab. But for a flailing shoe company, an AI rebrand might also be an escape hatch.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Last month, Allbirds was sold for less than 1 percent of what it was worth in 2021. Because almost nothing has been spared in the fire sale, it is now essentially a shell corporation. &lt;i&gt;Bloomberg&lt;/i&gt;’s Matt Levine &lt;a href="https://www.bloomberg.com/opinion/newsletters/2026-04-15/aibirds"&gt;argued&lt;/a&gt; yesterday that the company might be banking on tech executives’ “nostalgic fondness for their brand” to make this pivot work. But Allbirds CEO Joe Vernachio is a veteran of the outdoor-apparel industry and has no apparent AI experience; the company did not respond to questions about the future of its executive team or the future of other people who work there.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;There’s an obvious reason for companies to jump on the AI train—the technology is creating enormous wealth. The S&amp;amp;P 500 hit a record high yesterday, thanks in part to the strength of the American tech sector. And that doesn’t even account for the two leading AI companies, both of which are private. OpenAI and Anthropic are valued at about $1.2 trillion combined—more than the GDP of Poland. When those companies go public, as they’re expected to in the not-too-distant future, they will generate astounding wealth for their executives and investors.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The idea that a shoe company can use an AI rebrand to quickly juice its stock price will likely strengthen naysayers’ suspicions that &lt;a href="https://www.theatlantic.com/technology/2026/03/ai-bubble-defenders-silicon-valley/686340/?utm_source=feed"&gt;we’re in a bubble&lt;/a&gt;. 
It echoes a cautionary tale of the crypto craze: In 2017, shares of Long Island Iced Tea Corp. jumped as much as &lt;a href="https://www.ft.com/content/3fa91346-e670-11e7-8b99-0191e45377ec?syn-25a6b1a6=1"&gt;500 percent&lt;/a&gt; after the company announced a pivot to blockchain technology. The highs were short-lived. A year later, Long Blockchain Corp. (it got a new name too) was delisted from the NASDAQ. When the struggling video-game retailer GameStop tried a &lt;a href="https://www.ign.com/articles/gamestops-nft-marketplace-closes-next-month"&gt;similar crypto pivot&lt;/a&gt; in 2022, its stock climbed 30 percent in a day. But that ultimately didn’t prevent the company’s gradual descent from the meme-stock highs it had seen in 2021. The maneuver failed in the long run in part because it muddied the idea of what GameStop even was: Why was the brick-and-mortar store where I once bought &lt;i&gt;Assassin’s Creed III&lt;/i&gt; suddenly selling NFTs?&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But in this unprecedented market, where &lt;a href="https://www.theatlantic.com/technology/2026/03/ai-boom-polycrisis/686559/?utm_source=feed"&gt;private lenders&lt;/a&gt; abound and VCs are doubling down on AI, flexibility can be a good thing. Plenty of companies have incorporated AI into their existing products over the past few years, albeit with varying levels of success. Mattel’s toys will soon have AI components, PepsiCo wants to rely on AI agents to transform its sales and operations, and Bath &amp;amp; Body Works has used AI to develop a “fragrance finder” called Gingham Genius. Few businesses are immune to the lure of this tech, and to the potential for investment that tends to come with it.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;NewBird AI’s lack of experience in the sector will make it difficult to turn a short-term stock bump into long-term success. 
Questions remain about who’s investing in the business, and how effectively its leaders might continue raising money in the future. The $50 million that Allbirds has secured, with just $5 million up front, is dwarfed by what the biggest AI companies are regularly bringing in. OpenAI announced $122 &lt;i&gt;billion&lt;/i&gt; in new funding late last month. And it’s unclear whether Allbirds will command the kind of access to &lt;a href="https://investor.atmeta.com/investor-news/press-release-details/2025/Meta-Announces-Joint-Venture-with-Funds-Managed-by-Blue-Owl-Capital-to-Develop-Hyperion-Data-Center/default.aspx"&gt;private credit lines&lt;/a&gt; that other public companies have relied on for their AI ambitions. Despite the financial promise of its new business model, Allbirds is really just a tiny, inexperienced player in an already crowded market. Perhaps reflecting traders’ tempered expectations, the stock has fallen by about 25 percent today.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Allbirds is now shedding much of what made it distinct during its boom years and adapting to a business climate in which raw computing power is king. Despite a founding mission to make sustainable footwear, the company is turning to a notoriously &lt;a href="https://www.theatlantic.com/magazine/2026/04/ai-data-centers-energy-demands/686064/?utm_source=feed"&gt;energy-intensive&lt;/a&gt; corner of the tech industry and likely slashing &lt;a href="https://www.sec.gov/Archives/edgar/data/1653909/000119312526155866/d39753dprem14a.htm"&gt;language&lt;/a&gt; about environmental conservation from its charter. Whether or not this rebrand succeeds, it has already underscored the absurd pull of AI—and just how much of our economy is being drawn into its orbit.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Related:&lt;/strong&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/newsletters/2026/04/ai-everywhere-allbirds-sneakers/686833/?utm_source=feed"&gt;Alexandra Petri: The tyranny of AI everywhere&lt;/a&gt;&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/technology/2026/03/ai-bubble-defenders-silicon-valley/686340/?utm_source=feed"&gt;Even Silicon Valley says that AI is a bubble.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Here are three new stories from &lt;i&gt;The Atlantic&lt;/i&gt;:&lt;/strong&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/politics/2026/04/trump-pope-leo-iran-gas-prices/686819/?utm_source=feed"&gt;Trump voters have had enough.&lt;/a&gt;&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/culture/2026/04/inside-kennedy-center-shutdown-drama/686801/?utm_source=feed"&gt;Josef Palermo: What I saw inside the Kennedy Center&lt;/a&gt;&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/magazine/2026/05/reactionary-traditionalism-worldview/686597/?utm_source=feed"&gt;David Brooks: History is running backwards.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Today’s News&lt;/strong&gt;&lt;/p&gt;&lt;ol&gt;
	&lt;li&gt;President Trump said that the United States could &lt;a href="https://www.wsj.com/livecoverage/iran-us-strait-of-hormuz-blockade-updates?mod=hp_lead_pos7&amp;amp;mod=hp_lead_pos1"&gt;hold talks with Iran this weekend and that the two countries are “very close” to a deal&lt;/a&gt;, even as the U.S. military expands a blockade of Iran-linked ships. He also announced a 10-day cease-fire between Israel and Lebanon starting today and invited both countries’ leaders to Washington, D.C., for peace talks.&lt;/li&gt;
	&lt;li&gt;A federal judge ordered Trump to &lt;a href="https://www.nytimes.com/live/2026/04/16/us/trump-news#trump-ballroom-judge-halt"&gt;halt aboveground construction of the planned White House ballroom&lt;/a&gt; despite the administration’s claims that it’s needed for national security, ruling that the project can’t proceed without congressional approval.&lt;/li&gt;
	&lt;li&gt;Trump &lt;a href="https://www.nytimes.com/live/2026/04/16/us/trump-news#section-756596038"&gt;nominated Erica Schwartz&lt;/a&gt;, a vaccine supporter who served as deputy surgeon general during his first term, to lead the CDC. If confirmed, she would be the agency’s fourth leader in about a year.&lt;/li&gt;
&lt;/ol&gt;&lt;hr&gt;&lt;p&gt;&lt;strong&gt;Evening Read&lt;/strong&gt;&lt;/p&gt;&lt;figure&gt;&lt;img alt="A collage of two photos, an older man on the left and young people holding up a Hungarian flag on the right." height="1620" src="https://cdn.theatlantic.com/media/img/mt/2026/04/2026_04_14_The_Islands_of_Civil_Society_That_Helped_Defeat_Orban/original.jpg" width="2880"&gt;
&lt;figcaption class="caption"&gt;Illustration by The Atlantic. Sources: Attila Kisbenedek / AFP / Getty; Neil Milton / SOPA / LightRocket / Getty.&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;The Quiet Way Authoritarianism Begins to Crumble&lt;/p&gt;&lt;p&gt;&lt;i&gt;By Gal Beckerman&lt;/i&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p&gt;In the days after Donald Trump won his second term, I called a handful of Hungarian political analysts to ask what the American future might look like. My impulse was not an original one; the analysts had been fielding many calls of this sort. Hungary seemed like a bellwether for the illiberal direction in which &lt;a href="https://www.theatlantic.com/ideas/archive/2023/12/trump-says-hell-be-a-dictator-on-day-one/676247/?utm_source=feed"&gt;Trump said he was going&lt;/a&gt; to lead the United States. Over his decade-and-a-half reign, Prime Minister &lt;a href="https://www.theatlantic.com/ideas/2026/03/hungary-first-post-reality-political-campaign/686565/?utm_source=feed"&gt;Viktor Orbán had rigged&lt;/a&gt; the electoral and legislative systems for his party’s benefit, come to &lt;a href="https://euobserver.com/203675/how-orban-systematically-suffocated-the-hungarian-media-over-the-past-15-years/#:~:text=Fidesz%2C%20the%20ruling%20party%2C%20directly,independent%20media%20from%202010%20onwards."&gt;control&lt;/a&gt; (directly or indirectly) 80 percent of the country’s media, and hobbled most independent institutions. But when I asked these Hungarians to give it to me straight, they started to tell me another story, about what was happening on “the islands.”&lt;/p&gt;
&lt;/blockquote&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/culture/2026/04/viktor-orban-defeat-tisza-islands-hungary/686827/?utm_source=feed"&gt;Read the full article.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;More From &lt;em&gt;The Atlantic&lt;/em&gt;&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/ideas/2026/04/pope-jd-vance-iran/686826/?utm_source=feed"&gt;Pope James David Vance the First&lt;/a&gt;&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/health/2026/04/beyond-inheritance-excerpt-roxanne-khamsi/686831/?utm_source=feed"&gt;The DNA fix for aging&lt;/a&gt;&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/books/2026/04/henry-david-thoreau-great-american-dissident/686823/?utm_source=feed"&gt;If you want a better world, act like you live in it.&lt;/a&gt;&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/ideas/2026/04/world-bank-industrial-policy/686820/?utm_source=feed"&gt;A pillar of the economics establishment admits that it was wrong.&lt;/a&gt;&lt;/li&gt;
	&lt;li&gt;&lt;a href="https://www.theatlantic.com/podcasts/2026/04/hungary-orban-magyar-election/686821/?utm_source=feed"&gt;&lt;i&gt;Radio Atlantic&lt;/i&gt;: If Hungary can do it&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;hr&gt;&lt;p&gt;&lt;b&gt;Culture Break&lt;/b&gt;&lt;/p&gt;&lt;figure&gt;&lt;img alt="Two people holding books to their ears like phones" height="450" src="https://cdn.theatlantic.com/media/newsletters/2026/04/_preview_45/original.jpg" width="800"&gt;
&lt;figcaption class="caption"&gt;David Avazzadeh / Connected Archives&lt;/figcaption&gt;
&lt;/figure&gt;&lt;p&gt;&lt;b&gt;Read. &lt;/b&gt;Last month, Rhian Sasseen recommended six books that &lt;a href="https://www.theatlantic.com/books/2026/03/books-discuss-friend-group-club-recommendations/686295/?utm_source=feed"&gt;simply must be talked about&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Explore. &lt;/b&gt;Imagine a chatbot that &lt;a href="https://www.theatlantic.com/technology/2026/04/chatbot-ai-race-emotional-intelligence/686830/?utm_source=feed"&gt;actually knows how to talk to you&lt;/a&gt;, Matteo Wong writes.&lt;/p&gt;&lt;p&gt;&lt;a href="https://www.theatlantic.com/free-daily-crossword-puzzle/?utm_source=feed"&gt;Play our daily crossword.&lt;/a&gt;&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;small&gt;&lt;i&gt;Rafaela Jinich contributed to this newsletter.&lt;/i&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;&lt;small&gt;&lt;em&gt;When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting &lt;/em&gt;The Atlantic&lt;em&gt;.&lt;/em&gt;&lt;/small&gt;&lt;/p&gt;</content><author><name>Will Gottsegen</name><uri>http://www.theatlantic.com/author/will-gottsegen/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/_kqbk4dW7VD8z7FF7lNefHO-Qgs=/media/newsletters/2026/04/2026_04_16_Allbirds/original.jpg"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">The Allbirds Pivot Is a Terrible Idea … Right?</title><published>2026-04-16T18:52:00-04:00</published><updated>2026-04-17T16:26:27-04:00</updated><summary type="html">Its turn to AI could be an escape hatch for a company with nothing to lose.</summary><link href="https://www.theatlantic.com/newsletters/2026/04/allbirds-ai-stocks-sneakers/686835/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686830</id><content type="html">&lt;p&gt;Earlier this year, when I walked into a renovated 
loft in downtown San Francisco, the couches and tables were littered with flyers advertising an “emotionally intelligent real-time AI coach.” They were for Amotions AI—one of several start-ups that had gathered that day to pitch investors, entrepreneurs, and tech workers. Pianpian Xu Guthrie, Amotions AI’s founder, was eager to tell me more. The AI model observes video calls on your computer, she said, and gives you real-time tips based on the other person’s tone and facial expression. Maybe you’re a salesperson, and the bot flags that your potential customer is “confused” and suggests what to say.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Emotions are the AI industry’s new fixation. Not only are growing numbers of start-ups such as Amotions AI promising tools that interpret feelings; the major AI companies are developing chatbots that apparently aren’t just smarter—they &lt;em&gt;get you&lt;/em&gt;. 
When OpenAI launched a new version of ChatGPT late last year, it &lt;a href="https://openai.com/index/gpt-5-1/"&gt;described&lt;/a&gt; the bot as “warmer by default and more conversational.” Anthropic has &lt;a href="https://www.anthropic.com/constitution"&gt;stated&lt;/a&gt; that its model, Claude, “may have some functional version of emotions or feelings,” and Google has &lt;a href="https://blog.google/products-and-platforms/products/gemini/gemini-3/#responsible-development"&gt;claimed&lt;/a&gt; that its AI models are now capable of “reading the room.” Elon Musk’s lab, xAI, has &lt;a href="https://x.ai/news/grok-4-1"&gt;boasted&lt;/a&gt; that a recent version of Grok did well on a test of emotional intelligence, or EQ, that posed scenarios such as this: “You think you might have been scapegoated by a fellow employee for the lunchroom thefts that have been happening.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Silicon Valley has good reason to push EQ. For AI products to work as advertised—to genuinely substitute for personal assistants or co-workers—they have to be not just competent but caring; not just effective but empathetic. 
And so the AI industry seems to believe that the next step in developing smart and useful bots requires instilling them with people skills.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2025/12/people-outsourcing-their-thinking-ai/685093/?utm_source=feed"&gt;Read: The people outsourcing their thinking to AI&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The search for an emotionally intelligent machine has long been part of AI research. In the 1960s, the computer scientist Joseph Weizenbaum &lt;a href="https://dl.acm.org/doi/10.1145/365153.365168"&gt;developed&lt;/a&gt; a primitive chatbot, called ELIZA, that could simulate a psychotherapist by repeating back what a person said in question form. One day, as Weizenbaum recalled, he found his secretary chatting with ELIZA; she asked him to leave the room to give them some privacy. The original ChatGPT from late 2022 was not smarter or more powerful than other existing tools—the underlying model was actually several years old—but OpenAI’s main innovation was to engineer the bot to converse like a human. ChatGPT had a surface-level ability to pick up on and respond to cues for, say, anger or joy.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Even so, the AI industry has since not been all that interested in emotions. 
Silicon Valley has spent the past two years pouring resources into &lt;a href="https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906/?utm_source=feed"&gt;so-called reasoning models&lt;/a&gt; in the hopes of making them good at writing code and solving math problems. Last year, Ilya Sutskever, the former chief scientist at OpenAI, &lt;a href="https://www.dwarkesh.com/p/ilya-sutskever-2"&gt;said&lt;/a&gt; that “emotions are relatively simple” for bots to master on the path toward developing intelligence. By this logic, figuring out the nature of joy or anxiety would ostensibly be much easier than figuring out nuclear fusion. Industry-wide measures exist for all sorts of technical abilities, but until recently, companies simply did not seem to publicly evaluate anything relating to human feeling.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;That dismissive attitude is changing. “Emotional intelligence is one of the most important capabilities of current models,” Hui Shen, an AI researcher at the University of Michigan, told me. The companies continue to chase raw intelligence and problem-solving abilities—but they seem to have realized that, for most people, that’s not the most relevant product feature. Whether Grok can solve difficult math problems is probably less useful to you than the advice it can give on ways to impress your boss at work or even how it consoles you when your cat dies. (Which, according to an example in xAI’s press release about Grok’s state-of-the-art EQ, could be: “The quiet spots where they used to sleep, the random meows you still expect to hear … it just hits in waves. 
It’s okay that it hurts this much.”)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Last year, both OpenAI and Anthropic separately published research showing that roughly 2 to 3 percent of conversations with ChatGPT or Claude were explicitly emotional—seeking interpersonal advice, role-playing, and so on. These are small proportions, but with a billion or so users between these companies, the actual number of people having emotional discussions with these two bots alone could be well into the millions. And many of the more frequent uses of chatbots, such as for tutoring and writing personal communications, also involve varying degrees of interpreting and managing emotions.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;To the extent that human emotions or preferences were incorporated into the training of ChatGPT or other top models, much of that appears to have been accomplished through a process known as “&lt;a href="https://www.theatlantic.com/technology/archive/2023/07/ai-chatbot-human-evaluator-feedback/674805/?utm_source=feed"&gt;reinforcement learning with human feedback&lt;/a&gt;”: A chatbot writes multiple responses to the same prompt, and human raters decide which they prefer. 
If applied without nuance, this approach can produce AI models that uncritically agree with and reinforce anything a user says—precipitating deep emotional dependencies on AI chatbots and, in the most extreme cases, appearing to encourage &lt;a href="https://www.theatlantic.com/technology/2025/12/ai-psychosis-is-a-medical-mystery/685133/?utm_source=feed"&gt;delusional thinking&lt;/a&gt;.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="http://www.theatlantic.com/technology/2025/12/ai-psychosis-is-a-medical-mystery/685133/?utm_source=feed"&gt;Read: The chatbot-delusion crisis&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;What AI firms are after now is something that resembles genuine empathy, which involves much more than validating what users already want to hear. This sort of bot would not only comfort but push back when necessary—and, crucially, would recognize its own limits as a piece of software. 
For instance, Anthropic recently updated Claude’s constitution—a document that tells the model, in an abstract sense, how to behave—to instruct it to avoid situations in which someone exclusively “relies on Claude for emotional support.” But no AI company has really given a clear definition of how a truly emotionally intelligent bot would differ from today’s shallow miming of EQ.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;A more cynical way to interpret the industry’s frenzy over emotions is that it’s a way to make AI models more useful, yes, but also a way to retain users—akin to features such as “memory,” in which chatbots can recall details from past conversations. The miming of an interpersonal relationship gives AI models a huge advantage over other software. “People don’t have a lot of emotions associated with Google search, but with these chatbots, people are having a lot of connections,” Sahand Sabour, an AI researcher at Tsinghua University, told me. (Anthropic did not respond to a request to discuss recent research on Claude and emotions. OpenAI declined to comment but pointed me to a Substack &lt;a href="https://reservoirsamples.substack.com/p/some-thoughts-on-human-ai-relationships"&gt;essay&lt;/a&gt; in which one of its researchers wrote that AI models should be warm without giving the illusion of consciousness. xAI did not respond to a request for comment.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;No matter the motivation, instilling any sort of EQ in a computer program remains very hard. 
Social scientists have spent many decades trying to develop tests for people’s abilities to recognize, regulate, and respond to emotions in the hopes that they might correlate with happiness or workplace performance. Such EQ evaluations have been adapted for chatbots, with questions to the tune of: &lt;em&gt;Michael has been practicing a magic trick to show his friend Lily, but Lily has been attending his practices in secret. When he performs the trick, she knows exactly how it works. How does Michael feel?&lt;/em&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;As it turns out, generative-AI models do quite well on such tests—better, in some instances, than people. That shouldn’t come as a surprise, because there are mountains of similar scenarios all over the web that AI models are trained on. All of that data is probably why bots are “so good at solving these quite narrow tests that we developed for humans,” Katja Schlegel, a psychologist at the University of Bern, told me. Such encyclopedic knowledge could make these products useful in certain settings—and the process of reinforcement learning with human feedback largely involves eliciting and sharpening these abilities. But all of this is a far cry from genuinely understanding why someone feels a certain way, empathizing with them, and figuring out whether they need help and how they might be helped.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;After all, EQ tests aren’t even that useful in &lt;em&gt;people&lt;/em&gt;, let alone chatbots. 
Being able to label a scowl as “upset” in a lab is very different from interacting with a scowling child, spouse, or boss. Emotions are bound to a person, a relationship, a culture, a moment in time; they are an experience. The AI industry’s first great act of marketing was labeling its products as &lt;em&gt;intelligence&lt;/em&gt;, a term so general and poorly understood in humans that it could encompass anything. Now the same AI firms have set their sights on an attribute that is even more poorly understood than IQ. Emotions are squishy and subjective, providing leeway to convincingly market chatbots as emotionally intelligent—and pushing more people to talk with them.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/5wuNWax28f8TIPHZNBE2bbWAWjU=/media/img/mt/2026/04/2026_04_07_AI_emotional_intelligence/original.jpg"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">AI’s Next Frontier: People Skills</title><published>2026-04-16T12:31:00-04:00</published><updated>2026-04-16T16:18:55-04:00</updated><summary type="html">Imagine a chatbot that actually knows how to talk to you.</summary><link href="https://www.theatlantic.com/technology/2026/04/chatbot-ai-race-emotional-intelligence/686830/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686811</id><content type="html">&lt;p&gt;&lt;small&gt;&lt;em&gt;Updated at 4:55 p.m. ET on April 16, 2026&lt;/em&gt;&lt;/small&gt;&lt;/p&gt;&lt;p&gt;Two hours into a road trip in my Tesla, I start to get twitchy. By that point, the battery in my 2019 Model 3 has dipped to an uncomfortably low percentage. If I can’t reach the next plug, I’m in trouble. This is the kind of problem that Ram’s electric pickup truck is intended to solve. 
When the range starts to dwindle, the truck automatically fires up a hidden gas engine that refills the giant battery. The “electric” vehicle keeps on chugging down the highway, hour after hour; pit stops are once again decided by the need for bathroom breaks rather than battery range.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The Ram 1500 REV, set to debut later this year, is what’s called an “extended-range electric vehicle,” or EREV. In essence, it is an electric vehicle that burns gas. There’s nothing revolutionary about a half-gas, half-electric car, of course. Hybrids have been a mainstay in the United States since the Toyota Prius &lt;a href="https://www.reddit.com/r/prius/comments/zz7dpb/prius_sales_figures_for_the_past_several_years_in/"&gt;broke through&lt;/a&gt; two decades ago, and automakers have released more efficient plug-in hybrids—allowing drivers to charge up for about 30 miles of electric driving, just enough to accomplish daily errands without fossil fuels. An extended-range EV is a different kind of beast. The engine burns gasoline for the sole purpose of replenishing the battery—it never actually pushes the wheels.&lt;/p&gt;&lt;p&gt;The technology is not exactly new: BMW sold a more primitive extended-range EV in the U.S. during the mid-2010s. But now these souped-up hybrids are set to go mainstream. EREVs are the car industry’s new hope for quieting the doubts of American drivers who are wary of going electric. In the Ram, the battery can run for about 150 miles of electric driving, and the whole setup delivers enough range to travel nearly 700 miles between stops.  “It takes away the range anxiety,” Jeremy Michalek, the director of the Vehicle Electrification Group at Carnegie Mellon University, told me. “When you want to go on a long trip, you can still put liquid fuel in it and continue to drive for longer distances.”&lt;/p&gt;&lt;p&gt;But for all the upside, gas-burning electric cars are not quite the future that we were promised. 
&lt;a href="https://insideevs.com/news/772186/ram-1500-rev-dead/"&gt;Just last year&lt;/a&gt;, the Ram truck was slated to be fully electric, with no gas engine to be found. Ford recently killed the electric F-150 pickup truck and is now promising to bring it back as—you guessed it—an EREV.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2023/12/hybrid-car-demand-ev-production/676266/?utm_source=feed"&gt;Read: The hybrid-car dilemma&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;These new hybrids are the latest sign that the electric revolution has not exactly gone according to plan. Sales of EVs, &lt;em&gt;true &lt;/em&gt;electric vehicles, had been growing slowly in the United States, but they’ve &lt;a href="https://www.coxautoinc.com/insights/q4-2025-ev-sales-report-commentary/"&gt;slid&lt;/a&gt; in the past six months, plagued by high prices and attacks from the Trump administration. Automakers have responded by &lt;a href="https://heatmap.news/electric-vehicles/ev-contraction-fun-cars"&gt;canceling and delaying&lt;/a&gt; new EV models. Last month, for example, Honda announced that it would halt the development of three new EVs; a few days later, Volvo said it would discontinue its affordable electric SUV, citing “&lt;a href="https://www.motor1.com/news/790181/volvo-ex30-dead-us/"&gt;shifting market conditions&lt;/a&gt;.” Other car companies, having invested billions into building EVs, are trying to find new ways to persuade Americans to take a chance on big batteries and electric motors. That’s where extended-range EVs come in.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;By throwing in a backup generator, the car industry hopes that it can finally appeal to pickup drivers, who have been especially resistant to going electric. 
Of the 16 EREVs that are &lt;a href="https://topelectricsuv.com/hybrid-trucks/range-extender-models-upcoming/"&gt;set to hit the market&lt;/a&gt; within the next three years, all are trucks or SUVs. “For American brands at the moment, I think it’s an admission that maybe, especially for big trucks and SUVs, EVs can’t deliver the type of utility and the performance that their customers demand,” Joseph Yoon, a consumer-insights analyst at the car-buying site Edmunds, told me. Indeed, electrifying the full-size American pickup truck has proved to be a particularly tough problem. Because these vehicles are so big and heavy, electric versions need colossal batteries to move them. That raises the price, and drivers are still sometimes left with subpar performance: Towing a boat or trailer severely dings their battery range.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;There is good reason to believe that EREVs will assuage some of these concerns. Consider Scout Motors, a Volkswagen-owned brand that is making electric versions of the boxy trucks and SUVs from the 1960s and ’70s. Of the 150,000 reservations the company collected as of January, &lt;a href="https://insideevs.com/news/785904/scout-erev-harvester-reservations-ceo-expectations/"&gt;85 percent&lt;/a&gt; of customers have chosen the version with the backup engine over its battery-only cousin. Scout began with an all-electric focus, Ryan Decker, the vice president of strategy and brand, told me. Then the company received feedback that prospective drivers wanted more than they believed all-electric could deliver. Pivoting to an extended-range EV let Scout build on the work that went into manufacturing an electric vehicle, he said, while giving customers “confidence of packaging a gas engine on top.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;However, the curse of any hybrid is compromise. EREVs aren’t likely to solve the biggest reason Americans are not going electric: cost. 
Though Ram has yet to announce the price of its new extended-range pickup truck, &lt;em&gt;Car and Driver &lt;/em&gt;&lt;a href="https://www.caranddriver.com/ram/1500-rev"&gt;estimates&lt;/a&gt; that the vehicle will run at least $60,000. Ram’s gas-powered truck, meanwhile, starts at $42,000. The price difference is partly because an extended-range EV still has a big, expensive battery in addition to carrying around a gas engine with its thousands of chugging belts and spinning gears. That leads to other downsides. EREVs require plenty of upkeep, unlike fully electric cars that have just a few dozen moving parts. In the six and a half years that I’ve owned my Tesla, I’ve done basically nothing but replace the tires and the small backup battery.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2025/12/car-prices-too-high/685345/?utm_source=feed"&gt;Read: The backlash against car prices is here&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The problem that these buzzy new hybrids &lt;em&gt;do &lt;/em&gt;solve isn’t as relevant as you might think. For those who aren’t doing any heavy-duty driving—which includes lots of American pickup-truck owners—range anxiety is a &lt;a href="https://heatmap.news/electric-vehicles/400-miles-range-anxiety"&gt;vanishing concern&lt;/a&gt;. New electric cars can now run for 300 or even 400 miles a charge, which is more than enough to pull off a road trip without having to make lots of extra stops. High-speed charging is also getting more common and more reliable: Tesla now has more than 3,000 Supercharger stations in the United States, and competitors such as IONNA and EVgo have &lt;a href="https://insideevs.com/news/790783/ionna-dc-fast-charger-5-days-build/"&gt;accelerated&lt;/a&gt; the previously slow pace of installing new plugs. 
(The Trump administration tried to freeze billions in federal funding for EV charging, but courts have &lt;a href="https://www.utilitydive.com/news/trump-administration-must-let-ev-charger-funding-flow-court-rules/810631/"&gt;ruled against&lt;/a&gt; that move.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Two things are clear about electric vehicles: They are &lt;a href="https://electrek.co/2025/07/09/evs-are-73-percent-cleaner-than-gas-cars-even-with-battery-production/"&gt;far cleaner in the long run&lt;/a&gt;, and people who buy them typically don’t return to gas. Perhaps extended-range EVs are the training wheels that hesitant drivers need, providing the benefits of electric cars—instantaneous torque, quiet driving, fewer planet-killing carbon emissions—alongside the comfort of knowing there’s a gas station at every freeway exit. Seen another way, though, a built-in backup generator is poised to prolong the inevitable transition to true electric cars. Because designing and building new cars takes years, many EREVs won’t actually arrive in dealerships for quite some time. Ford’s extended-range F-150 is launching next year; Scout won’t launch its SUV &lt;a href="https://www.jalopnik.com/2143661/scout-terra-pickup-traveler-suv-production-delay-2030-report/"&gt;until 2028&lt;/a&gt; and its truck until even later. Considering that vehicles tend to stay on the road for a decade or more, these trucks are likely to be still burning fossil fuels deep into the 2040s. 
Any driver who buys an EREV to go &lt;em&gt;mostly&lt;/em&gt; electric is one who could have gone fully electric and never picked up a gas pump again.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;small&gt;&lt;em&gt;This article originally misidentified the Ram 1500 REV as the first extended-range electric vehicle for sale in the United States.&lt;/em&gt;&lt;/small&gt;&lt;/p&gt;</content><author><name>Andrew Moseman</name><uri>http://www.theatlantic.com/author/andrew-moseman/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/ZVsc_p9tDLcFmuOAk5mRJWm96RI=/media/img/mt/2026/04/2026_04_04_EREVs/original.jpg"><media:credit>Illustration by Alisa Gao / The Atlantic</media:credit></media:content><title type="html">A New Kind of Hybrid Car Is About to Hit America’s Streets</title><published>2026-04-15T07:30:00-04:00</published><updated>2026-04-20T17:13:34-04:00</updated><summary type="html">The car industry says it has an answer for drivers wary of going electric.</summary><link href="https://www.theatlantic.com/technology/2026/04/extended-range-electric-vehicle-pickup-trucks/686811/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686807</id><content type="html">&lt;p&gt;Every now and then, music gets a guitar hero—a player who makes the instrument sound like something other than itself. Jeff Beck transformed it into something like the human voice singing; Jimi Hendrix, a psychedelic swirl. Fans are always looking for the next player who will make the same six-string instrument sound new again. And now Mk.gee has hit the scene.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;A 29-year-old from New Jersey whose real name is Michael Gordon, Mk.gee released his debut album, &lt;em&gt;Two Star &amp;amp; the Dream Police&lt;/em&gt;, in 2024. On it, his guitar sounds at various points like an orchestra, a snarling animal, a wildfire, a person shouting, and a radio playing at the bottom of the ocean. 
Critics declared &lt;a href="https://www.nytimes.com/2024/09/04/arts/music/mkgee-two-star-the-dream-police.html"&gt;Mk.gee&lt;/a&gt; a &lt;a href="https://au.rollingstone.com/music/music-live-reviews/mkgee-melbourne-australia-tour-live-review-69899/"&gt;guitar&lt;/a&gt; &lt;a href="https://www.rollingstone.com/music/music-live-reviews/mk-gee-rolling-stone-gather-no-moss-denver-1235393458/"&gt;hero&lt;/a&gt;; he played on a Bon Iver &lt;a href="https://open.spotify.com/album/3L3UjpXtom6T0Plt1j6l1T"&gt;album&lt;/a&gt; and worked on two Justin Bieber records. This past weekend, he performed with Bieber at Coachella. Listen long enough, and you’ll realize that Mk.gee’s grungy extraterrestrial sound is everywhere.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The quest to achieve the “Mk.gee tone” spawned a series of “How Does He Make His Guitar Sound Like That?” YouTube videos; musicians compared notes on Discord servers and Reddit threads. They also did what they’ve always done—gone to concerts and looked at the stage floor to see what gear the other guy’s got—and eventually, someone posted a photo of Mk.gee’s stage setup. There on the ground, surrounded by cables, was a large black box adorned with knobs and sliders and, in a cheesy futuristic font straight out of a ’90s bowling alley, the name: VG-8.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;That Reddit post was probably the most fame the Roland VG-8 (short for &lt;em&gt;virtual guitar&lt;/em&gt;) had gotten since the ’90s. Released in 1995, the VG-8 was designed to be a toolbox filled with essentially every existing guitar sound, Chris Bristol, the former chair and CEO of Roland U.S., told me. Players could make their guitar sound like a different model, and electronically switch amplifiers, microphones, and even the acoustic environment. Push some buttons, and the guitar might sound like an Eric Clapton–style Fender Stratocaster played in a small club; push some others, and get a Jimi Hendrix–esque fuzz distortion in a stadium. 
The VG-8 also comes with dozens of synthy sounds and guitar effects—which, if Reddit and my ears are correct, are a big part of Mk.gee’s tone.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;They were for Joni Mitchell’s too. My father, Fred Walecki, owned a musical-instrument shop, Westwood Music, where Mitchell was a customer, and he procured a VG-8 for her in 1995, when she told him that she was going to quit music. Her songbook uses more than 50 tunings, and she was tired of constantly retuning dozens of guitars on tour. Dad got her a VG-8 because with it, she could keep her guitar in standard tuning and let the device produce her more unusual ones. Because of the device, she kept touring, and the sounds of the VG-8 itself brought to her music “a freshness and distinctiveness that’s almost orchestral, it’s so rich,” she &lt;a href="https://jonimitchell.com/library/view.cfm?id=1127"&gt;told&lt;/a&gt; a &lt;em&gt;Billboard &lt;/em&gt;reporter at the time. “I wanted to blow chords up in size the way Georgia O’Keeffe blew up the flowers in her paintings, and now that’s possible.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/magazine/archive/2025/09/fred-walecki-guitar-expert-westwood-music/683558/?utm_source=feed"&gt;Read: My father, guitar guru to the rock gods&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Other musicians followed: Reeves Gabrels used the VG-8 extensively in his work with David Bowie; Sting wrote most of his 1998 album, &lt;em&gt;Brand New Day&lt;/em&gt;, on it. 
He &lt;a href="https://sting.com/products/brand-new-day?srsltid=AfmBOoopp-jY5jGdjVNHFQhTA3wU7Cqx4h2oC5U200RBzoACl4Lf6Vj8"&gt;told&lt;/a&gt; &lt;em&gt;Revolver &lt;/em&gt;magazine that the device “gave me a shot in the arm about being creative on guitar.” But the VG-8 retailed for about $3,000, and “because of the price, it was a very elitist, expensive technological product,” Paul Youngblood, the former president of Roland’s U.S. BOSS division who helped develop the VG-8, told me. It also came with a 118-page document closer to a textbook than a user manual. A few influential musicians loved it for a while; then, for about 30 years, VG-8s collected dust.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Now they’re making a comeback. VG-8s were selling only occasionally, and for $200 or so, before Mk.gee released &lt;em&gt;Two Star &amp;amp; the Dream Police&lt;/em&gt;, according to data provided to me by the music-gear marketplace Reverb. In the months following his debut, demand for the VG-8 rose—and so did its prices, reaching $1,200 in early 2025. Kevin Murrell, a musician who performs under the name kevm, has seen them for $2,000 and sometimes $3,000. (Accounting for inflation, that’s still roughly half the price it was in 1995.) The competition for VG-8s is steep enough that Murrell set up alerts on his phone for new listings—“Pray for me yall,” he wrote on the VG-8 channel of a Mk.gee Discord server. A caption on a Mk.gee-fan Instagram account &lt;a href="https://www.instagram.com/p/C8V5d6lAkfq/"&gt;reads&lt;/a&gt;, “Men want one thing and it’s a vg8.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The VG-8’s appeal is as much about what it can’t do as what it can. Music technology in 1995 “wasn’t anywhere near what it is today,” Youngblood said. 
Play too hard or too loud, and the VG-8 will spit out something choppy and explosive; even though the device was advanced for the time, it still “had a lo-fi kind of sound to it.” The noise that the VG-8 makes, simply because it’s old, has become a genre in itself thanks to Mk.gee. The guitar track on Lorde’s 2025 song “Shapeshifter” sounds more like a gritty string quartet than it does a guitar—that’s Mk.gee’s touring band member Andrew Aged on the VG-8. (Mk.gee declined to comment for this article.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Mk.gee himself plays a Fender Jaguar, which had a similar resurgence in the ’90s among players in the grunge scene, because “you could find one at a pawn shop for dirt cheap,” Cyril Nigg, the senior director of analytics at Reverb, told me. Gear revivals are part of the life cycle of music: A soon-to-be-famous player comes across forgotten equipment “and picks it up because it’s cool and inexpensive, and it ends up having a huge influence on their sound and then the culture at large,” Nigg said. In one way, though, the VG-8’s current popularity is a slightly newer phenomenon. Vintage-gear crazes are usually around analog devices, as a kind of rebellion against digitization and technology, Steve Waksman, a rock musicologist at the University of Huddersfield, told me. But the VG-8’s recent rise represents “nostalgia for a time when digital was still new.” Music sounds so digitized now that even just an earlier digital device feels like it has more character.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Roland recently came out with the BOSS VG-800, a modernized version of the VG-8. Marcus Hidalgo, a guitar player in Nashville who performs under the name toast, told me he’ll take it on tour because it’s more portable. The newer model, though, is a little &lt;em&gt;too &lt;/em&gt;clean, a little &lt;em&gt;too &lt;/em&gt;digital. 
When he saw a VG-8 for sale on Facebook Marketplace in Tampa, Florida, he texted his friend in Orlando, “Dude, I will give you all the gas money, I will give you lunch, whatever you need, if you just drive to Tampa for me and pick up this random old 90s unit from this random guy.” He prefers the VG-8 and the “weird noises” it makes. “I feel like I just started to learn how to play the guitar again,” he said. Like any tool, the VG-8 is only as good as the musician using it, but it holds the promise that there are still new sounds out there to find—even if they’re in a device from 1995.&lt;/p&gt;</content><author><name>Nancy Walecki</name><uri>http://www.theatlantic.com/author/nancy-walecki/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/-NsCIXBzctE13-EBleSRVPXal2c=/media/img/mt/2026/04/2026_04_13_Walecki_Vg8_Brian_Scagnelli_final/original.jpg"><media:credit>Illustration by Brian Scagnelli</media:credit></media:content><title type="html">The Guitar Sounds New Again</title><published>2026-04-14T14:59:15-04:00</published><updated>2026-04-16T09:34:26-04:00</updated><summary type="html">The grungy, extraterrestrial “Mk.gee tone” is everywhere and depends on a decades-old device.</summary><link href="https://www.theatlantic.com/technology/2026/04/guitar-sounds-vg8/686807/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686794</id><content type="html">&lt;p dir="ltr"&gt;In July 2020, 4chan’s video-game discussion board looked much like the rest of the notorious online forum. There were elaborate, libidinal fantasies involving “whores” and “dragon cum,” and comments on how long a gamer had to wait “before my dick can get up for another beating,” as one put it.&lt;/p&gt;&lt;p dir="ltr"&gt;And yet, as the gamers discussed such things, they were also making a discovery of significance to the AI industry. 
Some of them were playing &lt;em&gt;AI Dungeon&lt;/em&gt;, a new text-based role-playing game that was essentially an AI version of &lt;em&gt;Dungeons &amp;amp; Dragons&lt;/em&gt;. In endlessly generated fantasy-world scenarios, players described actions like “pick up the sword” or “tell the troll to go away,” and the computer responded with the action that followed.&lt;/p&gt;&lt;p dir="ltr"&gt;In addition to asking the game’s characters to engage in various sex acts (naturally), the 4chan gamers also asked them to do math problems. That sounds strange, of course, but &lt;em&gt;AI Dungeon&lt;/em&gt; was powered by OpenAI’s GPT-3, and the gamers knew that they were among the first people to probe the capabilities of this new large language model. This was more than two years before the release of ChatGPT, and the model was famously bad at math. It frequently failed at simple arithmetic. But when they asked a character in the game to do a math problem and provide a step-by-step explanation, one of them wrote, the LLM was “not only solving math problems but actually solves them in a way that fits the personality of the fucking character.”&lt;/p&gt;&lt;p dir="ltr"&gt;The players had come upon a new feature—what’s known in AI today as “chain of thought.” Essentially, it means that the model explains the steps required to solve a problem, in addition to giving an answer. Asking the model for a chain of thought also seems to improve the accuracy of its answers to certain kinds of problems. The gamers on 4chan recognized the significance immediately, and &lt;a href="https://x.com/kleptid/status/1284069270603866113"&gt;posted&lt;/a&gt; &lt;a href="https://x.com/kleptid/status/1284098635689611264"&gt;examples&lt;/a&gt; on Twitter.  &lt;/p&gt;&lt;p dir="ltr"&gt;Recently, the tech industry has promoted chain of thought as a revolution in technology, and a reason to get excited about AI all over again. 
Researchers at Google &lt;a href="https://arxiv.org/abs/2201.11903v1"&gt;claimed&lt;/a&gt; in a paper to be “the first” to elicit a “chain of thought” from a general-purpose LLM, more than a year after the 4chan gamers shared their findings. (This claim was removed from subsequent versions of the paper, which still did not acknowledge the gamers, though at least one other research paper has.) And in the past couple of years, companies have begun to claim that their chatbots are not just getting math problems right; they are &lt;em&gt;actually thinking&lt;/em&gt; about them. OpenAI wrote in 2024 that its “o1” model “thinks before it answers,” and Google claimed that Gemini 2.0 Flash Thinking Experimental was “capable of showing its thoughts.” Companies started referring to their models as “&lt;a href="https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906/?utm_source=feed"&gt;reasoning models,&lt;/a&gt;” ostensibly a new kind of product from an LLM.&lt;/p&gt;&lt;p dir="ltr"&gt;Amid all this hype, the 4chan history is instructive. 4chan gamers, for all their brash language, have tended to speak in more levelheaded—and accurate—terms than the AI industry about how the models work. Last year, for example, Anthropic published a long and serious-looking &lt;a href="https://transformer-circuits.pub/2025/attribution-graphs/biology.html"&gt;article&lt;/a&gt;, “On the Biology of a Large Language Model.” Its visual presentation mimicked scientific publications, with sophisticated-looking diagrams and equations. But on every topic, the article described the operation of the LLM in terms of a human mind. 
It said the LLM “plans” its writing in advance, “generalizes” its knowledge, and can be “unfaithful” to its chain of thought (meaning, the article explains, the LLM is occasionally “bullshitting”).&lt;/p&gt;&lt;p dir="ltr"&gt;Contrast this with a &lt;a href="https://rentry.org/how2claude"&gt;guide&lt;/a&gt; written in 2024 by people on 4chan, which begins with the heading, “Your bot is an illusion,” and proceeds with a clear, detailed description of how companies use an LLM to construct a chatbot that responds to questions and has a personality. It describes an LLM’s most important technical features and shows how the model’s outputs correspond to its various inputs. The guide is a useful reminder of the most basic truth about large language models: The only thing they can do is imitate their training data.&lt;/p&gt;&lt;p dir="ltr"&gt;LLMs can output explanations of math because they were trained on explanations of math. Some of those explanations come from textbooks, but companies also train their so-called reasoning models on text that conveys the act of thinking. I dug into some open-source AI-training data sets and &lt;a href="https://huggingface.co/datasets/open-thoughts/OpenThoughts3-1.2M?conversation-viewer=1"&gt;found&lt;/a&gt; hundreds of thousands of meandering solutions to math problems that included language such as “Wait, no. The question is,” “First, I should parse the input correctly,” and “Wait, but in cases where …” As far as I’ve seen, companies acquire this text either by &lt;a href="https://www.theverge.com/cs/features/877388/white-collar-workers-training-ai-mercor"&gt;paying workers to write it&lt;/a&gt; or generating it with other AI models. (Google, OpenAI, and Anthropic did not respond to requests for comment.)&lt;/p&gt;&lt;p dir="ltr"&gt;Models trained on such utterances are not actually reasoning; they are predicting what reasoning might look like. 
There isn’t even necessarily any connection between a model’s reasoning steps and its final answer. Researchers have &lt;a href="http://arxiv.org/abs/2505.13775"&gt;shown&lt;/a&gt; that models can provide incorrect chain-of-thought text but still arrive at the correct result.&lt;/p&gt;&lt;p dir="ltr"&gt;Some people have argued that if a computer can imitate human reason well enough to fool us every time, then how can we say it isn’t doing the real thing? Researchers at Apple have &lt;a href="http://arxiv.org/abs/2410.05229"&gt;explored&lt;/a&gt; this question, and their findings are insightful. For example, they discovered that a model might answer a math word problem correctly, but then answer the same problem incorrectly after the wording was changed slightly. Specifically, they found that state-of-the-art reasoning models performed up to 65 percent worse when irrelevant information was added to a question, even when the wording of key facts was left unchanged. Apple researchers have also shown, in &lt;a href="http://arxiv.org/abs/2506.06941"&gt;a paper&lt;/a&gt; titled “The Illusion of Thinking,” that although the reasoning models do better than standard LLMs on certain problems, they are also worse at others.&lt;/p&gt;&lt;p dir="ltr"&gt;The reason the chain-of-thought trick does often work is fairly simple. The additional words in the chain of thought give the model more context, which guides its word-predicting process in a better direction, as Perplexity CEO Aravind Srinivas &lt;a href="https://inv.nadeko.net/watch?v=w9eQJdBRC5o"&gt;explained&lt;/a&gt; in a 2024 interview. This is analogous to the common advice about being specific when asking an LLM a question on any topic. The more details you give, the more you push the LLM toward the relevant words in its memory.&lt;/p&gt;&lt;p dir="ltr"&gt;Some of the 4chan gamers appeared to understand this immediately. 
As one explained back in July 2020: “It makes sense since it is based on human language that you have to talk to it like one”—that is, like a human—“to get a proper response.”&lt;/p&gt;&lt;p dir="ltr"&gt;In addition to the gamers, another AI enthusiast discovered the chain-of-thought trick at almost the exact same time. A computer-science student named Zach Robertson, who also came to GPT-3 through &lt;em&gt;AI Dungeon&lt;/em&gt;, wrote &lt;a href="https://www.lesswrong.com/posts/Mzrs4MSi58ujBLbBG/you-can-probably-amplify-gpt3-directly"&gt;a blog post&lt;/a&gt; in July 2020 about “how to amplify GPT3’s capabilities” by breaking math problems into multiple steps. That September he gave a presentation that showed how the steps could be &lt;a href="https://docs.google.com/presentation/d/1B5JdCTVL6-EGCfZnyXevL9l_ZibadQKmc_syVdi1KSY/edit?usp=sharing"&gt;“chained”&lt;/a&gt; together. Robertson, who is now a Ph.D. student in computer science at Stanford, told me on a video call that he was not aware of the 4chan gamers. In fact, he wasn’t even aware he could be considered a co-inventor of chain of thought. I’d seen his blog post cited in &lt;a href="http://arxiv.org/abs/2102.07350"&gt;a research paper&lt;/a&gt;, but when I first mentioned it in an email, he was unsure what I was talking about. He’d removed the post from the internet a couple of years ago when migrating his blog to a new site. (He restored it after we spoke.)&lt;/p&gt;&lt;p&gt;I thought Robertson might be proud to learn he was a pioneer in an area of such enthusiasm within the AI industry. But he seemed only mildly tickled. Those early experiments with &lt;em&gt;AI Dungeon&lt;/em&gt; were what got him interested in AI, he told me, but he’s since moved on to other topics. 
Chain of thought was a remarkable trick, but that’s also all it was.&lt;/p&gt;</content><author><name>Alex Reisner</name><uri>http://www.theatlantic.com/author/alex-reisner/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/rWKWHaNC-PCXcwzDPO1nQWylUgQ=/media/img/mt/2026/04/2026_04_10_thinking2_mpg/original.gif"><media:credit>Illustration by Matteo Giuseppe Pani / The Atlantic</media:credit></media:content><title type="html">The Strange Origin of AI’s ‘Reasoning’ Abilities</title><published>2026-04-14T11:38:00-04:00</published><updated>2026-04-30T16:53:42-04:00</updated><summary type="html">It involves 4chan, of all places.</summary><link href="https://www.theatlantic.com/technology/2026/04/4chan-ai-dungeon-thinking-reasoning/686794/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686753</id><content type="html">&lt;p&gt;The Great Travel Meltdown of 2026 started taking shape at the end of February. At first, the U.S. war against Iran forced the cancellation or rerouting of many flights to the Middle East; then the blockage of the Strait of Hormuz &lt;a href="https://www.theatlantic.com/newsletters/2026/03/expensive-plane-tickets-oil-iran/686604/?utm_source=feed"&gt;drove up&lt;/a&gt; the price of jet fuel and threatened to cause &lt;a href="https://www.bloomberg.com/news/articles/2026-03-31/lufthansa-prepares-crisis-plans-that-include-grounding-jets?embedded-checkout=true"&gt;crises&lt;/a&gt; for the major airlines. Though the two-week cease-fire announced last night may reopen the strait, prices are &lt;a href="https://www.nytimes.com/2026/04/08/business/energy-environment/iran-war-oil-gas-prices-energy.html"&gt;unlikely to rebound&lt;/a&gt; immediately.&lt;/p&gt;&lt;p&gt;Separately, large numbers of TSA workers started staying home after a protracted budget fight in Congress left them working without pay for weeks on end. 
Airport-security lines snaked into terminal basements or out their front doors. President Trump &lt;a href="https://www.nytimes.com/2026/03/29/us/politics/ice-tsa-airports-homan-trump-shutdown.html"&gt;deployed ICE agents&lt;/a&gt; at the nation’s major airports, and although TSA workers are now &lt;a href="https://www.nytimes.com/2026/03/30/us/politics/tsa-workers-paychecks-trump-executive-order.html"&gt;receiving back pay&lt;/a&gt;, the funding situation isn’t yet resolved.&lt;/p&gt;&lt;p&gt;Getting somewhere by plane has always been an onerous proposition. If you search the phrase &lt;em&gt;travel chaos&lt;/em&gt; on Google News, you will find that headlines about “travel chaos” reoccur in batches about every six months, going back to the beginning of time. But as a result of recent, tragic world events, the state of consumer aviation seems to be deteriorating at a rapid pace. Now Americans with travel plans would like to know exactly how worried they should be, and exactly how worried everyone else already is.&lt;/p&gt;&lt;p&gt;I’m one of the worriers. I’ve been planning to go to Barcelona for my honeymoon this summer. I’ve already read two books about the Spanish Civil War and just started a pretty dry one about the finances of the city’s famous football team. Last week I watched my fiancé spend every Capital One point in his account on our basic-economy flights, because the Google Flights trend line showed the fare for our trip going up, up, up, and headed off the chart.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/newsletters/archive/2025/07/has-air-travel-ever-been-good/683584/?utm_source=feed"&gt;Read: The golden age of flying wasn’t all that golden&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;So I’ve been in the forums—mostly on Reddit. People there are fretting about the known problems as well as interesting new ones that they came up with themselves. 
They’re &lt;a href="https://www.reddit.com/r/animeexpo/comments/1rzzx7x/if_you_havent_booked_your_airline_flight_do_so/"&gt;worried&lt;/a&gt;, for instance, that an airline might decide to charge them an additional fuel fee upon arrival at the airport, and they don’t want to listen when someone replies, in an effort to be helpful, “Sounds illegal.” They’re &lt;a href="https://www.reddit.com/r/fearofflying/comments/1s9jea3/jet_fuel_shortages/"&gt;worried&lt;/a&gt; about successfully flying to Japan but then getting stuck there by a fuel crisis that hits its peak with really, really bad timing (for them personally). In one &lt;a href="https://www.reddit.com/r/travel/comments/1s4irbc/purchasing_international_flight_tickets_during/"&gt;thread&lt;/a&gt;, a commenter stated without explanation that “there is also a slim chance that events outside of our control will make people want to avoid air travel by this summer.” Okay!&lt;/p&gt;&lt;p&gt;Forum members rarely bother to acknowledge the insensitivity of stressing out over the effects of a distant war on your own summer vacation. But once in a while, someone’s post will push things just a little too far: It’s okay to worry that you won’t get to take a trip that you really care about, but it’s &lt;a href="https://www.reddit.com/r/QantasFrequentFlyer/comments/1rt03zp/are_cancellations_looming/"&gt;not okay to worry&lt;/a&gt; that if too many flights are canceled as a result of a distant war, you may lose your hard-earned gold status on the Australian airline Qantas.&lt;/p&gt;&lt;p&gt;Ominous reports of airlines’ crisis-management efforts have been attracting incredible attention. For many, the first big moment in this story was a March 20 memo from United Airlines CEO Scott Kirby that was sent to employees and then &lt;a href="https://united.mediaroom.com/news-releases?item=125448"&gt;published on the company website&lt;/a&gt;—the type of thing an ordinary person would never read in ordinary times. 
According to the memo, jet-fuel prices had more than doubled since the start of the war. (Other &lt;a href="https://www.airlines.org/dataset/argus-us-jet-fuel-index/"&gt;sources&lt;/a&gt; have different numbers, showing that prices had not quite doubled at that time.) Kirby presented this as a major challenge for the company—United might end up spending an extra $11 billion annually on fuel—but also, somehow, as a manageable one. “Demand remains the strongest we’ve ever seen,” Kirby wrote. He added that he was typing his note while listening to his son cheer during a college-basketball game, which he found inspiring. “There’s a part of me that can’t help but feel United is playing offense right now with potentially big rewards at the end.”&lt;/p&gt;&lt;p&gt;Maybe for an airline CEO, higher prices are their own reward. The travel experts I spoke with for this story said that summer flights will be really expensive. Airlines used to hedge against spikes in jet-fuel prices with preemptive financial maneuvers, but they &lt;a href="https://www.wusf.org/2026-03-27/fuel-hedging-once-kept-airline-prices-down-now-passengers-bear-the-brunt"&gt;don’t do this so much&lt;/a&gt; anymore. Now when fuel prices go up, they just raise fares for passengers instead. Some airlines have added &lt;a href="https://thepointsguy.com/news/fuel-surcharges-higher-fares-what-to-do/"&gt;fuel surcharges&lt;/a&gt; to the cost of each ticket (though this will be assessed at booking, not when you get to the airport). United Airlines is among those carriers that have &lt;a href="https://fox8.com/news/united-airlines-increases-checked-bag-fees-heres-what-to-know/"&gt;raised the fees&lt;/a&gt; for checked bags, presumably to make up for some of its increased costs.
Alli Allen, a travel adviser, told me via email that prices seemed to be escalating “by the minute!” Recently, she looked at flights for a client, found the price to be too high, and checked back 30 minutes later in the hope that maybe it had dropped. Instead she found that it had gone up by $300.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2024/03/boeing-737-safety-air-travel/677814/?utm_source=feed"&gt;Read: Flying is weird right now&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Clint Henderson, a writer and an editor for the popular website The Points Guy, said the same. “I think it’s going to cost a lot more for most people to travel this summer,” he told me. “Whether you’re using points and miles or cash, they’re all going to be higher.” He also expected the travel experience to be stressful, especially if TSA workers end up missing any more paychecks. Although &lt;a href="https://www.nytimes.com/interactive/2026/us/tsa-wait-times-us-airports.html"&gt;news outlets&lt;/a&gt;, &lt;a href="https://news.delta.com/airport-wait-times"&gt;airlines&lt;/a&gt;, and the TSA itself (through the &lt;a href="https://www.tsa.gov/mobile"&gt;MyTSA app&lt;/a&gt;) offer tools to track security wait times, they can still be difficult to predict. Henderson said that he’d gone to check out the Atlanta airport at the height of the TSA-payment crisis and saw travelers facing an hour-and-a-half wait; then he went back the next day, and it was five minutes. “If this goes on, obviously it would be a disaster for the summer travel season.” When I asked him to rate the potential for chaos on a 10-point scale, he said he would give it a nine. (Take it from a points guy!)&lt;/p&gt;&lt;p&gt;Henderson said The Points Guy website’s official recommendation is that people book all travel for the year right now, even if it seems expensive, because conditions may only worsen over time. 
To avoid long lines, he also suggested flying out of smaller airports on Tuesday, Wednesday, or Sunday. The other travel tips that I accrued from emailing travel agents and industry bloggers will not impress you. They said to try to sign up for TSA PreCheck or apply for Global Entry, to show up at the airport early, and to bring snacks with you.&lt;/p&gt;&lt;p&gt;Travelers may be complaining, fretting, and catastrophizing, but so far, at least, they are doggedly proceeding with their plans. Airlines report that people are &lt;a href="https://www.nytimes.com/2026/03/17/business/air-travel-iran-war-fares-jet-fuel.html"&gt;paying the higher ticket prices&lt;/a&gt;, and that the industry is seeing record levels of revenue. If Americans &lt;em&gt;can&lt;/em&gt; go to Europe this summer, they &lt;em&gt;will&lt;/em&gt; go to Europe this summer. And Europe (plus people from many other places) will come here. More than 1 million international travelers are expected to attend the World Cup. Matches will be held in several of the cities that have had the longest security lines, including Houston and Atlanta, and the final will be hosted in the New York–New Jersey area, which is home to &lt;a href="https://www.theatlantic.com/culture/2026/03/worst-airport-wait-times-reason/686542/?utm_source=feed"&gt;the worst airport in America&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;A new, more aggressive and pervasive form of travel chaos may yet ensue. In the meantime, though, behaviors are unchanged. Despite the rising prices, the spectacular security lines, and all of the rumored airport inconveniences, “we’ve seen very little evidence that people are canceling or toning down their summer travel plans,” Henderson said.
“I’m constantly shocked by Americans’ insatiable demand for travel.”&lt;/p&gt;</content><author><name>Kaitlyn Tiffany</name><uri>http://www.theatlantic.com/author/kaitlyn-tiffany/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/VJWCZ99-ge7j9_4UQX61Rb2PGJA=/media/img/mt/2026/04/2026_04_7_Tiffany_Summer_Plans_final/original.png"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">The Great Travel Meltdown of 2026</title><published>2026-04-10T07:30:00-04:00</published><updated>2026-04-10T11:55:28-04:00</updated><summary type="html">Airports are suffering a perfect storm of actual problems and passenger anxieties.</summary><link href="https://www.theatlantic.com/technology/2026/04/summer-travel-chaos-airports/686753/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686754</id><content type="html">&lt;p bis_size='{"x":179,"y":19,"w":665,"h":198,"abs_x":211,"abs_y":2170}'&gt;William Liu is grateful that he finished high school when he did. If the latest AI tools had been around then, he told me, he might have been tempted to use them to do his homework. Liu, now a sophomore at Stanford, finished high school all the way back in 2024. “I have a younger sibling who is just graduating high school,” he said. “Our educational experience has been vastly different, even though we’re just two years apart.”&lt;/p&gt;&lt;p bis_size='{"x":179,"y":247,"w":665,"h":33,"abs_x":211,"abs_y":2398}'&gt;&lt;/p&gt;&lt;p bis_size='{"x":179,"y":310,"w":665,"h":264,"abs_x":211,"abs_y":2461}'&gt;By the time Liu graduated, ChatGPT was already causing chaos in the classroom. But the automation of school is intensifying. If at first teachers worried about students using chatbots to write essays, now new agentic tools such as Claude Code are allowing students to outsource even more of their work to the machines. Need to take an online math quiz? Write a biology-lab report? 
Create a PowerPoint presentation for history class? AI can do all of this and more. One high schooler recently told me that he struggles to think of a single assignment that AI wouldn’t be able to do for him.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;As a measure of just how good AI has become at schoolwork, consider a new bot called Einstein. Several weeks ago, the tool went viral with big claims: “Einstein checks for new assignments and knocks them out before the deadline,” a website &lt;a href="https://web.archive.org/web/20260222215744/https:/companion.ai/einstein"&gt;advertising&lt;/a&gt; the bot explained. All that a student had to do was hand over their credentials for Canvas, the popular learning-management platform, and Einstein promised to do the rest. No matter the task, the bot was game: Einstein boasted that it could watch lectures, complete readings, write papers, participate in discussion forums, automatically submit homework assignments. If a quiz or a final exam was administered online, Einstein was happy to do that too.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;When I first came across Einstein, I was skeptical: Flashy AI demos have a way of overpromising and under-delivering. So I decided to test the tool out for myself. Because I’m not a college student, I enrolled in a free online introductory-statistics class. The course website explained that the class was self-paced and that it could help undergraduates, postgraduates, medical students, and even lecturers build up basic statistical knowledge. I set the bot loose, and in less than an hour, Einstein had worked through all eight modules and seven quizzes.
There were some hiccups—the bot took one quiz 15 times—but it ultimately earned a perfect score in the class. As for me? I hardly so much as read the course website.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/post-chatbot-claude-code-ai-agents/686029/?utm_source=feed"&gt;Read: AI agents are taking America by storm&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Einstein was designed to provoke. Its creator, Advait Paliwal, a 22-year-old tech entrepreneur, told me that he’d released the bot as a way of alerting educators to just how good AI is at schoolwork. “You can blame me,” he said. “But this is happening right now, and more people need to know about what’s to come.” (He has &lt;a href="https://www.chronicle.com/article/einstein-may-have-been-a-prank-but-the-agentic-ai-tool-put-higher-ed-on-notice"&gt;previously said&lt;/a&gt; that he designed Einstein’s landing page by prompting AI to make a website “that people would get angry over.”) Almost immediately after releasing Einstein, Paliwal started receiving emails from professors chastising him for creating a tool seemingly designed to perpetuate academic fraud. He took down the bot after he received multiple cease-and-desist letters, including one from Canvas’s parent company.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;To Paliwal, the backlash missed the point: “If I didn’t post about this, someone would have used the same technology and hidden it from the professors,” he said. 
“It’s actually better that they know that this exists, and they can correctly prepare for what’s to come.” The tool also, of course, gave Paliwal a moment of viral fame. Nevertheless, Einstein does seem to be an indicator of where AI in the classroom is headed. The latest bots have massive &lt;a href="https://platform.claude.com/docs/en/build-with-claude/context-windows"&gt;context windows&lt;/a&gt;, meaning that students can feed in mountains of course content such as syllabi, lecture slides, and practice exams. Today’s agentic tools can complete all kinds of tasks, such as participating in online discussion forums and taking notes on recorded lectures without student intervention. According to one &lt;a href="https://www.rand.org/pubs/research_reports/RRA4742-1.html"&gt;analysis&lt;/a&gt;, the percentage of students middle-school age or older who self-reported using AI for help with homework climbed by 14 points from May to December of last year.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Amid all of this, Silicon Valley is doubling down on its push to integrate AI into schools. In the lead-up to final exams last spring, nearly every major AI firm &lt;a href="https://www.theatlantic.com/technology/archive/2025/04/college-students-free-chatgpt/682532/?utm_source=feed"&gt;offered&lt;/a&gt; college students free (or heavily discounted) access to their paid chatbots. Now the tech industry is offering students cheap access to its agentic tools. 
Last summer, Anthropic &lt;a href="https://www.anthropic.com/news/advancing-claude-for-education"&gt;announced&lt;/a&gt; “Claude Builder Clubs”—an initiative in which students &lt;a href="https://claude.com/programs/campus"&gt;paid&lt;/a&gt; by the AI company host workshops and hackathons on their campuses. In exchange for membership in those clubs, students are given free access to Claude Code. A few weeks ago, OpenAI &lt;a href="https://x.com/OpenAIDevs/status/2035033703274201109"&gt;announced&lt;/a&gt; that it would be offering college students $100 worth of credits for Codex, its agentic coding tool.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The students affiliated with the AI companies, at least, say that the more powerful bots are helping them with their studies. Thor Warnken, an Anthropic ambassador and a biology major at the University of Florida, told me that he has designed what is effectively a personalized Khan Academy. When he takes a practice test—say, in organic chemistry—he feeds his completed work into Claude. He then asks the bot to find patterns in his errors and make new practice problems based on them. “The first practice question will be super easy, and the next one will get a little harder and a little harder, until it gets super hard,” he explained. Liu, who also serves as an ambassador for Anthropic, similarly said that the bot has made for a “fantastic” study partner. 
When he has questions during large lectures, he asks Claude, which has access to his course materials, and the bot explains concepts in real time; previously, those questions might have gone unanswered.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/08/ai-takeover-education-chatgpt/683840/?utm_source=feed"&gt;Read: The AI takeover of education is just getting started&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Instructors, as I have &lt;a href="https://www.theatlantic.com/technology/archive/2025/08/ai-takeover-education-chatgpt/683840/?gift=z9ybaencGpLU1lhvDrrW8hz2VryEc2EL8Toe3xOjyBo&amp;amp;utm_source=feed&amp;amp;utm_medium=social&amp;amp;utm_campaign=share"&gt;previously written&lt;/a&gt;, are also using plenty of AI. Canvas recently introduced a new &lt;a href="https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2026/03/23/canvas-unrolls-ai-teaching-agent?utm_medium=social&amp;amp;utm_source=linkedin"&gt;AI teaching agent&lt;/a&gt; designed to save instructors time on “low educational value tasks” such as organizing online-course modules and adjusting assignment due dates. “Faculty are using AI tools both for instructional purposes, for building course materials, but they’re also starting to play around with generative AI to actually grade and assess the learning,” Marc Watkins, a researcher at the University of Mississippi who studies AI and education, told me. 
He gave a hypothetical: “I could set my agent up, open it up in my course, go out on campus to walk across campus to get a cup of coffee at Starbucks,” he said. By the time he returned, 15 minutes later, all of the essays would be graded, and “bespoke personal feedback” would be sent out to each student. AI can save teachers time—that same grading takes him 10 or 12 hours, Watkins estimated—but in the process, the technology threatens the relationship between students and teachers that is core to education. “That’s really scary,” he said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Most people I spoke with seemed unhappy with the current trajectory of bots in the classroom. Even as growing numbers of students are using the technology, a majority &lt;a href="https://www.rand.org/pubs/research_reports/RRA4742-1.html"&gt;believe&lt;/a&gt; that the more they use AI for classwork, the more it will harm their critical-thinking skills. Natalie Lahr, a Barnard sophomore studying history and political science, doesn’t use the technology “unless it’s something that’s asked of me by a professor,” she told me, “and even in that case, I’m generally quite opposed.” In one particularly “anti-AI radicalizing” experience, Lahr met with a tutor at the college’s writing center to get help on an essay. According to Lahr, that tutor copy-pasted her essay prompt into the popular AI tool Perplexity and gave Lahr the AI-generated outline. “That was basically the end of our session,” Lahr said. 
“I had a crashout about that afterwards because I was like, &lt;em&gt;Why am I even here?&lt;/em&gt;”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Some educators are worried about “a fully automated loop”—as the Modern Language Association &lt;a href="https://www.mla.org/Resources/Advocacy/Executive-Council-Actions/2025/Statement-on-Educational-Technologies-and-AI-Agents"&gt;put it&lt;/a&gt; last fall—in which AI-generated assignments are completed and graded by AI agents. Instructors have taken to analyzing students’ Google Docs history to make sure they are typing responses live instead of pasting in text from a bot. But of course, an AI work-around exists for that too: A new suite of human-typing simulators promises to generate text to make it look as if a student is writing in real time when, really, the work is being done by AI.&lt;/p&gt;</content><author><name>Lila Shroff</name><uri>http://www.theatlantic.com/author/lila-shroff/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/gHCHWx8YU-BUYhvjavR_yLShi54=/media/img/mt/2026/04/2026_04_07_Shroff_Classroom_automation_final/original.png"><media:credit>Illustration by Akshita Chandra / The Atlantic</media:credit></media:content><title type="html">Is Schoolwork Optional Now?</title><published>2026-04-10T07:00:00-04:00</published><updated>2026-04-13T15:56:11-04:00</updated><summary type="html">Education is on the verge of becoming fully automated.</summary><link href="https://www.theatlantic.com/technology/2026/04/ai-agents-school-education/686754/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686746</id><content type="html">&lt;p&gt;For the past several weeks, 
Anthropic says it has secretly possessed a tool potentially capable of commandeering most computer servers in the world. This is a bot that, if unleashed, might be able to hack into banks, exfiltrate state secrets, and fry crucial infrastructure. Already, according to the company, this AI model has identified thousands of major cybersecurity vulnerabilities—including exploits in every single major operating system and browser. This level of cyberattack is typically available only to elite, state-sponsored hacking cells in a very small number of countries, including China, Russia, and the United States. Now it’s in the hands of a private company.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;On Tuesday, the company &lt;a href="https://www.anthropic.com/glasswing"&gt;officially announced&lt;/a&gt; the existence of the model, known as Claude Mythos Preview. For now, the bot will be available only to a consortium of many of the world’s biggest tech companies—including Apple, Microsoft, Google, and Nvidia. These partners can use Mythos Preview to scan their software for bugs and exploits and fix them. Other than that, Anthropic will not immediately release Mythos Preview to the public, having determined that doing so without more robust safeguards would be too dangerous.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For years, cybersecurity experts have been warning about the chaos that highly capable hacking bots could usher in. Because AI models have become so capable at coding, they have also become extremely good at finding vulnerabilities in all manner of software. Even before Mythos Preview, AI companies such as Anthropic, OpenAI, and Google all reported instances of their AI models being used in sophisticated cyberattacks by both criminal and state-backed groups. 
As Giovanni Vigna, who directs a federal research institute dedicated to AI-orchestrated cyberthreats, told me &lt;a href="https://www.theatlantic.com/technology/2025/11/anthropic-hack-ai-cybersecurity/685061/?utm_source=feed"&gt;last fall&lt;/a&gt;: You can have a million hackers at your fingertips “with the push of a button.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="http://Chatbots%20Are%20Becoming%20Really,%20Really%20Good%20Criminals"&gt;Read: Chatbots are becoming really, really good criminals&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Still, Mythos Preview appears to represent not an incremental change but the beginning of a paradigm shift. Until recently, the biggest advantage of AI-assisted hacking was not ingenuity, per se, so much as speed and scale. These bots could be as good as many human cybersecurity experts, but not necessarily better—rather, having an army of 1 million virtual, tireless hackers allows you to launch more attacks against more targets than ever before. Even Anthropic reports that its current state-of-the-art, public model, Claude Opus 4.6, was &lt;a href="https://red.anthropic.com/2026/mythos-preview/"&gt;significantly less capable&lt;/a&gt; at autonomously finding cyber exploits. But Mythos Preview is different. According to Anthropic, the bot has been able to find thousands of software bugs that had gone undetected, sometimes for decades, a sophistication and speed of attack previously thought by many to be impossible. The model has found a nearly 30-year-old vulnerability in one of the world’s most secure operating systems. 
The Anthropic researcher Sam Bowman posted on X that he was eating a sandwich in the park when &lt;a href="https://x.com/sleepinyourhat/status/2041584808514744742"&gt;he got an email from Mythos Preview&lt;/a&gt;: The bot had broken out of the company’s internal sandbox and gained access to the internet.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The exact capabilities of Mythos Preview are hard to judge, because Anthropic has not released the model. Identifying a vulnerability is not the same as being able to exploit it undetected—in the same way that a robber can have the keys to a bank but still needs to deal with security cameras. And Anthropic surely stands to benefit from its opaque announcement: The company can claim to have developed an ultra-advanced model, while also appearing to act responsibly by preventing the worst-case cybersecurity scenarios. Indeed, the decision to not release Mythos Preview bolsters Anthropic’s &lt;a href="https://www.theatlantic.com/technology/2026/01/anthropic-is-at-war-with-itself/684892/?utm_source=feed"&gt;self-styled image&lt;/a&gt; as the AI industry’s good guy. (Anthropic did not immediately respond to emailed questions about Mythos Preview.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Of course, a move can be both strategic and conscientious. Should what Anthropic shared be remotely accurate, it heralds a troubling future. Anthropic has a tool that “could damage the operations of critical infrastructure and government services in every country on Earth,” Dean Ball, a former AI adviser to the Trump administration, &lt;a href="https://www.hyperdimensional.co/p/new-sages-unrivalled"&gt;wrote&lt;/a&gt; this week. The ability to defend against such cyberattacks is integral to the basic functioning of society. And the ability to launch such attacks is integral to modern warfare. 
Anthropic may have just scaled its way into becoming a major geopolitical force.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Perhaps more concerning than the reported capabilities of Mythos Preview is that other companies are not far behind. OpenAI is &lt;a href="https://www.axios.com/2026/04/09/openai-new-model-cyber-mythos-anthopic"&gt;reportedly&lt;/a&gt; set to release its own similarly powerful model to a select group of companies. It’s very possible, even likely, that Google DeepMind, xAI, and AI firms in China are next. How scrupulous they will be is less clear. Even cheaper or open-source AI models from smaller companies could soon enable this sort of hacking—which would unsettle the basic security and privacy that undergird the modern internet.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Hacking bots are not the only domain through which a handful of AI companies are gaining tremendous influence. The technology has become crucial to military operations. Even as the Pentagon has engaged in a &lt;a href="https://www.theatlantic.com/technology/2026/03/dean-ball-anthropic-interview/686226/?utm_source=feed"&gt;public feud&lt;/a&gt; with Anthropic, Claude was reportedly used in the bombing of Iran and, before that, the Venezuela raid in January. Last month, the Department of Defense signed a contract with OpenAI that &lt;a href="https://www.theatlantic.com/technology/2026/03/openai-pentagon-contract-spying/686282/?utm_source=feed"&gt;very likely allows&lt;/a&gt; the government to use the firm’s AI systems to enable unprecedented surveillance of U.S. citizens. (OpenAI has maintained that the Pentagon agreed not to use its products for domestic surveillance.) At the same time, bots from OpenAI, Anthropic, Google DeepMind, and beyond are becoming infrastructure: used by nearly all of the world’s biggest businesses, schools, health-care systems, and public agencies. 
This is a large part of the reason that Iran has &lt;a href="https://www.theatlantic.com/technology/2026/03/ai-boom-polycrisis/686559/?utm_source=feed"&gt;struck or threatened to strike&lt;/a&gt; Amazon and OpenAI data centers in the Middle East—the facilities are high-impact targets on par with the oil fields that Iran has also targeted. Meanwhile, so much money is pouring into the AI boom that these companies are functionally &lt;a href="https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/?utm_source=feed"&gt;holding the global economy hostage&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In other words, AI companies are remaking the world. Consider how Elon Musk’s network of Starlink satellites has allowed him to &lt;a href="https://www.newyorker.com/magazine/2023/08/28/elon-musks-shadow-rule"&gt;repeatedly&lt;/a&gt; &lt;a href="https://www.theatlantic.com/national-security/2026/02/elon-musk-ukraine-russia-starlink/686155/?utm_source=feed"&gt;tip the scales&lt;/a&gt; in Russia’s invasion of Ukraine. Generative AI offers even more possibilities. These companies can or could soon have the capability to launch major cyberattacks, conduct mass surveillance, influence military operations, cause huge swings in financial and labor markets, and reorient global supply chains. In theory, nothing governs these companies other than their own morals and their investors. They are developing the power to upend nations and economies. 
These are the AI superpowers.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/4Vt-nOTp2FVmZiNnJlDYMwVlQwY=/media/img/mt/2026/04/2026_03_07_Ai_mpg/original.gif"><media:credit>Illustration by Matteo Giuseppe Pani / The Atlantic</media:credit></media:content><title type="html">Claude Mythos Is Everyone’s Problem</title><published>2026-04-09T13:22:00-04:00</published><updated>2026-04-10T12:52:49-04:00</updated><summary type="html">What happens when AI can hack everything?</summary><link href="https://www.theatlantic.com/technology/2026/04/claude-mythos-hacking/686746/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686721</id><content type="html">&lt;p dir="ltr"&gt;Seeing the Earth from space will change you so profoundly that there’s a term for it: &lt;em&gt;the overview effect&lt;/em&gt;. The extreme minority who have had the privilege describe it similarly. You see something that you were never meant to see, namely the Earth just sitting there, with the entire universe surrounding it. Gazing upon the blue marble, surrounded by its oh-so-thin green layer of atmosphere, the auroras flickering on the fringes, is not merely awe-inspiring but something of a factory reset for one’s sense of self. Almost everyone tears up at the sight.&lt;/p&gt;&lt;p dir="ltr"&gt;“You don’t see borders, you don’t see religious lines, you don’t see political boundaries. All you see is Earth, and you see that we are way more alike than we are different,” Christina Koch, one of the four astronauts on the Artemis II mission, &lt;a href="https://www.nasa.gov/centers-and-facilities/johnson/the-overview-effect-astronaut-perspectives-from-25-years-in-low-earth-orbit/"&gt;told&lt;/a&gt; NASA recently. 
Jim Lovell, describing the view on Apollo 8 from the dark side of the moon back in the late 1960s, &lt;a href="https://www.chicagomag.com/chicago-magazine/november-2019/jim-lovell/"&gt;told&lt;/a&gt; &lt;em&gt;Chicago&lt;/em&gt; magazine that he could put his thumb up to the window, and in that moment, “everything I ever knew was behind it. Billions of people. Oceans. Mountains. Deserts. And I began to wonder, where do I fit into what I see?”&lt;/p&gt;&lt;p dir="ltr"&gt;Where some see immeasurable beauty, others see fragility. Marina Koren &lt;a href="https://www.theatlantic.com/magazine/archive/2023/01/astronauts-visiting-space-overview-effect-spacex-blue-origin/672226/?utm_source=feed"&gt;previously reported&lt;/a&gt; in this magazine that, upon seeing the Earth from space, one astronaut “became absolutely convinced we would kill ourselves off between 500 and 1,000 years from now.” Famously, the actor William Shatner has written that his brief experience looking at the Earth produced a profound sadness. “What I was feeling was grief, and the grief was for the Earth,” he told Koren in 2022.&lt;/p&gt;&lt;p dir="ltr"&gt;I’ve never been to space, but for the past few days, I’ve oscillated between these emotions—awe and despair—as NASA has continued to post photos of the Earth and moon from Artemis II. Yesterday, the Integrity spacecraft came within 4,067 miles of the moon during its lunar flyby. For 40 minutes, it lost all contact with humanity. At one point they were 252,756 miles away from Earth—the farthest from the planet anyone has ever traveled. For seven hours, the astronauts—Koch, Reid Wiseman, Victor Glover, and Jeremy Hansen—were able to gaze upon a part of the lunar surface previously unseen by human eyes. 
According to NASA, the astronauts took roughly &lt;a href="https://www.theatlantic.com/photography/2026/04/moon-joy-photos-artemis-ii/686709/?utm_source=feed"&gt;10,000 photos&lt;/a&gt;, which feels perfectly proportional for such an occasion.&lt;/p&gt;&lt;p dir="ltr"&gt;A few of these photos—some taken before the lunar pass—have messed me up pretty good. A photo of the Earth &lt;a href="https://www.nasa.gov/image-article/earthset/"&gt;appearing&lt;/a&gt; to set behind the moon. A picture, taken through a window of the Orion spacecraft, revealing the tiniest crescent Earth growing smaller as the capsule heads toward the moon. As one &lt;a href="https://www.nasa.gov/image-detail/fd04_gmt95-fd4-pao-koch-10/"&gt;caption&lt;/a&gt; on the photo notes, “The Earth is illuminated by the blackness of space.” I’ve experienced these photos the way I experience most media: through the puny screen of my phone, with the awesome, life-affirming images sandwiched between updates about a golf tournament, oil prices, the MLB’s new automated ball-strike system, and reports of the U.S. president threatening the civilizational destruction of Iran.&lt;/p&gt;&lt;p dir="ltr"&gt;On a good, calm day it is hard to know what to make of photos that show, in no uncertain terms, that every single thing you will ever and could ever know is simultaneously galactically insignificant and unspeakably beautiful and precious. Today, the world held its breath waiting for the 8 p.m. eastern deadline Trump set for Iran to agree to a deal to reopen the Strait of Hormuz. If his terms weren’t met, he posted this morning, “a whole civilization will die tonight, never to be brought back again.”&lt;/p&gt;&lt;p dir="ltr"&gt;Trump’s threats triggered denouncements from Democratic lawmakers as well as the podcasters Tucker Carlson and Alex Jones, and incited no small amount of panic from people who have interpreted Trump’s post as a suggestion of nuclear warfare. 
Then, this evening, an hour before the deadline, Trump &lt;a href="https://www.nytimes.com/live/2026/04/07/world/iran-war-trump-news?smid=url-share"&gt;announced&lt;/a&gt; a two-week cease-fire deal, which Pakistan helped broker.&lt;/p&gt;&lt;p dir="ltr"&gt;Trump’s bluster, no matter how serious, has always been impossible to parse. (He’s famous for chickening out, backpedaling, or pretending like he never said what he said.) Yet one way to view our current age is as a series of existential reminders, be they nuclear proliferation, climate change, or pandemics. In Silicon Valley over the past half decade, civilizational extinction at the hands of hypothetical technological advances has moved from the realm of pure science fiction to a marketing tactic to an immediate concern for a &lt;a href="https://www.theatlantic.com/technology/archive/2025/08/ai-doomers-chatbots-resurgence/683952/?utm_source=feed"&gt;subset of true believers&lt;/a&gt;. Humans may not want to die, but as a species we seem eager to invent and tout new ways to threaten our existence.&lt;/p&gt;&lt;p dir="ltr"&gt;And yet at the very same moment, four flesh-and-blood human beings are hundreds of thousands of miles away taking pictures of our delicate little world. Their mission and their photos remind us of something else entirely—of a yearning to learn, to explore, and to band together to become something greater than the sum of our parts. If Trump’s claims of mass destruction represent humanity at its smallest, weakest, and most cowardly, then those who are gazing upon our planet right now from afar represent the best of what we have to offer. How else to hear these &lt;a href="https://www.facebook.com/NASAArtemis/videos/1458839852555640/"&gt;words from Koch&lt;/a&gt;:&lt;/p&gt;&lt;blockquote&gt;
&lt;p dir="ltr"&gt;We will explore. We will build. We will build ships. We will visit again. We will construct science outposts. We will drive rovers. We will do radio astronomy. We will found companies. We will bolster industry. We will inspire. But ultimately, we will always choose Earth. We will always choose each other.&lt;/p&gt;
&lt;/blockquote&gt;&lt;p dir="ltr"&gt;As Lovell looked down at the Earth in 1968, an old saying popped into his head: &lt;em&gt;I hope to go to heaven when I die&lt;/em&gt;. Then he &lt;a href="https://www.chicagomag.com/chicago-magazine/november-2019/jim-lovell/"&gt;realized&lt;/a&gt;, “I actually went to heaven when I was born.”&lt;/p&gt;&lt;p dir="ltr"&gt;There is something disorienting, horrible, and somehow fitting in the timing of all of this. That one man with the means to do it would threaten destruction of a part of our planet at the same moment its beauty and fragility are on full display. We are, in this tense moment, living with our own overview effect. Four are watching from afar. But the rest of us are watching too—left to reckon with our own place on the pale blue dot, reminded of all the ways we might die, and all the reasons for which to live.&lt;/p&gt;&lt;hr&gt;&lt;p&gt;&lt;cite&gt;&lt;small&gt;*Sources: NASA; Space Frontiers / Getty; Chip Somodevilla / Getty.&lt;/small&gt;&lt;/cite&gt;&lt;/p&gt;</content><author><name>Charlie Warzel</name><uri>http://www.theatlantic.com/author/charlie-warzel/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/LcrxaisZMT_VRf3WwhoBrw03XSE=/media/img/mt/2026/04/2026_04_07_An_Incredibly_Weird_Time_to_Be_Alive/original.jpg"><media:credit>Illustration by Anna Ruch / The Atlantic*</media:credit></media:content><title type="html">An Incredibly Weird Time to Be Alive</title><published>2026-04-07T19:56:00-04:00</published><updated>2026-04-08T11:29:44-04:00</updated><summary type="html">The world witnessed the best and worst of humanity in a single week.</summary><link href="https://www.theatlantic.com/technology/2026/04/trump-iran-artemis-ii-overview-effect/686721/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686603</id><content type="html">&lt;p dir="ltr"&gt;After George Mallon had his blood drawn at a routine physical, he learned that 
something might be gravely wrong. The preliminary results showed he might have blood cancer. Further tests would be needed. Left in suspense, he did what so many people do these days: He opened ChatGPT.&lt;/p&gt;&lt;p dir="ltr"&gt;For nearly two weeks, Mallon, a 46-year-old in Liverpool, England, spent hours each day talking with the chatbot about the potential diagnosis. “It just sent me around on this crazy Ferris wheel of emotion and fear,” Mallon told me. His follow-up tests showed it wasn’t cancer after all, but he could not stop talking to ChatGPT about health concerns, querying the bot about every sensation he felt in his body for months. He became convinced that something must be wrong—that a different cancer, or maybe multiple sclerosis or ALS, was lurking in his body. Prompted by his conversations with ChatGPT, he saw various specialists and got MRIs on his head, neck, and spine.&lt;/p&gt;&lt;p dir="ltr"&gt;Mallon told me he believes that the cancer scare and ChatGPT together caused him to develop this crippling health anxiety. But he blames the chatbot for keeping him spiraling even after the additional tests indicated that he wasn’t sick. “I couldn’t put it down,” he said. The chatbot kept the conversation going and surfaced articles for him to read. Its humanlike replies led Mallon to view it as a friend.&lt;/p&gt;&lt;p dir="ltr"&gt;The first time we met over a video call, Mallon was still shaken by the experience even though the better part of a year had passed. He told me he was “seven months sober” from talking with the chatbot about health symptoms after seeking help from a mental-health coach and starting anxiety medication. But he also feared he could get sucked back in at any moment. When we spoke again a few months later, he shared that he had briefly fallen into the routine again.&lt;/p&gt;&lt;p dir="ltr"&gt;Others seem to be struggling with this problem. 
Online communities focused on health anxiety—an umbrella term for excessive worrying about illness or bodily sensations—are filling up with conversations about ChatGPT and other AI tools. Some say it makes them spiral more than ever, while others who feel like it helps in the moment admit it’s morphed into a compulsion they struggle to resist. I spoke with four therapists who treat the condition (including my own); they all said that they’re seeing clients use chatbots in this way, and that they’re concerned about how AI can lead people to constantly seek reassurance, perpetuating the condition. “Because the answers are so immediate and so personalized, it’s even more reinforcing than Googling. This kind of takes it to the next level,” Lisa Levine, a psychologist specializing in anxiety and obsessive-compulsive disorder, and who treats patients with health anxiety specifically, told me.&lt;/p&gt;&lt;p dir="ltr"&gt;Experts believe that health anxiety may affect &lt;a href="https://www.health.harvard.edu/mind-and-mood/always-worried-about-your-health-you-may-be-dealing-with-health-anxiety-disorder"&gt;upwards of 12 percent&lt;/a&gt; of the population. Many more people struggle with other forms of anxiety and OCD that could similarly be exacerbated by AI chatbots. In October X posts, OpenAI CEO Sam Altman &lt;a href="https://x.com/sama/status/1978129344598827128"&gt;declared&lt;/a&gt; the serious mental-health issues surrounding ChatGPT to be mitigated, saying that serious problems affect “a very small percentage of users in mentally fragile states.” But mental fragility is not a fixed state; a person can seem fine until they suddenly are not.&lt;/p&gt;&lt;hr class="c-section-divider"&gt;&lt;p dir="ltr"&gt;Altman said during last year’s launch of GPT-5, the latest family of AI models that power ChatGPT, that health conversations are one of the top ways consumers use the chatbot. 
According to data from OpenAI &lt;a href="https://www.axios.com/2026/01/05/chatgpt-openai-health-insurance-aca"&gt;published by Axios&lt;/a&gt;, more than 40 million people turn to the chatbot for medical information every day. In January, the company leaned into this by introducing a feature called ChatGPT Health, encouraging users to upload their medical documents, test results, and data from wellness apps, and to talk with ChatGPT about their health.&lt;/p&gt;&lt;p dir="ltr"&gt;The value of these conversations, as OpenAI &lt;a href="https://www.linkedin.com/posts/openai_introducing-chatgpt-health-activity-7414755221135978496-nUJ5?utm_source=share&amp;amp;utm_medium=member_desktop&amp;amp;rcm=ACoAAAtg6KQBTIu4mpiQ-DkbqGLSQXuoBcKdQbo"&gt;envisions it&lt;/a&gt;, is to “help you feel more informed, prepared, and confident navigating your health.” Chatbots certainly might help some people in this regard; for instance, The New York Times recently &lt;a href="https://www.nytimes.com/2026/04/02/well/live/ai-illness-claude-chatgpt.html"&gt;reported&lt;/a&gt; on women turning to chatbots to pin down diagnoses for complex chronic illnesses. Yet OpenAI is also embroiled in controversy about the effects that an overreliance on ChatGPT may have. Putting aside the potential for such products to share inaccurate information, OpenAI has been accused of contributing to mental breakdowns, delusions, and suicides among ChatGPT users in a string of lawsuits against the company. 
Last November, &lt;a href="https://www.wsj.com/tech/ai/seven-lawsuits-allege-openai-encouraged-suicide-and-harmful-delusions-25def1a3?gaa_at=eafs&amp;amp;gaa_n=AWEtsqfF1SZgHvfcl1y7drFVE9s76HAE_jlMshiQCrZCKTyZX8mYxkyXiCf7&amp;amp;gaa_ts=69d0150a&amp;amp;gaa_sig=O5ee1yMSSmCqultAR6PERyuZ1vctZ3bs8VN7v_Z37STSqnRGvln1hK818SIWV5KCXX1v8yuEDoxdfqTSQSe_tg%3D%3D"&gt;seven&lt;/a&gt; were simultaneously filed, alleging that OpenAI rushed to release its flagship GPT-4o model and intentionally designed it to keep users engaged and foster emotional reliance. (The company has since retired the model.) In New York, a bill that would ban AI chatbots from giving “substantive” medical advice or acting as a therapist &lt;a href="https://statescoop.com/new-york-bill-would-ban-chatbots-legal-medical-advice/"&gt;is under consideration&lt;/a&gt; as part of a package of bills to regulate AI chatbots.&lt;/p&gt;&lt;p dir="ltr"&gt;In response to a request for comment, an OpenAI spokesperson directed me to a company &lt;a href="https://openai.com/index/update-on-mental-health-related-work/"&gt;blog post&lt;/a&gt; that says: “Our thoughts are with all those impacted by these incredibly heartbreaking situations. We continue to improve ChatGPT’s training to recognize and respond to signs of distress, de-escalate conversations in sensitive moments, and guide people toward real-world support, working closely with mental health clinicians and experts.” The spokesperson also told me that OpenAI continues to improve ChatGPT’s safeguards in long conversations related to suicide or self-harm. The company has previously said it is &lt;a href="https://www.nytimes.com/2025/11/06/technology/chatgpt-lawsuit-suicides-delusions.html"&gt;reviewing the claims&lt;/a&gt; in the November lawsuits. 
It has &lt;a href="https://www.nbcnews.com/tech/tech-news/openai-denies-allegation-chatgpt-teenagers-death-adam-raine-lawsuit-rcna245946"&gt;denied allegations&lt;/a&gt; in a lawsuit filed in August that ChatGPT was responsible for a teen’s suicide. (OpenAI has a corporate partnership with The Atlantic’s business team.)&lt;/p&gt;&lt;p dir="ltr"&gt;Two years ago, I fell into a cycle of health anxiety myself, sparked by a close friend’s traumatic illness and my own escalating chronic pain and mysterious symptoms. At one point, after I was managing much better, I tried out a few conversations with ChatGPT for a gut-check about minor health issues. But the risk of spiraling was glaring; seeking reassurance like that went against everything I’d learned in therapy. I was thankful I hadn’t thought to turn to AI when I was in the throes of anxiety. I told myself, Never again.&lt;/p&gt;&lt;p dir="ltr"&gt;Meanwhile, in the health-anxiety communities I’m part of, I saw people talk more and more about looking to chatbots for comfort. Many say it has made their health anxiety worse. Others say AI has been extraordinarily helpful, calming them down when they’re caught in a cycle of unrelenting worry. And it is that last category that is, in fact, most concerning to psychologists. Health anxiety often functions as a form of OCD with obsessive thoughts and “checking,” or reassurance-seeking compulsions. Therapeutic best practices for managing health anxiety hinge on building self-trust, tolerating uncertainty, and resisting the urge to seek reassurance, but ChatGPT eagerly provides personalized comfort and is available 24/7. 
That type of feedback only feeds the condition—“a perfect storm,” said Levine, who has seen talking with chatbots for reassurance become a new compulsion in and of itself for some of her clients.&lt;/p&gt;&lt;hr class="c-section-divider"&gt;&lt;p dir="ltr"&gt;Extended, continuous exchanges have proved to be a common issue with chatbots and a factor in reported cases of &lt;a href="https://www.theatlantic.com/technology/2025/12/ai-psychosis-is-a-medical-mystery/685133/?utm_source=feed"&gt;AI-associated “psychosis.”&lt;/a&gt; Research conducted at OpenAI and the MIT Media Lab &lt;a href="https://cdn.openai.com/papers/15987609-5f71-433c-9972-e91131f399a1/openai-affective-use-study.pdf"&gt;has found&lt;/a&gt; that longer ChatGPT sessions can lead to addiction, preoccupation, withdrawal symptoms, loss of control, and mood modification. &lt;a href="https://www.nytimes.com/2025/11/23/technology/openai-chatgpt-users-risks.html?unlocked_article_code=1.3U8.3A1u.ZAX9W46WWg-A&amp;amp;smid=url-share"&gt;OpenAI has also acknowledged&lt;/a&gt; that its safety guardrails can “degrade” in lengthy conversations. Over the 10 days of his cancer scare, Mallon told me, “I must have clocked over 100 hours minimum on ChatGPT, because I thought I was on the way out. There should have been something in there that stopped me.”&lt;/p&gt;&lt;p dir="ltr"&gt;In an October &lt;a href="https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/"&gt;blog post&lt;/a&gt;, OpenAI said it consulted more than 170 mental-health professionals to help ChatGPT more reliably recognize signs of emotional distress in users. The company also said it updated ChatGPT to give users “gentle reminders” to take breaks during long sessions. 
OpenAI would not tell me specifically how long into an exchange ChatGPT nudges users to take a break or how often users actually take a break versus continue chatting after being served this reminder.&lt;/p&gt;&lt;p dir="ltr"&gt;One psychologist I spoke with, Elliot Kaminetzky, an expert on OCD who is optimistic about the use of AI for therapy, suggested that people could tell the chatbot they have health anxiety and “program” it to let them ask about their concerns just once—in theory, preventing the chatbot from goading the user to interact further. Other therapists expressed concern that this is still reassurance-seeking and should be avoided.&lt;/p&gt;&lt;p dir="ltr"&gt;When I tested the idea of instructing ChatGPT to restrict how much I could talk to it about health worries, it didn’t work. ChatGPT would acknowledge that I had put this guardrail on our conversations, though it also prompted me to keep responding and allowed me to keep asking questions, which it readily answered. It also flattered me at every turn, earning its reputation for sycophancy. For example, when I told it about a fictional pain in my right side, it cited the guardrail and suggested relaxation techniques, but ultimately took me through a series of possible causes that escalated in severity. It went into detail on risk factors, survival rates, treatments, recovery, and even what to expect if I were to go to the ER. All of this took minimal prompting, and the chatbot continued the conversation whether I acted worried or assured; it also allowed me to ask about the same thing as soon as an hour later, as well as multiple days in a row. “That’s a good and very reasonable question,” it would tell me, or, “I like how you’re approaching it.”&lt;br&gt;
&lt;br&gt;
“Perfect — that’s a really smart step.”&lt;br&gt;
&lt;br&gt;
“Excellent thinking — that’s exactly the right approach.”&lt;/p&gt;&lt;p dir="ltr"&gt;OpenAI did not respond to a request for comment about my informal experiment. But the experience left me wondering whether, as millions of people use chatbots daily—forming relationships and dependencies, becoming emotionally entangled with AI—it will ever be possible to isolate the benefits of a health consultant at your fingertips from the dangerous pull that some people are bound to feel. “I talked to it like it was a friend,” Mallon said. “I was saying stupid things like, ‘How are you today?’ And at night, I’d log off and go, ‘Thanks for today. You’ve really helped me.’”&lt;/p&gt;&lt;p dir="ltr"&gt;In one of the exchanges where I continuously prompted ChatGPT with worried questions, only minutes passed between its first response, which suggested that I get checked out by a doctor, and its detailing for me which organs fail when an infection leads to septic shock. Every single reply from ChatGPT ended with its encouraging me to continue the conversation—either prompting me to provide more information about what I was feeling or asking me if I wanted it to create a cheat sheet of information, a checklist of what to monitor, or a plan to check back in with it every day.&lt;/p&gt;</content><author><name>Sage Lazzaro</name><uri>http://www.theatlantic.com/author/sage-lazzaro/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/04a9MgXOKRBCEcb9hz7XcbhDVR8=/media/img/mt/2026/03/2025_12_10_Deena_So_Oteh_The_Atlantic_update/original.jpg"><media:credit>Illustration by Deena So Oteh</media:credit></media:content><title type="html">The ChatGPT Symptom Spiral</title><published>2026-04-06T18:30:00-04:00</published><updated>2026-04-07T16:16:58-04:00</updated><summary type="html">Be careful asking chatbots about your health.</summary><link href="https://www.theatlantic.com/technology/2026/04/chatgpt-health-anxiety/686603/?utm_source=feed" rel="alternate" 
type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686686</id><content type="html">&lt;p&gt;Late last month, a large crowd gathered in downtown San Francisco to demand that the AI industry stop developing more powerful bots. Holding signs and banners reading &lt;span class="smallcaps"&gt;Stop the AI Race&lt;/span&gt; and &lt;span class="smallcaps"&gt;Don’t Build Skynet&lt;/span&gt;, the protesters marched through the city and gave speeches outside the offices of Anthropic, OpenAI, and xAI. The crowd demanded that these companies halt efforts to create superintelligent machines—and, in particular, AI models that can develop future AI models. Such a technology, attendees said, could extinguish all human life.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;At AI protests and happy hours, inside start-ups and major companies, the tech world is in a frenzy over the same thing: Computers that make themselves smarter. Over the past year, the top AI companies have taken to loudly bragging about internal efforts to automate their own research. OpenAI recently released a new model it &lt;a href="https://openai.com/index/introducing-gpt-5-3-codex/"&gt;described&lt;/a&gt; as “instrumental in creating itself.” Within the next six months, the company aims to debut what it has described as an “intern-level AI research assistant.” Meanwhile, Anthropic says that as much as 90 percent of the company’s code is already written by Claude.&lt;/p&gt;&lt;p&gt;“We are starting to see AI progress feed back on itself,” Nick Bostrom, an influential Swedish philosopher who studies AI risk, told us. Within Silicon Valley, many insiders believe that we are teetering on the precipice of a world in which AI can rapidly improve its own capabilities. Instead of waiting for months between new machine-learning breakthroughs, we might wait weeks. Imagine AI advancing faster and faster.&lt;/p&gt;&lt;p&gt;The idea of self-improving bots is nothing new. When the statistician I. J. 
Good first introduced the concept of recursive self-improvement in the 1960s, he wrote that machines capable of training their own, even more capable successors would be “the last invention” society ever needed to make. But just a few years ago, any notion of actually making such AI models was on the back burner. When ChatGPT couldn’t reliably add and subtract, &lt;a href="https://www.theatlantic.com/technology/archive/2024/06/chatgpt-citations-rag/678796/?utm_source=feed"&gt;let alone search the web&lt;/a&gt;, the notion that AI programs would soon be able to do world-class machine-learning research seemed laughable. Even as tech companies made claims about the imminent arrival of “artificial general intelligence,” the capabilities needed for a bot to accelerate or even direct AI research seemed to &lt;em&gt;exceed&lt;/em&gt; those of AGI.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/do-you-feel-agi-yet/685845/?utm_source=feed"&gt;Read: Do you feel the AGI yet?&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Now, as AI models have &lt;a href="https://www.theatlantic.com/technology/2026/02/post-chatbot-claude-code-ai-agents/686029/?utm_source=feed"&gt;become significantly better at coding&lt;/a&gt;, Silicon Valley has become hooked on the idea of self-improving machines. AI research involves a lot of grunt work—curating large data sets, running repeated experiments—that can be made more efficient with the help of coding bots. Dario Amodei, Anthropic’s CEO, has &lt;a href="https://www.dwarkesh.com/p/dario-amodei-2"&gt;estimated&lt;/a&gt; that coding tools speed up his company’s overall workflows by 15 to 20 percent.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But the information that top AI firms share about how and the extent to which they have automated internal research is patchy at best. 
When Anthropic says that Claude writes almost all of its code, we don’t know how much human supervision was required. (An Anthropic spokesperson declined a request for an interview, but pointed us to a recent &lt;a href="https://www.nytimes.com/2026/02/24/opinion/ezra-klein-podcast-jack-clark.html"&gt;podcast&lt;/a&gt; in which Jack Clark, the company’s head of policy, said one of his biggest priorities this year is to better understand “the extent to which we are automating aspects of A.I. development.”) There are also few details about OpenAI’s forthcoming AI “intern.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;A company spokesperson described it to us as a system that could contribute to research workflows by, for instance, conducting literature reviews or interpreting results of experiments. (&lt;em&gt;The Atlantic &lt;/em&gt;has a corporate partnership with OpenAI.) One concrete example of how AI is being used to automate research comes from Google DeepMind: Last year, the company developed an AI coding agent called AlphaEvolve, which according to research published by the firm was able to make Google’s global data-center fleet 0.7 percent more computationally efficient on average and cut the overall training time of Gemini by 1 percent.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/post-chatbot-claude-code-ai-agents/686029/?utm_source=feed"&gt;Read: AI agents are taking America by storm&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;All of these current approaches to self-improving AI are not recursive but piecemeal. AI tools can write code, find small optimizations, and generally make discrete parts of the AI research process faster. It’s impressive that machines are able to at least incrementally improve their own abilities, but right now humans still play an essential role. 
AI research has many components: curating training data, proposing new hypotheses, setting up experiments to test them, and deciding how to allocate scarce computing resources. Eventually, the thinking goes, recursively self-improving AI models will make the leap from rote programming to having real research “taste”—as AI insiders call the mix of human creativity and judgment exhibited by top software engineers. Instead of humans coming up with ideas for new experiments, the bots will do this themselves.&lt;/p&gt;&lt;p&gt;AI boosters and doomers alike believe that we’re not far from that future. Sam Altman says that by 2028, OpenAI plans to have developed a fully “automated AI researcher.” By then, “we are pretty confident we will have systems that can make more significant discoveries,” the company &lt;a href="https://openai.com/index/ai-progress-and-recommendations/"&gt;said&lt;/a&gt; in a recent blog post. Based on the speed of recent advances in AI, Eli Lifland, a researcher at the AI Futures Project, has forecast that AI research and development could be fully automated by 2032. After all, a few years ago, top models could successfully do only things that would take a human developer seconds; now they autonomously complete tasks that would take humans hours. “I don’t expect a reason for it to slow down,” Neev Parikh, a researcher at METR, a nonprofit that studies AI coding capabilities, told us.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;There are plenty of reasons to be skeptical that AI research will be fully automated over such a short time horizon. Coding bots are designed to execute directions, but developing an AI with &lt;a href="https://www.theatlantic.com/technology/archive/2025/06/good-taste-ai/683101/?utm_source=feed"&gt;research taste&lt;/a&gt; might require some kind of transformative breakthrough. 
Not to mention the various constraints on AI development—including the availability of funding, chips, and energy for data centers—that threaten to stall progress at any time. For now, “the overall pipeline to realize this self-improvement loop is still yet to be developed,” Pushmeet Kohli, DeepMind’s vice president of science and strategic initiatives, told us. A bot can optimize things, but it doesn’t “have anything to optimize &lt;em&gt;for&lt;/em&gt;,” Kohli said. “That’s where the human comes in.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/magazine/2026/04/ai-data-centers-energy-demands/686064/?utm_source=feed"&gt;Read: Inside the dirty, dystopian world of AI data centers&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Ultimately, even if the most fantastical dreams of recursive self-improvement turn out to be little more than a marketing ploy, marginal improvements in automating research are likely to further accelerate the pace of AI development. “This could change the dynamics of AI competition, alter AI geopolitics, and much more,” Dean Ball, &lt;a href="https://www.theatlantic.com/technology/2026/03/dean-ball-anthropic-interview/686226/?utm_source=feed"&gt;a former Trump adviser on AI&lt;/a&gt;, recently &lt;a href="https://www.hyperdimensional.co/p/on-recursive-self-improvement-part"&gt;wrote&lt;/a&gt;. Governments and civil society are already lagging. American institutions are in many ways still adapting to the internet—the IRS still processes tax returns using COBOL, a programming language that was released in 1960. Should AI models progress faster, public policy, including regulations on safety and security, has even less hope of keeping up. Bostrom, the philosopher, expressed a sort of resignation about the AI future when we spoke. 
He used to call himself a “fretful optimist,” he said, but now he’s a “moderate fatalist.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;In a strange way, none of the predictions about recursive self-improvement needs to be true for them to matter. Last year, a team of academics interviewed 25 leading researchers at DeepMind, OpenAI, Anthropic, Meta, UC Berkeley, Princeton, and Stanford. Twenty of them identified the automation of AI research as among the industry’s “most severe and urgent” risks. Now these dramatic warnings are gaining a growing audience. “Human beings could actually lose control over the planet,” Senator Bernie Sanders recently warned Congress, sounding just like the San Francisco protesters. Yet again, the AI industry has found a way to ratchet up the hype behind its technology.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><author><name>Lila Shroff</name><uri>http://www.theatlantic.com/author/lila-shroff/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/c3WP_48GLb1cNMqUeRDDbWFK0Ag=/media/img/mt/2026/04/2026_4_1_AI/original.png"><media:credit>Illustration by The Atlantic. Source: Getty.</media:credit></media:content><title type="html">Silicon Valley Is in a Frenzy Over Bots That Build Themselves</title><published>2026-04-03T13:35:00-04:00</published><updated>2026-04-06T10:29:54-04:00</updated><summary type="html">How close are we really to self-improving AI?</summary><link href="https://www.theatlantic.com/technology/2026/04/ai-industry-self-improving-bots/686686/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686646</id><content type="html">&lt;p&gt;“Come get ready with me for the day,” a young blond woman says over footage of herself making her bed, arranging her pillows, and weighing her clothing choices. 
The &lt;a href="https://www.instagram.com/reels/DKz_4nQSj2c/"&gt;video&lt;/a&gt; is just like any other lifestyle content that influencers post to Instagram and TikTok—right up until she whips out her phone and scrolls through the Kalshi app. “I use it to check the weather to help me pick out an outfit for the day,” she says, modeling a black spandex romper for the camera. “Go ahead and check out the app link below.”&lt;/p&gt;&lt;p&gt;Recently, my Instagram feed has been haunted by women explaining how much they enjoy betting on elections, the pop-music charts, and &lt;em&gt;Dancing With the Stars&lt;/em&gt;. They are advertising prediction markets such as Kalshi and Polymarket, which let users wager on virtually anything. “The boys can do their parlays and use words I’ve never heard of. But the girls can use their pop culture and educated guesses to make decisions and trade on Kalshi,” a woman &lt;a href="https://www.tiktok.com/@kalshiculture/video/7612800736396692749?q=kalshi%20girls&amp;amp;t=1773866166375"&gt;says&lt;/a&gt; in a TikTok on one of the company’s accounts. Her caption assures me: “Kalshi is for the girls!!!!”&lt;/p&gt;&lt;p&gt;So far, though, it is not. Prediction markets have a dude problem. Though these sites offer all sorts of wagers—where will Taylor Swift get married? Who will win &lt;em&gt;Survivor&lt;/em&gt;?—they have largely become &lt;a href="https://www.theatlantic.com/technology/2026/02/super-bowl-prediction-markets-kalshi/685899/?utm_source=feed"&gt;yet another place for men to bet on football and March Madness&lt;/a&gt;. In the past six months, 88 percent of trades on Kalshi have been about sports, according to the investment firm &lt;a href="https://predictions.paradigm.xyz/?view=kalshi&amp;amp;basis=volume&amp;amp;start=2025-10-01&amp;amp;end=2026-04-01"&gt;Paradigm&lt;/a&gt;. 
The second-largest category, at about 6 percent, is crypto (which is arguably even &lt;em&gt;more &lt;/em&gt;bro-ey).&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/02/super-bowl-prediction-markets-kalshi/685899/?utm_source=feed"&gt;Read: You’ve never seen Super Bowl betting like this before&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;In an apparent attempt to bridge the gap, both Polymarket and Kalshi are running social-media campaigns that parrot the language of female empowerment and girlish memes. “Girl math says if I make $10 predicting real-life stuff, that coffee was technically free,” a girl in thick-framed glasses says in an ad that Kalshi ran on Facebook and Instagram. “If I’m already scrolling news or pop culture anyway, might as well turn my hot takes into some free iced coffees.” She adds, “It’s kind of addicting, but in a fun way.” (The video has since been removed for not having a necessary ad disclosure.) Some posts, like this one, are advertisements from the companies themselves; some are paid influencer partnerships; and some are either undisclosed partnerships or made by women who are just &lt;em&gt;super&lt;/em&gt; excited to post a suspicious amount of links to Polymarket.&lt;/p&gt;&lt;p&gt;Prediction markets should be an easier sell for women than traditional sports betting. Though women are less likely to gamble than men, prediction markets offer the veneer of being more than places to bet. Both Kalshi and Polymarket claim that they are financial markets, not casinos; users make trades about any given event, which in turn generate odds that supposedly predict the outcome. (They are called “prediction markets” for a reason.)&lt;/p&gt;&lt;p&gt;When prediction markets try to entice women, they especially tend to lean into the idea that all of this is investing, not gambling. 
On Kalshi’s dedicated Instagram for women, @KalshiGirls, one &lt;a href="https://www.instagram.com/p/DQabmx8jSL_/"&gt;meme&lt;/a&gt; reads, “When someone says prediction markets are ‘just betting,’” over a photograph of Cher from &lt;em&gt;Clueless &lt;/em&gt;saying, “Ugh, as if.” Meanwhile, the ads for men tend to emphasize the fun of gambling and the possibly big payouts: “Dude,” reads an ad Kalshi ran in the lead-up to the 2024 presidential election, “I am going to bet my Cybertruck on Trump, probably gonna make enough for a house if he wins.”&lt;/p&gt;&lt;p&gt;Kalshi in particular has been ramping up its efforts with women. (Polymarket’s main site, where people bet using crypto, is accessible in the United States only through digital work-arounds.) The reason for appealing to women is simple, Elisabeth Diana, Kalshi’s head of communications, told me: “They’re 50 percent of the population.” She noted that 26 percent of Kalshi-account holders are female—up from 13 percent just 10 months ago. Diana claimed that much of that increase is because of organic interest, but the company seems intent on pulling in more women. Before ABC canceled Season 22 of &lt;em&gt;The Bachelorette&lt;/em&gt; a couple of weeks ago, Kalshi had been planning a watch party.&lt;/p&gt;&lt;p&gt;Sure enough, when I looked up all the ads that Kalshi has run on Instagram and Facebook, I spotted a fair number that were obviously geared toward women. In the clips, influencers tended to make small wagers with a clear goal in mind—usually caffeinated beverages. Polymarket taps into the same dynamic on its X account for female traders, @PolyBaddies. (I do not suggest you Google that phrase.) One post includes a photo of a Starbucks cup with the caption, “Matcha and markets kinda day &#128524;.” (Polymarket did not respond to requests for comment.)&lt;/p&gt;&lt;p&gt;Many of these marketing efforts are ridiculous. 
I would bet—sorry—that most women will not be compelled to spend their time on prediction markets to maybe win $5 for their morning matcha. But some ads are less “girl math” and more actual math. Priya Kamdar, Maya Shah, and Anika Mirza—the 20-something hosts of &lt;em&gt;Get the Check&lt;/em&gt;, a technology-and-business podcast—reached out to Kalshi directly to obtain a partnership deal because they were already using the site, the three hosts told me. Mirza has a Kalshi wager on the race to succeed Nancy Pelosi in Congress; Shah bet on how long the government shutdown was going to last; Kamdar put money on the Rotten Tomatoes score that each movie in the &lt;em&gt;Wicked &lt;/em&gt;franchise would receive (she was right about the first film and wrong about the second).&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/01/america-polymarket-disaster/685662/?utm_source=feed"&gt;Read: America is slow-walking into a Polymarket disaster&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The more women who are betting on prediction markets, the closer these sites get to their stated goal of forecasting the future. If they want to predict the Fed’s next interest rate, the winner of &lt;em&gt;The Bachelor&lt;/em&gt;, or whether or not it will rain tomorrow in Poughkeepsie, a market made up only of male sports fans won’t cut it. But Kalshi and Polymarket also have other incentives to show they are for women. Sports have an outsize popularity on prediction markets because these sites allow people to effectively wager even in states where sports betting is illegal. This is becoming a major problem for the companies. Kalshi is facing lawsuits from several states for allegedly operating as an unregistered sports-betting site. 
Arizona recently became the &lt;a href="https://www.npr.org/2026/03/17/nx-s1-5751165/kalshi-criminal-charges-arizona"&gt;first state&lt;/a&gt; to press criminal charges against Kalshi, and Nevada has temporarily blocked Kalshi and Polymarket from operating in the state. The companies, which maintain that they are financial markets and thus not subject to sports-betting restrictions, have a vested interest in getting users betting on topics besides sports. “It does future-proof them,” Dustin Gouker, a gambling-industry consultant who writes a daily newsletter, told me.&lt;/p&gt;&lt;p&gt;Perhaps the biggest concern with these ads is that they make it easy to forget that you can actually lose money on prediction markets. Shah, the podcast host, told me that if someone trades on topics they’re deeply knowledgeable about, prediction markets can be a useful “financial tool.” But they’re inherently risky. At one point, I was served an ad of a woman anxiously checking a Kalshi bet with her friends, with the caption, “I was about to be unable to pay my rent, but I got two years of rent through Kalshi’s predictions. It’s amazing! &#129392;&#129392;” When I searched for it again, the ad had been taken down; the next time I saw it was as an exhibit in a class-action lawsuit against &lt;a href="https://storage.courtlistener.com/recap/gov.uscourts.nysd.656144/gov.uscourts.nysd.656144.1.0.pdf"&gt;Kalshi&lt;/a&gt; that alleges, in part, that the site is not adequately disclosing risks to consumers. (Kalshi has denied the allegations.)&lt;/p&gt;&lt;p&gt;To hear the companies tell it, prediction markets are just another way to be a #girlboss. “Listen up, girlie pops! This platform is normally considered, like, for the finance bros, but I’m gonna show you why it’s so for us,” one woman says in a post seemingly sponsored by Polymarket. (The video includes no disclosures.) 
Kalshi and Polymarket become just another part of the day—platforms that women can use to check the odds even if they don’t place bets.&lt;/p&gt;&lt;p&gt;A year ago, I probably could not have told you what a prediction market was. By January, Polymarket odds were displayed during the Golden Globes, and &lt;a href="https://www.theatlantic.com/technology/2026/01/america-polymarket-disaster/685662/?utm_source=feed"&gt;CNN pundits&lt;/a&gt; were citing Kalshi’s markets on air. In February, Los Angeles’s Sunset Boulevard—a legendary street in my hometown, known for its clubs and neon signs—had a billboard displaying live Kalshi odds. These platforms are already ubiquitous. If women really do start using them en masse, prediction markets will burrow into American life even more deeply. Until then, the companies will keep reminding them to do some “girl math.”&lt;/p&gt;</content><author><name>Nancy Walecki</name><uri>http://www.theatlantic.com/author/nancy-walecki/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/jnipV0CO946L_elzrZLK_Otbr00=/media/img/mt/2026/04/2026_03_26_GirlMath/original.jpg"><media:credit>Illustration by Alisa Gao / The Atlantic</media:credit></media:content><title type="html">It’s Not Gambling, It’s ‘Girl Math’</title><published>2026-04-01T12:59:00-04:00</published><updated>2026-04-02T10:08:31-04:00</updated><summary type="html">Prediction markets are trying to woo women through matcha memes and #girlboss ads.</summary><link href="https://www.theatlantic.com/technology/2026/04/kalshi-polymarket-gambling-women/686646/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686628</id><content type="html">&lt;p&gt;Recently, a Costco in Florida instituted a new store policy. An employee told me that he was asked to open up every desktop computer displayed in the electronics section and remove the memory chips. Otherwise, the RAM harvesters would get them. 
Elsewhere, &lt;a href="https://www.cargonet.com/news-and-events/cargonet-in-the-media/2025-theft-trends/"&gt;criminal groups&lt;/a&gt; are misdirecting trucks carrying RAM in order to loot them. All of this is happening because of a generational shortage of a part used in practically every electronic gadget on Earth.&lt;/p&gt;&lt;p&gt;RAM is your device’s short-term memory—storing the information it needs to handle any active tasks. (&lt;em&gt;RAM&lt;/em&gt; stands for “random-access memory.”) To put this in intimately familiar terms, it is what your computer runs out of when you have too many browser tabs open. And right now, the price of RAM is skyrocketing. From September to February, the price of a single 64GB stick of RAM went from roughly $250 to more than $1,000.&lt;/p&gt;&lt;p&gt;Gamers who build their own juiced computers were among the first to notice that something was off. Starting in the fall, it became so difficult for them to acquire memory sticks that they gave the crisis a name: RAMageddon. Now it’s quickly becoming everyone’s problem. In December, &lt;a href="https://www.businessinsider.com/dell-price-hikes-memory-demand-ai-chip-race-computer-2025-12"&gt;Dell jacked&lt;/a&gt; the prices of some of its computers by hundreds of dollars because of what its COO has referred to as “this memory crisis, shortage, whatever you want to call it.” Earlier this month, for the same reason, Lenovo raised prices on some of its products, including the popular ThinkPad.&lt;/p&gt;&lt;p&gt;This seems to be only the beginning. Matteo Rinaldi, the head of a global semiconductor-research institute run by Northeastern University, told me he recently asked a colleague what new laptop he should buy. “He told me right away, ‘Well, you know, it almost doesn’t matter which one,’” Rinaldi said. 
“‘Just decide you want to buy now, because prices are going up.’”&lt;/p&gt;&lt;p&gt;RAM is suddenly so expensive because memory is powering the AI boom. Data centers require huge amounts to run the models that underlie AI tools such as ChatGPT and Claude—especially as they become capable of handling more complicated tasks. This year, a group of tech giants—Amazon, Alphabet, Meta, Microsoft, and Oracle—is set to collectively spend half a trillion dollars on the AI build-out. Roughly a third of that money is being spent on memory alone, &lt;a href="https://www.dwarkesh.com/p/dylan-patel"&gt;according to&lt;/a&gt; Dylan Patel, the founder of SemiAnalysis, a popular semiconductor-research firm.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/03/ai-boom-polycrisis/686559/?utm_source=feed"&gt;Read: Welcome to a multidimensional economic disaster&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The insatiable demand has “cannibalized our conventional consumer-electronics supply,” Yang Wang, an analyst at Counterpoint Research, a market-research firm, told me. Every major RAM manufacturer has shifted production lines to service AI data centers. This year, 70 percent of memory-chip products made globally will be destined for them. In South Korea, where two of the biggest RAM manufacturers are based, Silicon Valley executives are &lt;a href="https://www.ndtv.com/feature/why-apple-is-sending-top-brass-to-south-korea-hotels-the-ram-shortage-war-10566863"&gt;reportedly booking&lt;/a&gt; hotels in the country’s tech districts, frantically hoping to secure inventory. A Korean newspaper has given them a name: RAM beggars.&lt;/p&gt;&lt;p&gt;Ideally, this problem would be solved by producing a whole lot more RAM. 
Micron, one of the biggest RAM manufacturers, is building a factory in New York that will cost more than any other private investment in the state’s history. Elon Musk recently suggested that Tesla will build its own RAM factories, called “fabs,” to ensure that he has enough memory to build robots and robotaxis. (“We’ve got two choices: Hit the chip wall, or make a fab,” he said in January.) But because of the complexity of making RAM, it could take even the richest man in the world two to five years to bring a new factory online. In the meantime, the world simply won’t have enough of a basic electronics part.&lt;/p&gt;&lt;p&gt;During RAMageddon, your gadgets will essentially be subject to an AI tax. It’s long been safe to assume that technology will get &lt;a href="https://www.cnet.com/tech/mobile/moores-law-is-the-reason-why-your-iphone-is-so-thin-and-cheap/"&gt;cheaper, faster, and better&lt;/a&gt;. But for the next few years, all signs suggest that devices will get more expensive, slower, and worse.&lt;/p&gt;&lt;p&gt;So far, it might not feel like all that much has changed. Earlier this month, Apple released its cheapest computer ever, the $599 Mac Neo. (It runs on a chip previously used only in iPhones.) But elsewhere, the price hikes have started. Samsung’s new Galaxy phones cost about $100 more than last year’s models, which the company’s COO &lt;a href="https://www.theverge.com/tech/885566/samsung-ram-galaxy-s26-price"&gt;has attributed&lt;/a&gt; in large part to the memory shortage. That’s despite the fact that Samsung is one of three companies in the world producing a significant amount of memory. Android phones have debuted this year with worse cameras, less storage, and slower processors than models released years ago, Wang told me, yet they still cost more.&lt;/p&gt;&lt;p&gt;Expect more changes like this. 
Gadget makers were initially able to swallow the high cost of RAM, but in the long run, they’ll have little choice but to pass it on to consumers. Consider Sony, which just announced that it will raise the price of the PlayStation 5 by $100. Before the adjustment, the memory chips inside a PS5 were worth more than the console itself. Smaller video-game manufacturers have pushed back launches or canceled the release of new consoles altogether.&lt;/p&gt;&lt;p&gt;To keep up with increasing RAM costs, things might get weird. Companies may jack up software prices to compensate for all the money they are sinking into memory chips. Sony’s CFO said on a recent earnings call that the company will survive the RAM crisis by “&lt;a href="https://wccftech.com/playstation-5-price-increases-monetizing-install-base/"&gt;monetizing the installed base&lt;/a&gt;,” which seems to be a euphemism for finding ways to charge PlayStation owners more, or showing them more ads. (Sony did not respond to a request for comment.) At the same time, some companies may start to pare back products they’ve made “smart” to justify markups. Smart speakers, smart toilets, smart toasters, and smart deodorants (yes, really) all contain RAM. “Do we stop getting smart refrigerators? I don’t think that’s a net bad,” Laine Nooney, a technology historian at NYU, told me.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2022/09/who-controls-smart-thermostat-temperature-nest-ecobee/671559/?utm_source=feed"&gt;Read: Your smart thermostat isn’t here to help you&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;If that’s a silver lining, it’s not a particularly good one. &lt;a href="https://www.trendforce.com/presscenter/news/20260310-12959.html"&gt;TrendForce&lt;/a&gt;, a consumer-research firm, anticipates that laptop prices will rise by more than a third in the next few years. 
Computers under $500 will be extinct by 2028, according to a report from &lt;a href="https://www.gartner.com/en/newsroom/press-releases/2026-02-26-gartner-says-surging-memory-costs-will-reduce-global-pc-and-smartphone-shipments-in-2026"&gt;Gartner&lt;/a&gt;. Put differently, cheaper computers may fall off the map. “The $300 Chromebook and the $150 Android phone were products of a specific era—one where memory was cheap because nobody else was competing for it at this scale,” Nate Jones, an AI analyst, told me. “That era is ending.”&lt;/p&gt;&lt;p&gt;The consequences are global. All of this will be felt acutely in poor countries, where sub-$150 smartphones are especially popular. Some people may have no choice but to revert to flip phones, potentially cutting them off from essential apps and services. “You can’t build a gaming PC? Cool story, bro,” Wang, the smartphone analyst, said. “But then people in Africa can’t get a device which is crucial for their lives.”&lt;/p&gt;&lt;p&gt;So much money is going into the AI build-out that it is already reshaping the physical world. The data centers that are sprouting up across the United States are at least partly to blame for rising utility bills. And now people who may never have heard of Claude or asked ChatGPT for homework help will feel the effects of RAMageddon. Hospitals have shelved plans to install touch screens that display medical charts and let patients order food, because the displays contain RAM, Rachael England, a manager at Vizient, a consulting firm that works with many U.S. hospitals, told me. Josh Bauman, the director of technology for a public-school district in Missouri, told me that if RAM prices keep increasing, his district may rethink buying a Chromebook for every student. 
For the foreseeable future, no one can escape the AI tax.&lt;/p&gt;</content><author><name>Hana Kiros</name><uri>http://www.theatlantic.com/author/hana-kiros/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/OLQOcGIhtO-EKQdPhG3KievZphM=/media/img/mt/2026/03/2026_03_20_RAMageddon/original.jpg"><media:credit>Illustration by Alisa Gao / The Atlantic</media:credit></media:content><title type="html">If You Need a Laptop, Buy It Now</title><published>2026-03-31T12:27:00-04:00</published><updated>2026-04-01T13:04:26-04:00</updated><summary type="html">Electronics are getting more expensive and worse. Blame the AI boom.</summary><link href="https://www.theatlantic.com/technology/2026/03/laptop-electronics-ram-ai-tax/686628/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686618</id><content type="html">&lt;p&gt;Thore Graepel may have been the first human to be vanquished by a superintelligence. In 2015, on his first day as a researcher at Google DeepMind, he was challenged to play against the earliest iteration of AlphaGo—a computer program developed by DeepMind that would prove so effective at the ancient-Chinese game of &lt;em&gt;weiqi&lt;/em&gt; (or Go, as it is commonly known in the West) that it changed how humans play it, and then upended the field of AI itself.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;When Graepel faced it, AlphaGo was just a “baby” project, as he put it to me, and he was an accomplished amateur player. But it still took him down. Then, the following year, AlphaGo—now fully developed—plowed through a number of human champions, ultimately crushing Lee Sedol, widely considered the best player in the world, with a match score of 4–1. This month marked the tenth anniversary of that victory.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For decades, developing a program that plays Go at an elite level was an infamous problem in computer science. 
Many considered it unsolvable—far harder than developing a similar program for chess, in which IBM’s Deep Blue supercomputer beat the world champion, Garry Kasparov, in 1997. In Go, two players take turns positioning stones on a 19-by-19 grid, and their movements are relatively unrestricted. In chess, which is played on a far smaller board, a rook can move only along ranks and files and a bishop only along diagonals, but Go pieces can be placed on any open space. The number of possible Go positions is so high that it &lt;a href="https://tromp.github.io/go/legal.html"&gt;cannot be easily expressed in words&lt;/a&gt;; it is higher than the number of atoms in the observable universe, and orders of magnitude higher than the number of possible chess games. Today, the technical frameworks and approaches that allowed an algorithm to excel at this board game have translated fairly directly into bots that can write advanced code, help tackle open problems in mathematics, and replicate scientific discoveries from scratch.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Generative AI is living in AlphaGo’s shadow. Beyond the actual models, “conceptual things emerged from the whole AlphaGo experience which essentially entered the AI vocabulary,” Pushmeet Kohli, the vice president of science and strategic initiatives at Google DeepMind, told me. In many ways, Go and chess provide ideal templates for understanding how the AI boom has unfolded—and a guide for what it may yet wreak.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;DeepMind’s innovation was to essentially pair two algorithms: one AI model to propose moves and a second model to judge whether a move is good or not, allowing the system to devote computational resources to planning sequences of moves most likely to result in victory. AlphaGo then played itself thousands of times, improving from every mistake through a training process known as reinforcement learning. 
Today’s frontier AI labs faced an analogous problem: Large language models such as ChatGPT could spit out lucid sentences and paragraphs, but when they encountered challenging tasks in computer science, physics, and other areas that would require a human to really &lt;em&gt;think&lt;/em&gt;, chatbots had been stuck stumbling in the dark. That began to change in late 2024 with the advent of &lt;a href="https://www.theatlantic.com/technology/archive/2024/12/openai-o1-reasoning-models/680906/?utm_source=feed"&gt;so-called reasoning models&lt;/a&gt;, an approach that now underlies all of the top bots from OpenAI, Google DeepMind, and Anthropic. And the idea behind these reasoning models “is surprisingly similar to AlphaGo,” as Noam Brown, a researcher at OpenAI, recently &lt;a href="https://x.com/polynoamial/status/2031404079583473953"&gt;put it&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2023/02/train-ai-chatgpt-to-play-video-game-pokemon/672954/?utm_source=feed"&gt;Read: A machine crushed us at Pokémon&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The intuition behind chatbot reasoning is to have AI models work out a solution step-by-step, using a scratch pad of sorts, and then evaluate steps along the way to change course or start over as needed—very much like the two-step approach used by AlphaGo. The training method for these reasoning chatbots is the same as well: reinforcement learning. An algorithm can play lots of games of Go or attempt to solve lots of difficult math problems, then learn from its mistakes when it loses or errs. 
Today’s best AI models “can be traced back to some degree to the AlphaGo work,” Graepel said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Perhaps the most crucial insight shared between AlphaGo and the chatbot-reasoning breakthrough is a twist on the AI industry’s central dogma, the “scaling laws.” Traditionally, AI companies improved their large language models by training them on more data and with more computing power. In the case of AlphaGo and reasoning models, researchers realized that they could scale another dimension: having the program devote more time and computing power to a task, akin to how harder problems typically take humans more time to solve. For bots, this meant planning more and longer sequences of moves or using more words to “reason” through a tough coding task. That wasn’t guaranteed to work. “It could happen that you give them more time and they spend more time just getting confused,” Kohli said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;After the success of AlphaGo, DeepMind made a successor program called AlphaZero. Whereas AlphaGo was initially shown a number of human Go matches as a baseline, AlphaZero became dominant at several games—Go, chess, and shogi—purely by playing itself, with zero prior knowledge, and learning from each game. That an AI model essentially taught itself, very rapidly, to surpass the abilities of any human ever at multiple games might suggest that very rapid advances for today’s chatbots are on the horizon. By this logic, models could figure out ways to improve themselves. But the success of AlphaGo and AlphaZero more likely signals obstacles ahead. 
The most important ingredient in AlphaGo was the simplicity with which one could measure success—win or lose—and thus give the machine feedback to improve.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2026/03/ai-creative-writing/686418/?utm_source=feed"&gt;Read: The human skill that eludes AI&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;With board games, “we were always operating in a specific environment where the rules of the game were known,” Kohli said. “The systems of today are expected to operate in a much more general environment.” Reasoning models have found success mostly in areas that still have a relatively clear rubric for evaluation: whether an AI-written program works as intended, for instance, or whether an AI-written proof holds up. Instilling any notion of a more general intelligence in a machine will be a far more challenging problem than conquering even Go.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;DeepMind has been able to design evaluations for more abstract ideas, for instance by orchestrating several AI agents to act as &lt;a href="https://www.theatlantic.com/technology/archive/2025/04/how-ai-will-actually-contribute-cancer-cure/682607/?utm_source=feed"&gt;a team of virtual “scientists”&lt;/a&gt; that will rank hypotheses about problems in biology. But even that system operates within a relatively constrained domain of biological reasoning and literature. It’s unlikely that any lab will come up with a single way to evaluate “general intelligence” that can be used to train a bot AlphaGo style, let alone one as straightforward as winning or losing a board game.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/archive/2025/04/how-ai-will-actually-contribute-cancer-cure/682607/?utm_source=feed"&gt;Read: AI executives promise cancer cures. 
Here’s the reality.&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Still, the progress the AlphaGo approach has yielded for AI models in a number of scientific domains is impressive—so much so that, a decade after AI conquered humanity’s hardest board game, the nation is now in a frenzy over whether AI is about to first overhaul the economy and then unsettle the purpose of being human at all.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Once again, chess and Go might offer guides. As a result of improving via self-play, AlphaGo and AlphaZero developed not only superhuman ability but also inhuman style, using tactics and strategies no human had previously considered. These AI strategies did not destroy the human pursuits of chess and Go; they &lt;a href="https://www.technologyreview.com/2026/02/27/1133624/ai-is-rewiring-how-the-worlds-best-go-players-think/"&gt;ignited&lt;/a&gt; new waves of human &lt;a href="https://www.theatlantic.com/technology/archive/2022/09/carlsen-niemann-chess-cheating-poker/671472/?utm_source=feed"&gt;creativity and strategy&lt;/a&gt;. The most optimistic analogy for today’s more broadly useful AI systems would be that they also, rather than providing a wholesale replacement for humans, will function as a sort of &lt;a href="https://www.theatlantic.com/technology/archive/2022/10/hans-niemann-chess-cheating-artificial-intelligence/671799/?utm_source=feed"&gt;complementary intelligence&lt;/a&gt;. 
Biologists, &lt;a href="https://www.theatlantic.com/technology/2026/02/ai-math-terrance-tao/686107/?utm_source=feed"&gt;mathematicians&lt;/a&gt;, and computer scientists are already finding ways in which today’s AI models are not simply speeding up their work but qualitatively changing the kinds of questions humans can ask and the discoveries we can make.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Of course, the business proposition of generative AI is quite the opposite: that products such as ChatGPT and Claude Code can automate huge swaths of white-collar work, help students cheat their way through school, and allow humans to live mostly without thinking. Perhaps C-suite executives, like AI researchers, can learn a lesson from Go and chess. Like any sport, chess and Go are worthwhile because of human struggles and storylines, champions made and toppled, the very fact that people are doomed to be imperfect but always striving to become just a bit better. And rather than automating human chess masters or destroying the sport and pastime, chess-playing AI models have helped the business of chess to boom.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Likewise, employees, managers, students, professors—really all of us—are always learning and learning by failing, or at least we should be. That is useful and worth preserving in &lt;a href="https://www.theatlantic.com/ideas/2025/12/ai-entry-level-creative-jobs/685297/?utm_source=feed"&gt;plain economic terms&lt;/a&gt;. Nobody becomes world-class at anything without at some point being rather terrible at it, and allowing novices who might be less capable than a bot to build up skills is the only way you get experts with human judgment and abilities that surpass any AI. But more important than that economic rationale is an existential one: To grow or help another do so is a beautiful thing. 
Some might call it being human.&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/U6SJyTz_GY-KuSqVbSFKPc_JlQM=/media/img/mt/2026/03/2026_03_27_AI2_mpg/original.jpg"><media:credit>Illustration by The Atlantic</media:credit></media:content><title type="html">A Game Plan for the AI Boom</title><published>2026-03-30T18:27:37-04:00</published><updated>2026-04-02T10:11:16-04:00</updated><summary type="html">Ten years ago, AlphaGo trounced human competitors—and its legacy is still present in today’s most advanced bots.</summary><link href="https://www.theatlantic.com/technology/2026/03/alphago-ai-boom/686618/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686559</id><content type="html">&lt;p class="dropcap"&gt;T&lt;span class="smallcaps"&gt;he global economy&lt;/span&gt; has become dependent on the AI industry. Trillions of dollars are being invested into the technology and the infrastructure it relies on; in the final months of 2025, &lt;a href="https://www.barrons.com/articles/ai-investment-gdp-economy-e19c6d70"&gt;functionally all&lt;/a&gt; economic growth in the United States came from AI investments. This would be risky even in ideal conditions. And we are very far from ideal conditions.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Much of the AI supply chain—chips, data centers, combustion turbines, and so on—relies on key materials that are produced in or transported through just a few places on Earth, with little overlap. In particular, the industry is highly dependent on the Middle East, which has been destabilized by the war in Iran. 
A global &lt;a href="https://www.newstatesman.com/international-politics/geopolitics/2026/03/the-world-energy-shock-is-coming"&gt;energy shock&lt;/a&gt; seems all but certain to come soon—the kind where even the &lt;a href="https://www.economist.com/finance-and-economics/2026/03/22/even-the-best-case-scenario-for-energy-markets-is-disastrous"&gt;best-case scenario&lt;/a&gt; is a disaster. The war could grind the AI build-out to a halt. This would be devastating for the tech firms that have issued historic amounts of debt to race against their highly leveraged competitors, and it would be devastating for the private lenders and banks that have been buying up that debt in the hope of ever bigger returns.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For the better part of the past year, Wall Street analysts and tech-industry observers have fretted publicly &lt;a href="https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/?utm_source=feed"&gt;about an AI bubble&lt;/a&gt;. The fear is that too much money is coming in too fast and that generative-AI companies still have not offered anything close to a viable business model. If growth were to stall or the technology were to be seen as failing to deliver on its promises, the bubble might burst, triggering a chain reaction across the financial system. Everyone—big banks, private-equity firms, people who have no idea what’s mixed into their 401(k)—would be hit by the AI crash.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Until recently, that kind of crash felt hypothetical; today, it feels plausible and, to some, almost inevitable. 
“What’s unusual about this, unlike commercial real estate during the global financial crisis,” Paul Kedrosky, an investor and financial consultant, told us, “is all of these interlocking points of fragility.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2025/10/data-centers-ai-crash/684765/?utm_source=feed"&gt;Read: Here’s how the AI crash happens&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Perhaps the clearest examples are advanced memory and training chips, which are among the most important—and are by far the most expensive—components of training any AI model. Currently, most of them are produced by two companies in South Korea and one in Taiwan. These countries, in turn, get a large majority of their crude oil and much of their liquefied natural gas—which help fuel semiconductor manufacturing—from the Persian Gulf. The chip companies also require helium, sulfur, and bromine—three key inputs to silicon wafers—largely sourced from the region. In addition, Saudi Arabia, Qatar, the United Arab Emirates, and other regional petrostates have become key investors in the American AI firms that purchase most of those chips.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Because of the war in Iran, the Strait of Hormuz is functionally closed to most shipping vessels, stranding one-fifth of the world’s exports of natural gas, one-third of the world’s exports of crude oil, and significant quantities of the planet’s exportable fertilizer, helium, and sulfur. Meanwhile, Iran and Israel have begun bombing much of the fossil-fuel infrastructure in the region, which could take many years to replace. 
In only a month of war, the price of Brent crude—a global oil benchmark—has jumped by 40 percent and could more than double, liquefied-natural-gas prices are soaring in Europe and Asia, and &lt;a href="https://www.reuters.com/business/energy/helium-prices-soar-qatar-lng-halt-exposes-fragile-supply-chain-2026-03-12/"&gt;helium spot prices&lt;/a&gt; have already doubled. The strait is “critical to basically every aspect of the global economy,” Sam Winter-Levy, a technology and national-security researcher at the Carnegie Endowment for International Peace, told us. “The AI supply chain is not insulated.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The situation could quickly deteriorate from here. A helium crunch could trigger a shortage of AI chips or cause chip prices to rise. AI companies need ever more advanced chips to fill their data centers—at higher prices, the massive server farms, already hurting from elevated energy costs caused by the war, would have almost no hope of becoming profitable. Without these chips, new data centers would not be built or would sit empty. Astronomical tech valuations, and in turn the entire stock market, could collapse.&lt;/p&gt;&lt;p class="dropcap"&gt;O&lt;span class="smallcaps"&gt;ne industry’s precarious position&lt;/span&gt; isn’t usually everyone’s problem. Unfortunately, AI is different. The biggest data-center players, known as hyperscalers, are among the biggest corporations in the history of capitalism; they include Microsoft, Google, Meta, and Amazon. But even they will be pressed by collectively spending nearly $700 billion on AI in a single year. In order to get the money for these unprecedented projects, data-center providers are beginning to take on &lt;a href="https://fortune.com/2025/11/19/big-5-ai-hyperscalers-quadruple-debt-fund-ai-operations/"&gt;colossal amounts of debt&lt;/a&gt;. 
Some of this is done through creative deals with private-equity firms including Blackstone, BlackRock, and Blue Owl Capital—which themselves operate as de facto shadow banks that, since the most recent financial crisis, have arguably become as powerful and as influential as Bear Stearns and Lehman Brothers were prior to 2008. Endowments, pensions, insurance funds, and other major institutions all trust private equity to invest their money.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For a while, it seemed like every time Google or Microsoft announced more data-center investments, their stock prices rose. Now the opposite occurs: The hyperscalers are spending far more, but investors have started to notice that they are not generating anything near the revenue they need. The data-center boom’s top players—Google, Meta, Microsoft, Amazon, Nvidia, and Oracle—have all lost 8 to 27 percent of their value since the start of the year, making them a huge drag on the overall stock market. And the $121 billion of debt that hyperscalers issued in 2025, four times what they averaged in prior years, is &lt;a href="https://www.reuters.com/business/retail-consumer/analysts-revise-ai-hyperscaler-debt-forecasts-after-amazon-bond-sale-2026-03-17/"&gt;expected&lt;/a&gt; to grow dramatically.
In order to pay for their investments, private-equity companies raised money from major financial institutions—but now the viability of those lease payments is coming into question as the hyperscalers’ cash flow is strained. “There’s a reason to think we’re seeing some of the same 2008 dynamics now,” Brad Lipton, a former senior adviser at the Consumer Financial Protection Bureau and now the director of corporate power and financial regulation at the Roosevelt Institute, told us. “Everyone’s getting tied up together. Banks are lending money to private credit, which in turn lends it elsewhere. That amps up the risk.”&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/ideas/2026/03/ai-job-loss-jevons-paradox/686520/?utm_source=feed"&gt;Annie Lowrey: How to guess if your job will exist in five years&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;The way the money moves is concerning, but so is the AI industry’s underlying business model. At every layer, the technology appears to decrease the value of its assets. The advanced AI chips that make up the majority of the cost of a data center? Their value rapidly decreases as they are superseded by the next generation of chips, meaning that the ultimate backstop for all of the data-center debt—selling the data center itself—is not actually a backstop. The way that AI companies make money when people use their products is also deflationary. OpenAI, Anthropic, and others charge users for using “tokens,” the components of words processed by their bots. This means that tokens are an industrial commodity akin to, say, crude oil or steel. But unlike other commodities, the cost of each token is rapidly decreasing owing to advancements in AI’s capabilities. Kedrosky called this “a death spiral to zero.” As the value of a token plummets, the value of what data centers can produce also falls.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The war in Iran affects data-center finances as well. 
Should energy prices continue to skyrocket, so will the cost of this already very expensive computing equipment, because it needs tremendous amounts of energy to manufacture and operate. And the war has exposed physical risks to these buildings. Janet Egan, a senior fellow at the Center for a New American Security, described data centers to us as “large, juicy targets.” It is impossible to hide these facilities, which can cover 1 million square feet. Earlier this month, Iran bombed Amazon data centers in the UAE and Bahrain. American hyperscalers had been planning to build far more data centers in the region, because the Trump administration and the AI industry have sought funding from Saudi Arabia, the UAE, Qatar, and Oman. Now there’s a two-way strain on those relationships. The physical security of the data centers is more precarious, and the conflict is damaging the economic health of the petrostates, thereby jeopardizing a major source of further investment in American AI firms. The Trump administration “staked a lot on the Gulf as their close AI partner, and now the war that they’ve launched poses a huge threat to the viability of the Gulf as that AI partner,” Winter-Levy said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Plus, “what’s to prevent Iran or a proxy group, or another maligned actor, from tomorrow launching an armed drone against a data center in Northern Virginia?” Chip Usher, the senior director for intelligence at the Special Competitive Studies Project, a national-security and AI think tank, told us. “It could happen. Our defenses are not adequate.” State-sponsored cyberattacks of the variety Iran is known for could also knock a data center offline. 
You can build all manner of defenses—reinforced concrete, drone-interception systems—but doing so adds cost and time to already costly and slow construction.&lt;/p&gt;&lt;p class="dropcap"&gt;J&lt;span class="smallcaps"&gt;ust a few things going a bit wrong&lt;/span&gt; could compound, all at once, into a cataclysm. To wit: Qatari and Saudi money dries up. Sustained high oil and natural-gas prices drive up the costs of manufacturing chips and running data centers. Already cash-strapped hyperscalers struggle to make lease payments on their data centers, while similarly strained private lenders suffer as all of the AI bonds become deadweight. Tech valuations fall, taking public markets with them; private-equity firms have to sell and torch their assets, putting intense stress on the institutional investors and banks. The rest of the economy, drained of investment because everything was poured into data centers for years, is already weak. Unemployment goes up, as do interest rates. “Bubbles pop. That’s the system,” Lipton said. “What isn’t supposed to happen is that it takes down the whole financial system. But the concern here is that AI investment isn’t confined and may spread to the whole economy.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Even if Iran and the Strait of Hormuz don’t directly trigger an AI-driven financial crisis, the odds are decent that another vector could. (Remember tariffs?) Energy prices could stay elevated for years, because the targeted fossil-fuel facilities in the Persian Gulf will take a long time to restore. As the U.S. directs huge amounts of attention and military resources toward Iran, it’s easy to imagine China launching an invasion of Taiwan—a scenario that &lt;a href="https://www.nytimes.com/2026/02/24/technology/taiwan-china-chips-silicon-valley-tsmc.html"&gt;terrifies&lt;/a&gt; Silicon Valley, because it would halt the production of chips needed to train frontier models. 
That’s not even considering the single Dutch company that makes the high-tech lithography machines used to print virtually all AI chips, or the German company that makes the mirrors used in those machines. “There are too many ways for it to fail for it not to fail,” Kedrosky said of the AI industry’s web of risk. “All you can say for sure is this is a fragile and overdetermined system that must break, so it will.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;There are, of course, possibilities other than a full-blown, AI-driven financial crisis. Data-center spending could cool gradually enough that a crash is avoided. The revenues of Anthropic and OpenAI have been multiplying every year, which proponents argue means that generative-AI products are on track to eventually become profitable. But on the current trajectory, that would still take years, and there are good reasons to think that this growth will slow or halt. Notably, the main draw of AI tools is “efficiency”: Rather than growing their overall output and the opportunities available to people, executives are hoping that AI will allow them to make cuts to their business operations. The medium-term success of generative AI would likely involve &lt;a href="https://www.theatlantic.com/magazine/2026/03/ai-economy-labor-market-transformation/685731/?utm_source=feed"&gt;millions of people being put out of work&lt;/a&gt;. The range of options seems to be somewhere from mildly bad to historically so.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Should the system break, much of the blame would lie squarely with the technology companies. The stakes of this build-out, from the beginning, have been framed in civilizational terms—a geopolitical race alongside an existential one. The winners will control the future and reap the rewards. 
At every step of the way, AI firms have appeared to prioritize speed above the physical security of data centers, supply-chain redundancy, energy efficiency and independence, political stability, even financial returns. And in that quest for unbridled growth, the AI industry has wrested ungodly amounts of capital from investors all looking for the next big thing, ensnaring the entire economy.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Simultaneously, these firms have courted and even bent the knee to a presidential administration that has encouraged their “let it rip” ethos, only to watch as that same administration has plunged the industry into this emerging polycrisis. The AI industry was not made for the turbulence its leaders have helped usher in. The situation has grown so ungainly and untenable that, if Silicon Valley is merely forced to slow down, the viability of all this spending will likely be called into question in ways that could be devastating for many. In finance, being early is the same as being wrong. AI firms want the world to think they’re right on time. 
The world may have other plans.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><author><name>Matteo Wong</name><uri>http://www.theatlantic.com/author/matteo-wong/?utm_source=feed</uri></author><author><name>Charlie Warzel</name><uri>http://www.theatlantic.com/author/charlie-warzel/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/IVFBCxc2jIXqe2KB2LEnwKydNPU=/media/img/mt/2026/03/2026_03_26_datacenter_mpg/original.jpg"><media:credit>Nathan Howard / Bloomberg / Getty</media:credit><media:description>An Amazon Web Services data center in Manassas, Virginia</media:description></media:content><title type="html">Welcome to a Multidimensional Economic Disaster</title><published>2026-03-26T16:44:54-04:00</published><updated>2026-03-27T07:40:22-04:00</updated><summary type="html">The AI boom wasn’t built for the polycrisis.</summary><link href="https://www.theatlantic.com/technology/2026/03/ai-boom-polycrisis/686559/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686545</id><content type="html">&lt;p&gt;At the age of 14, Braden Peters began injecting himself with mail-order testosterone to make himself into something he wasn’t. By his account, the experiment ended when his parents, Kenneth and Lauren, discovered his supply and trashed it. Young Braden was apparently undaunted. He set up a post-office box and began ordering new chemicals—he’s since claimed to have taken crystal meth to stay lean—anything that would catalyze his transformation. He began tapping his face with a hammer in pursuit of perfect cheekbones. The goal was entirely superficial: to reshape his physical form so that other men would feel inferior in his presence, and so that women would want to have sex with him.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;This, at least, is the origin story he’s told and retold over hundreds of hours of livestreams and interviews. 
In the pre-internet age, Peters might have passed through the world without notice, or at least without fame. But in 2026, at age 20, he is a popular influencer who calls himself Clavicular, after the span of his collarbones. He is among the most recognizable adherents of the radical-self-improvement project known as looks-maxxing. Hew closely to the credo, whose regimen includes all sorts of steroids and therapies, and you might even &lt;em&gt;ascend&lt;/em&gt;. That’s looks-maxxing terminology for becoming really, really hot.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Clav, as he’s known, has had a moment this year. Seemingly overnight, he became wildly popular among &lt;a href="https://www.theatlantic.com/ideas/archive/2023/01/lost-boys-violent-narcissism-angry-young-men/672886/?utm_source=feed"&gt;the lost boys&lt;/a&gt; of the internet—the kinds of people who spend their time watching Nick Fuentes, the white-supremacist influencer, and Andrew Tate, the proudly misogynistic elder statesman of the manosphere, who is currently awaiting trial on charges of rape and human trafficking (he has denied the allegations). In January, Clavicular joined Tate, Fuentes, and the extremist podcaster Myron Gaines at a nightclub in Miami. Videos of the group listening to the Kanye West song “Heil Hitler” went viral; Clavicular was singing along.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/technology/2025/12/nick-fuentes-livestream/685247/?utm_source=feed"&gt;Read: I watched 12 hours of Nick Fuentes&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;As his live videos have been clipped and reposted on more mainstream parts of the internet, Clavicular has continued to gain widespread attention. 
There’s been a temptation among observers, including the media outlets that have covered this story over the past few months, to understand Clavicular as, essentially, a curiosity. He is a strange, attention-hungry young guy—the latest addition to a streaming ecosystem that celebrates extreme provocation. His peculiar online lingo, derived from the looks-maxxing community, has seeped into the culture. &lt;em&gt;Mogging&lt;/em&gt;, meaning “outclassing someone,” and -&lt;em&gt;maxxing&lt;/em&gt;, an all-purpose suffix denoting maximization of any kind, are inescapable online. Conan O’Brien described himself as “host-maxxing” during this year’s Oscars, and &lt;em&gt;Saturday Night Live&lt;/em&gt; parodied Clavicular in a “Weekend Update” sketch.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;But Clavicular’s rise is pernicious. The baseline concern with an influencer who takes a hammer to his face and says hateful things is that he is in some sense encouraging other people to do the same. Last month, a couple of fans came up to him during a livestream, and one shouted “Heil Hitler.” Clavicular tried to dismiss the comments as “cringe,” but he quite obviously set the tone. I have some authority here: After I left a note outside his parents’ house requesting an interview for this story, Clavicular shared my contact information online. As a reporter who covers the internet, I am used to being harassed—but I had never experienced so many direct violent threats, and so much virulent anti-Semitic hatred, as I have since then. The looks-maxxer insult “subhuman” kept coming up, as did the word &lt;em&gt;mongrel&lt;/em&gt;. (A spokesperson for Clavicular declined to answer my questions.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The bigger concern with Clavicular is not that he is encouraging a generation of young men to take extreme measures to change their looks. 
It’s that because his antics are so ridiculous and his videos so entertaining to a certain crowd, he has allowed more coherent and dangerous ideologies to hitch a ride on his movement. The far-right manosphere has seemingly taken every opportunity it can to tie itself to Clavicular. Tate joined him on a stream last month to lift weights and offer advice about how Clav should handle his newfound fame. Jon Zherka, an adjacent influencer, recently &lt;a href="https://x.com/ZherkaOfficial/status/2034877588553043971"&gt;likened&lt;/a&gt; him to a “younger brother.” Last week, Fuentes called him a “prophet” for exposing the cynical reality of modern dating—a core part of Clavicular’s appeal among this group.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/podcasts/2026/02/the-manosphere-breaks-containment/685907/?utm_source=feed"&gt;Listen: The manosphere breaks containment&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Clavicular is of course getting something in return. Associating with the manosphere’s best-known figures has been a shortcut to fame and money. But he is also a different kind of influencer. Although he calls women whores and says the N-word, he is generally less focused on politics than are Fuentes and Tate, who are constantly weighing theories about power and opining about the state of the world. In fact, Clavicular does not tend to talk about politics much at all, and has repeatedly claimed that his message is distinctly apolitical. He trolls for views. &lt;em&gt;That&lt;/em&gt;, if anything, is his philosophy; the looks-maxxing is secondary. During a December interview with a conservative podcaster, Clavicular said that if the 2028 presidential election comes down to Gavin Newsom and J. D. Vance, he will vote for the California Democrat purely because Newsom mogs Vance with his looks. Last month, Clavicular told the comedian Adam Friedland that he’d never heard of New York City Mayor Zohran Mamdani. 
“I’m so far removed,” he said.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Even beyond the manosphere’s corner of the internet, the right-wing ecosystem as a whole has recently gotten much better at capitalizing on cultural trends. Whenever a viral moment might have a remotely right-wing cast, the machinery moves into place. After Sydney Sweeney starred in an American Eagle commercial last year that touted her “great jeans” (a pun about her denim and her genetics), some on the left accused her of endorsing eugenics. The right, in turn, &lt;a href="https://www.theatlantic.com/technology/archive/2025/07/sydney-sweeney-american-eagle-ads/683704/?utm_source=feed"&gt;coalesced around her&lt;/a&gt;. A few months later, when sorority-dance videos &lt;a href="http://theatlantic.com/technology/archive/2025/08/sorority-rush-dance-maga-x/683894/"&gt;went viral&lt;/a&gt;, the online right immediately jumped in to say—without any evidence of the women’s actual views—that the dancers were owning the libs.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Weeks after Clavicular’s brief reign as the internet’s main character, his daily livestreams continue to collect hundreds of thousands of views. He is currently in the middle of a livestreaming marathon under the heading “Mog World Order” and will keep the cameras rolling nonstop for the next few weeks. The other day, a girl slapped him in the face at a nightclub. Fuentes, on his own stream, was indignant: “Kill, rape, and die for Clavicular—no, no, kidding, kidding, kidding, kidding!”&lt;/p&gt;</content><author><name>Will Gottsegen</name><uri>http://www.theatlantic.com/author/will-gottsegen/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/OvDHUvU6r6hQnPtIRFjcyeU0Dnc=/media/img/mt/2026/03/20260217_clavicular_2_1/original.jpg"><media:credit>Illustration by The Atlantic. 
Source: clavicular0 / Instagram</media:credit></media:content><title type="html">What Was Clavicular?</title><published>2026-03-26T07:30:00-04:00</published><updated>2026-03-26T08:13:24-04:00</updated><summary type="html">The internet’s most famous looks-maxxer is far more pernicious than he may seem.</summary><link href="https://www.theatlantic.com/technology/2026/03/clavicular-looksmaxxing-manosphere/686545/?utm_source=feed" rel="alternate" type="text/html"/></entry><entry><id>tag:theatlantic.com,2026:50-686544</id><content type="html">&lt;p&gt;When I opened Sora this morning, I was met with a flood of strange and disturbing AI-generated videos. On OpenAI’s video app, I scrolled through fabricated scenes of the Iran war and a barrage of fake Donald Trumps blabbering about Jeffrey Epstein. In my least favorite clip, I watched a man deep-fry an infant. The app lets users create fairly realistic-looking AI-generated clips—including of their own likeness—and then post them on a TikTok-like feed. Not &lt;em&gt;all &lt;/em&gt;of them are so unsettling, and for better or worse, Sora has been a steady source of internet virality. Within days of its release, it skyrocketed to the top of the App Store.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Now Sora will soon be dead. Yesterday, OpenAI said that it was shutting down the app and terminating public access to its video-generating technology. The decision was seemingly abrupt: Just a few months ago, Disney announced plans to invest $1 billion in OpenAI as part of a licensing deal to bring its characters to Sora, and earlier this week, workers from both companies were &lt;a href="https://www.reuters.com/technology/openai-set-discontinue-sora-video-platform-app-wsj-reports-2026-03-24/"&gt;apparently&lt;/a&gt; still collaborating. (Disney has since retracted its investment plans.) Even some Sora staffers themselves were reportedly caught off guard by the announcement. 
Online, people eulogized Sora by posting their favorite videos—such as one featuring a &lt;a href="https://x.com/emollick/status/2036788701586506121?s=20"&gt;column of spinning penguins&lt;/a&gt; and another in which &lt;a href="https://x.com/TrungTPhan/status/2036633266644815875?s=20"&gt;Jesus walks on water&lt;/a&gt; to win an Olympic gold medal in swimming.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;After OpenAI launched the Sora app in September, Sam Altman predicted that society was about to undergo a stunning artistic revolution. “Creativity could be about to go through a Cambrian explosion,” he wrote online. But such a revolution never materialized. It’s not that people hate AI slop. In fact, if anything, people seem to have a surprising appetite for it—the latest TikTok trend is &lt;a href="https://www.nytimes.com/2026/03/24/style/ai-cheating-fruit-slop-videos-tiktok.html"&gt;raunchy telenovelas&lt;/a&gt; starring AI-generated fruit. In response to a request for comment, an OpenAI spokesperson pointed me to a public statement that cites “compute demand” as a key factor in the company’s decision. Generating videos is much more costly than generating text is, and Sora has likely been a real &lt;a href="https://www.forbes.com/sites/phoebeliu/2025/11/10/openai-spending-ai-generated-sora-videos/"&gt;financial drain&lt;/a&gt;: In the fall, &lt;em&gt;Forbes&lt;/em&gt; estimated that Sora might be costing OpenAI millions of dollars daily, and Bill Peebles, who leads Sora, &lt;a href="https://x.com/billpeeb/status/1984011952155455596?s=20"&gt;said&lt;/a&gt; that the economics were “completely unsustainable.” (OpenAI declined to comment on &lt;em&gt;Forbes&lt;/em&gt;’s estimates at the time.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The decision to quickly spin up a project and then suddenly pull the plug has become a classic OpenAI move. 
The company has spent the past few years cycling through new product features and business models with spectacular haste in an attempt to find its way to profitability. OpenAI seems to finally be learning that slop is not a business strategy.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Altman has never had a great plan for how OpenAI will make money. “We have no idea how we may one day generate revenue,” Altman said at a 2019 event. He went on to explain that one day, AI will be smart enough that OpenAI will simply ask the computer how to generate an investment return. “You can laugh,” he told a (rightfully) amused audience. “But it is what I actually believe is going to happen.” After ChatGPT’s success a few years later, investors began pouring money into OpenAI, and Altman has done a tremendous job of marshaling investor funds. The start-up is now &lt;a href="https://www.theatlantic.com/technology/2026/03/ai-bubble-defenders-silicon-valley/686340/?utm_source=feed"&gt;worth&lt;/a&gt; more than Toyota, Coca-Cola, and Disney &lt;em&gt;combined&lt;/em&gt;. But investors like to see returns, and so far, OpenAI hasn’t done much to prove that it is capable of generating enough cash to stay out of the red.&lt;/p&gt;&lt;p data-id="injected-recirculation-link"&gt;&lt;i&gt;[&lt;a href="https://www.theatlantic.com/ideas/2026/03/openai-economy-competition-anthropic/686420/?utm_source=feed"&gt;Read: The MySpace dilemma facing ChatGPT&lt;/a&gt;]&lt;/i&gt;&lt;/p&gt;&lt;p&gt;That’s not to say that it hasn’t been trying: Over the past few years, OpenAI has explored just about every business model conceivable. Last summer, Altman &lt;a href="https://www.bloomberg.com/news/articles/2025-08-15/openai-s-altman-expects-to-spend-trillions-on-infrastructure"&gt;described&lt;/a&gt; OpenAI as four separate companies—a consumer-tech business, a massive-scale infrastructure project, an AI-research lab, and an incubator for “new stuff,” including hardware. 
(OpenAI has a corporate partnership with &lt;em&gt;The Atlantic&lt;/em&gt;.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The trouble with trying to do everything is that sometimes you end up doing nothing well. Sora is the latest casualty in a long string of abrupt reversals, about-faces, and seemingly sloppily implemented projects. Last year, Altman announced a massive joint AI-infrastructure build-out with Oracle and SoftBank called Stargate, but the effort &lt;a href="https://www.theinformation.com/articles/inside-openais-scramble-get-computing-power-stargate-stalled?rc=ftwoob"&gt;stalled&lt;/a&gt;, reportedly following poor leadership and coordination. Altman &lt;a href="https://youtu.be/FVRHTWWEIz4?si=b2OjrsSd0sFQYOaV&amp;amp;t=2272"&gt;said&lt;/a&gt; in 2024 that combining ads and AI would be a “last resort” response—but then, earlier this year, the start-up launched an ads initiative. Last fall, OpenAI debuted a shopping feature, which allowed people to buy products directly inside ChatGPT; yesterday, the company announced that it was killing the feature and pivoting to focus on product discovery instead. In January, the company &lt;a href="https://www.axios.com/2026/01/19/openai-device-2026-lehane-jony-ive"&gt;said&lt;/a&gt; that the first of its much-awaited devices was “on track” to launch later this year, but weeks later, court filings &lt;a href="https://www.businessinsider.com/openai-timeline-hardware-ai-device-launch-jony-ive-iyo-2026-2"&gt;revealed&lt;/a&gt; that the company is unlikely to debut its new hardware before 2027. OpenAI originally banned NSFW content, and then it announced last year that it would make exceptions for such material, even planning a December rollout for erotica, only to later put erotica indefinitely on hold.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Some amount of change in business plans is only natural for any company, let alone one in an industry as fast-moving as AI. 
But compared with its peers, OpenAI is especially chaotic in its strategy. The company’s plans are seemingly always provisional: No partnership or product road map feels guaranteed to endure. Earlier this year, Nvidia walked back a commitment to invest up to $100 billion in OpenAI. At the time, &lt;em&gt;The Wall Street Journal &lt;/em&gt;&lt;a href="https://www.wsj.com/tech/ai/the-100-billion-megadeal-between-openai-and-nvidia-is-on-ice-aa3025e3"&gt;reported&lt;/a&gt; that Nvidia CEO Jensen Huang had concerns about OpenAI’s “lack of discipline” in its business approach. (When asked about the report, Huang said that it was “nonsense” to suggest he was unhappy with OpenAI.)&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;OpenAI’s haphazard business strategy has left the company with an identity crisis of its own making. OpenAI is losing ground to Anthropic, its chief rival in the AI race, which has stuck with a targeted approach of selling productivity-enhancing AI tools to other businesses. That steadfast focus on the enterprise market has paid off. Now OpenAI is attempting to copy Anthropic’s playbook. “We cannot miss this moment because we are distracted by side quests,” Fidji Simo, OpenAI’s applications chief, &lt;a href="https://www.wsj.com/tech/ai/openai-chatgpt-side-projects-16b3a825?gaa_at=eafs&amp;amp;gaa_n=AWEtsqeNi7KZUpyc0R-CY0zW6U40-SzXhzLWrcn-4IZK0dq8H0FOpXEJv8BT3kT-OwM%3D&amp;amp;gaa_ts=69c40a9a&amp;amp;gaa_sig=2cWQJ6bPBmxZrmG5lOkZGaffyGigTDVFwDGG3rKwKALGs3bmMHcugiEQO1A4k2nWENSFxNkTT0Kj9rjAdG1BmA%3D%3D"&gt;reportedly&lt;/a&gt; told staff in a company-wide meeting earlier this month, explaining that the company needs to nail “productivity on the business front.” To do so, OpenAI is planning to nearly &lt;a href="https://www.ft.com/content/7ffea5b4-e8bc-47cd-adb4-257f84c8028b?syn-25a6b1a6=1"&gt;double&lt;/a&gt; its head count this year, including by hiring a team of specialists who will help other companies adopt its technology. 
Even at the product level, OpenAI appears to be copying Anthropic—OpenAI is apparently planning to launch a “superapp” to streamline its product offerings into one app, likely an attempt to compete with Anthropic’s Cowork and Claude Code. “We were spreading our efforts across too many apps,” Simo &lt;a href="https://www.wsj.com/tech/openai-plans-launch-of-desktop-superapp-to-refocus-simplify-user-experience-9e19931d?gaa_at=eafs&amp;amp;gaa_n=AWEtsqcUEU320HlVVXmFSgJGYL1_-ohapNpS-pcq3xFu7jOatmbZZBIGUHWpzzXxyrU%3D&amp;amp;gaa_ts=69c40cb1&amp;amp;gaa_sig=hWi3Y7WgJfpZ3PPcNbtbcXv9Jxb3tpiljzyU-shZBU80Gc3_pTY-GD8zY2b0M6IB_m_x01sx8ggLIjeW7GgRtw%3D%3D"&gt;wrote&lt;/a&gt; to employees last week. “That fragmentation has been slowing us down and making it harder to hit the quality bar we want.”&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;After scrolling through Iran deepfakes and Trump slop on Sora this morning, I navigated to Altman’s account on the platform. I was curious to see what the company’s CEO might have to say about the end of Sora. The last time that Altman appears to have posted on the app was six months ago, when it launched. Perhaps that should have been a foreboding sign. I continued watching more clips until a pop-up filled my screen. OpenAI wanted to know how using Sora was affecting my mood. 
The app offered me a choice between “Thumbs-Up” and “Thumbs-Down.” I hit “Thumbs-Down.”&lt;/p&gt;</content><author><name>Lila Shroff</name><uri>http://www.theatlantic.com/author/lila-shroff/?utm_source=feed</uri></author><media:content url="https://cdn.theatlantic.com/thumbor/jN0NRxco2DhLoUe_mhBiXlFKWEs=/media/img/mt/2026/03/2026_03_25_openAI_mpg/original.gif"><media:credit>Illustration by Matteo Giuseppe Pani / The Atlantic</media:credit></media:content><title type="html">OpenAI Is Doing Everything … Poorly</title><published>2026-03-25T19:52:00-04:00</published><updated>2026-03-26T14:02:31-04:00</updated><summary type="html">The company’s sudden decision to pull the plug on Sora is a sign of deeper trouble.</summary><link href="https://www.theatlantic.com/technology/2026/03/sora-openai-identity-crisis/686544/?utm_source=feed" rel="alternate" type="text/html"/></entry></feed>