<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Study Hacks - Decoding Patterns of Success - Cal Newport</title>
	<atom:link href="https://calnewport.com/blog/feed/" rel="self" type="application/rss+xml" />
	<link>https://calnewport.com/blog/</link>
	<description>Computer Scientist &#38; Bestselling Author</description>
	<lastBuildDate>Sun, 19 Apr 2026 23:58:33 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://calnewport.com/wp-content/uploads/2022/10/cropped-cal-newport-favicon-512x512-1-32x32.png</url>
	<title>Study Hacks - Decoding Patterns of Success - Cal Newport</title>
	<link>https://calnewport.com/blog/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Brandon Sanderson vs. AI Art</title>
		<link>https://calnewport.com/brandon-sanderson-vs-ai-art/</link>
					<comments>https://calnewport.com/brandon-sanderson-vs-ai-art/#comments</comments>
		
		<dc:creator><![CDATA[Study Hacks]]></dc:creator>
		<pubDate>Mon, 20 Apr 2026 10:00:00 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://calnewport.com/?p=16872</guid>

					<description><![CDATA[<p>Late last year, the fantasy novelist Brandon Sanderson gave a talk at Dragonsteel Nexus, an annual conference organized by his media company. It was titled, ... <a title="Brandon Sanderson vs. AI Art" class="read-more" href="https://calnewport.com/brandon-sanderson-vs-ai-art/" aria-label="Read more about Brandon Sanderson vs. AI Art">Read more</a></p>
<p>The post <a href="https://calnewport.com/brandon-sanderson-vs-ai-art/">Brandon Sanderson vs. AI Art</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Late last year, the fantasy novelist Brandon Sanderson gave a talk at Dragonsteel Nexus, an annual conference organized by his media company. It was titled, <a href="https://youtu.be/mb3uK-_QkOo?si=evm1LnMf1TCQ5bH6">​“The Hidden Cost of AI Art.”​</a></p>



<p>As Sanderson explains, early in his address: “The surge of large language models and generative AI raises questions that are fascinating, and even if I dislike how the movement is going in relation to writing and art, I want to learn from the experience of what’s happening.”</p>



<p>Sanderson makes it clear that he disapproves of AI-generated art (“my stomach turns”), but he wants to understand better why this is the case. To do so, he begins considering and then ultimately dismissing a series of common objections:</p>



<ul class="wp-block-list">
<li><strong>Does he dislike AI art because of the economic and environmental impacts?</strong> “Well, those do concern me, but if I’m answering honestly, I would still have a problem with it even if AI were not so resource hungry.”</li>



<li><strong>Does he dislike AI art because it’s trained on the work of existing artists?</strong> “Well, I don’t like that. But even if it were trained using no copyrighted work, I’d still be concerned.”</li>



<li><strong>Does he just hate the idea of a machine replacing a person?</strong> Sanderson references the folk tale of John Henry attempting to beat a steam drill in a tunnel-digging competition that culminates in Henry’s death. “We respect him, but as a society we chose the steam drill. And I would too&#8230;The truth is, I’m more than happy to have steam engines drilling tunnels for me to drive through.”</li>
</ul>



<p><em>So what is it?</em></p>



<span id="more-16872"></span>



<p>Sanderson ultimately lands on a more personal reason. Talking about his struggles with his first (failed) book manuscripts, he identifies the key value of art: it changes the artist who attempts it. As he elaborates:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>“Maybe someday the language models will be able to write books better than I can. But here’s the thing: Using those models in such a way absolutely misses the point, because it looks at art only as a product. Why did I write [my first manuscript]?&#8230; It was for the satisfaction of having written a novel, feeling the accomplishment, and learning how to do it. I tell you right now, if you’ve never finished a project on this level, it’s one of the most sweet, beautiful, and transcendent moments. I was holding that manuscript, thinking to myself, ‘I did it. I did it.’”</p>
</blockquote>



<p>As a writer myself, I’ve also been thinking about this question recently. I like Sanderson’s take, but I’ve been developing one of my own. I understand art to be an act of deep human communication, in which the artist uses a tangible medium, such as a page of prose or a painted canvas, to transmit a complex internal cognitive state from their brain to that of their audience.</p>



<p>It’s telepathy. And it’s one of the most beautiful and human things we do.</p>



<p>This makes the idea of reading a book written by a language model, or watching a film generated by a prompt, intrinsically absurd, if not anti-human. It’s the heroin needle providing a quixotic simulation of love.</p>



<p>What really struck me about Sanderson’s talk, however, was his conclusion. If art is deeply human, he argues, then it’s up to us to define it. “That’s the great thing about art – we define it, and we give it meaning,” he says. “The machines can spit out manuscript after manuscript after manuscript. They can pile them to the pillars of heaven itself. But all we have to do is say ‘no.’”</p>



<p>I’ve noticed a trend in recent AI commentary toward a certain nihilistic passivity. You probably know what I&#8217;m talking about – the now popular style of essay in which the author, with a sort of worldly weariness, lays out some grim scenario in which AI destroys something sacred, and then sort of just leaves it there, like a cat dropping a dead bird on the doorstep.</p>



<p>I’m getting tired of this meekness.</p>



<p>Sanderson reminds us that we have agency. In the areas that matter most, it’s us, not the whims of Sam Altman or Dario Amodei, that determine how we shape our existence. All we have to do is say “no.”</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<h3 class="wp-block-heading">Correction: </h3>



<p>In last week&#8217;s&nbsp;<a href="https://preview.convertkit-mail4.com/click/dpheh0hzhm/aHR0cHM6Ly93d3cueW91dHViZS5jb20vd2F0Y2g_dj1rLThzdFFDZVFpRQ==" target="_blank" rel="noreferrer noopener">AI Reality Check episode</a>&nbsp;of my podcast, I said the following:</p>



<p>&#8220;If you go back and look at the release notes for Anthropic&#8217;s earlier, less powerful Opus 4.6 LLM, they say the following: their researchers used Opus to find, quote, &#8216;over 500 exploitable zero-day vulnerabilities, some of which are decades old.&#8217; And let&#8217;s stop for a moment because that note, which was hidden in the system card for Opus 4.6, is almost word for word what Anthropic said about Mythos.&#8221;</p>



<p>Some of this wording was sloppy, so I want to clarify it here. I was referring to&nbsp;<a href="https://preview.convertkit-mail4.com/click/dpheh0hzhm/aHR0cHM6Ly9yZWQuYW50aHJvcGljLmNvbS8yMDI2L3plcm8tZGF5cy8=" target="_blank" rel="noreferrer noopener">this report</a>&nbsp;on Opus 4.6, which Anthropic published the same day it was released. This is not technically the system card for Opus 4.6, but it is accurately described as&nbsp;<em>release notes</em>&nbsp;(or perhaps&nbsp;<em>supplementary release notes</em>).</p>



<p>This report said: &#8220;Opus 4.6 found high-severity vulnerabilities, some that had gone undetected for decades.&#8221; In another place, it said: &#8220;So far, we&#8217;ve found and validated more than 500 high-severity vulnerabilities.&#8221; Both the title of the report and the conclusion refer to these vulnerabilities as “0-day.”</p>



<p>The specific quote I provided, however, does not appear in the report. It&#8217;s actually a summary of the report from <a href="https://preview.convertkit-mail4.com/click/dpheh0hzhm/aHR0cHM6Ly94LmNvbS9fRGFuaWVsU2luY2xhaXIvc3RhdHVzLzIwMTk1MjcxMDk4ODczNzc1NTc=" target="_blank" rel="noreferrer noopener">this tweet</a>. In my opinion, the summary is accurate, but the way I worded the above implies that it was actually found in the report, which it was not.</p>



<p><em>Thank you to the AI researcher who pointed out these issues. I appreciate corrections! You can always send concerns or notes to podcast@calnewport.com.</em></p>
<p>The post <a href="https://calnewport.com/brandon-sanderson-vs-ai-art/">Brandon Sanderson vs. AI Art</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://calnewport.com/brandon-sanderson-vs-ai-art/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Is Claude Mythos “Terrifying” or Just Hype?</title>
		<link>https://calnewport.com/is-claude-mythos-terrifying-or-just-hype/</link>
					<comments>https://calnewport.com/is-claude-mythos-terrifying-or-just-hype/#comments</comments>
		
		<dc:creator><![CDATA[Study Hacks]]></dc:creator>
		<pubDate>Mon, 13 Apr 2026 10:00:00 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://calnewport.com/?p=16865</guid>

					<description><![CDATA[<p>Last week, millions of New York Times readers were subjected to ​an alarming column​ by Thomas Friedman. “Normally right now I would be writing about ... <a title="Is Claude Mythos “Terrifying” or Just Hype?" class="read-more" href="https://calnewport.com/is-claude-mythos-terrifying-or-just-hype/" aria-label="Read more about Is Claude Mythos “Terrifying” or Just Hype?">Read more</a></p>
<p>The post <a href="https://calnewport.com/is-claude-mythos-terrifying-or-just-hype/">Is Claude Mythos “Terrifying” or Just Hype?</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Last week, millions of <em>New York Times</em> readers were subjected to <a href="https://www.nytimes.com/2026/04/07/opinion/anthropic-ai-claude-mythos.html">​an alarming column​</a> by Thomas Friedman. “Normally right now I would be writing about the geopolitical implications of the war with Iran,” Friedman begins, before soon continuing, “but I want to interrupt that thought to highlight a stunning advance in artificial intelligence — one that arrived sooner than expected and that will have equally profound geopolitical implications.”</p>



<p>The “stunning advance” was the release of Anthropic&#8217;s new LLM, named Claude Mythos. In a lengthy <a href="https://www.anthropic.com/glasswing">​press release​</a>, Anthropic announced that the model would be made available to a consortium of business partners, but not to the general public. To justify this decision, Anthropic cited their concerns about its effectiveness at finding security vulnerabilities in source code, noting: “AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.”</p>



<p>They go on to explain that Mythos “has already found thousands of high-severity vulnerabilities, including some in <em>every major operating system and web browser</em>.”</p>



<p>This announcement clearly rattled Friedman, who called Anthropic’s decision not to release the model a “terrifying warning sign,” writing:</p>



<p>“Holy cow! Superintelligent A.I. is arriving faster than anticipated, at least in this area…If this A.I. tool were, indeed, to become widely available, it would mean the ability to hack any major infrastructure system — a hard and expensive effort that was once essentially the province only of private-sector experts and intelligence organizations — will be available to every criminal actor, terrorist organization and country, no matter how small.”</p>



<p>Friedman was far from alone in this concern. Many major news outlets expressed similar unease about this scary new development, including <a href="https://finance.yahoo.com/video/is-anthropics-claude-mythos-an-ai-nightmare-waiting-to-happen-203000700.html">​one particularly anxiety-provoking headline​</a> that asked if Mythos was an “AI nightmare waiting to happen?”</p>



<p>So, what’s really going on here?</p>



<p>I thought it was worth taking a moment to look closer, not just to address the specific worries about Mythos, but also to help recalibrate, more generally, how those of us seeking depth in a distracted world should consume AI news.</p>



<p class="has-text-align-center">~~~</p>



<p>When I talked to people who were spooked by Friedman’s column, they tended to be under the impression that this ability to find and exploit security vulnerabilities was a new phenomenon: a skill that emerged unexpectedly in Mythos, &#8220;terrifying&#8221; those who studied it.</p>



<p>In reality, security researchers have been worried about LLMs being used for this purpose since consumer LLMs first appeared.</p>



<p>Back in 2024, for example, researchers at the University of Illinois published <a href="https://arxiv.org/abs/2404.08144">​a splashy study​</a> about using GPT-4 to attack security vulnerabilities. They found that GPT-4 successfully exploited 87% of the vulnerabilities it was presented with, compared to close to 0% for GPT-3.5. “Our findings raise questions around the widespread deployment of highly capable LLM agents,” they concluded.</p>



<p>To be fair, in the case of GPT-4, researchers were assessing whether an LLM could write code to exploit a known vulnerability. Mythos, however, can also find these vulnerabilities from scratch. But this isn’t new either.</p>



<p>Accompanying the release notes for Anthropic’s earlier Opus 4.6 LLM was <a href="https://www.reddit.com/r/Anthropic/comments/1r05i5g/opus_46_found_over_500_exploitable_0days_some_of/">​the observation​</a> that Anthropic’s security team used the model to find “over 500 exploitable 0-day [vulnerabilities], some of which are decades old.” This is almost word-for-word what Anthropic said last week about Mythos, the main difference being that they replaced 500 with “thousands.”</p>



<p>We are not, therefore, talking about a new capability, but rather one that has been around for multiple years.</p>



<p>The relevant question then becomes: how much better is Mythos at finding vulnerabilities? It’s hard to tell for sure because Anthropic has kept their new model private. They did, however, disclose that Mythos scored 83.1% on a well-known cybersecurity benchmark. For comparison, Opus 4.6 scored 66.6% on this same test.</p>



<p>In general, benchmark results should be taken with a grain of salt as they represent specific (often narrow) tests that researchers can tune their models to pass. But even if we accept that this particular measure is useful, a sixteen percentage point increase seems to represent solid incremental progress more than a nightmarish leap.</p>



<p>When we turn our attention to actual results, the waters become even murkier. In a recent Substack post (<a href="https://garymarcus.substack.com/p/three-reasons-to-think-that-the-claude">​which is worth reading​</a>), Gary Marcus rounds up responses from security researchers who took a closer look at the specific exploits that Anthropic reported Mythos had discovered. They were not impressed.</p>



<ul class="wp-block-list">
<li>Philo Groves, for example, <a href="https://x.com/philogroves/status/2042195139477557499?s=61">​noted​</a> that Mythos’s attention-grabbing attack on the Firefox browser required certain common security features to be disabled, and it built on results previously discovered by Opus. (“Shocker,” he concludes sardonically.)</li>



<li>The CEO of the AI company HuggingFace then <a href="https://x.com/clementdelangue/status/2041953761069793557?s=61">​reported​</a> that they took all of the specific vulnerabilities that Anthropic highlighted and “ran them through small, cheap, open-weight models.” What did they find? “Those models recovered much of the same analysis.”</li>
</ul>



<p>Since Marcus published his essay, I’ve come across several more similar findings:</p>



<ul class="wp-block-list">
<li>The AI security expert Stanislav Fort ran <a href="https://x.com/stanislavfort/status/2041922370206654879">​an experiment​</a> to see if existing, cheap open-weight models could find the same vulnerability in FreeBSD (an open-source operating system) that Anthropic touted as evidence of Mythos’s scary abilities to uncover bugs that had been hiding for decades. The result: all eight existing models they tested discovered the same issue.</li>



<li>Meanwhile, the renowned security researcher Bruce Schneier <a href="https://www.youtube.com/watch?v=PsKVSHjres4">​weighed in​</a>, similarly concluding: “You don’t need Mythos to find the vulnerabilities they found.”</li>
</ul>



<p>And of course, it doesn’t help that a week before Anthropic released this supposedly super-powered vulnerability detector, they accidentally leaked the Claude Code source, and security researchers immediately found <a href="https://www.securityweek.com/critical-vulnerability-in-claude-code-emerges-days-after-source-leak/">​serious vulnerabilities​</a>. (I guess Anthropic forgot to use Mythos to clean up their own software…)</p>



<p class="has-text-align-center">~~~</p>



<p>What’s really happening?</p>



<p>It’s fair to say that LLMs have created <em>significant</em> cybersecurity concerns that researchers have been scrambling to address in recent years. It’s also fair to say, however, that we don’t yet have evidence that Claude Mythos significantly changed this reality. If anything, some of the early independent testing by security researchers implies that Mythos might be better understood as a version of Opus 4.6 tuned to perform better on a handful of benchmarks. And yet, many still took Anthropic at their word and covered this model’s release as a catastrophic event.</p>



<p>In a <a href="https://www.youtube.com/watch?v=mcN1VTTIjQs">​recent video​</a>, the AI commentator Mo Bitar compared Anthropic’s model rollouts to Apple iPhone launches, where every year they resell you the same product with minor improvements. “Except here,” he adds, “the product is existential dread.”</p>



<p>And we keep falling for it.</p>



<p>I think we’ve entered a stage where we need to almost entirely discount any claims made by the AI companies themselves <em>until</em> we can independently verify what’s actually going on.</p>
<p>The post <a href="https://calnewport.com/is-claude-mythos-terrifying-or-just-hype/">Is Claude Mythos “Terrifying” or Just Hype?</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://calnewport.com/is-claude-mythos-terrifying-or-just-hype/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>In Defense of Thinking</title>
		<link>https://calnewport.com/in-defense-of-thinking-2/</link>
					<comments>https://calnewport.com/in-defense-of-thinking-2/#comments</comments>
		
		<dc:creator><![CDATA[Study Hacks]]></dc:creator>
		<pubDate>Mon, 30 Mar 2026 10:00:00 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://calnewport.com/?p=16857</guid>

					<description><![CDATA[<p>Ten years ago, I published ​Deep Work​. It was my second mainstream hardcover idea book. The previous title, ​So Good They Can’t Ignore You​, hadn’t ... <a title="In Defense of Thinking" class="read-more" href="https://calnewport.com/in-defense-of-thinking-2/" aria-label="Read more about In Defense of Thinking">Read more</a></p>
<p>The post <a href="https://calnewport.com/in-defense-of-thinking-2/">In Defense of Thinking</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Ten years ago, I published <a href="https://www.amazon.com/Deep-Work-Focused-Success-Distracted/dp/1455586692">​<em>Deep Work</em>​</a><em>. </em>It was my second mainstream hardcover idea book. The previous title, <a href="https://www.amazon.com/dp/1455509124/">​<em>So Good They Can’t Ignore You</em>​</a><em>,</em> hadn’t sold as well as we hoped, so the expectations were lower for this follow-up.</p>



<p>This turned out to be freeing, as it allowed me to write <em>Deep Work</em> largely for myself – exploring the conceptual edges of the issues surrounding distraction that interested me most.</p>



<p>I was fascinated, for example, by the economic reality that so many knowledge work organizations systematically undervalued focus, and was convinced that this provided a massive opportunity for those willing to correct for this mistake. In this way, I saw myself as articulating something like <em>Moneyball</em> for the cubicle class. I also firmly believed that the act of thinking was at the core of the post-Paleolithic human experience; the source of our greatest ideas, satisfactions, and even moments of transcendence.</p>



<p>This mixture of the economic and philosophical was different from the typical book in this genre at the time. Readers probably expected that I would open on a breathless tale of an overworked executive, then regurgitate some stats about interruptions, before proceeding with long lists of tips calibrated to be practical, but also not too challenging, presented in a conversational tone and accompanied by clearly manipulated case studies.</p>



<p>But <em>Deep Work</em> was much weirder and more intense than that. Re-reading it recently, I was struck by how many of my stories had nothing to do with the knowledge sector at all. I quoted philosophers of religion and a blacksmith who forged swords with ancient techniques. I profiled a memory champion and discussed <em>chavruta,</em> the Jewish practice of studying Talmud or Torah in pairs. Rather than opening the book on a frustrated executive, I focused on Carl Jung’s efforts to break free from Sigmund Freud’s capriciousness. It was a direct look at the sources and ideas that most resonated with me.</p>



<p>This idiosyncratic approach seemed to reveal something fundamentally true about the problematic state of work at that time, as the book soon found an audience, going on to sell more than two million copies in over forty-five languages. (In its wake, <em>So Good They Can’t Ignore You</em> finally found its groove as well, quietly selling more than half a million copies, providing me with a dash of retrospective vindication.)</p>



<p>All of this led me recently to ask a natural follow-up question: <strong>How have things changed since that book first came out in 2016?</strong></p>



<p>I tackled this query in <a href="https://www.nytimes.com/2026/03/27/opinion/technology-mental-fitness-cognitive.html">​a long-form essay​</a> I published in the <em>New York Times</em> over the weekend. My answer wasn’t optimistic:</p>



<span id="more-16857"></span>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>“The problems I focused on in <em>Deep Work</em>, and in my writing since, have been getting steadily worse. In 2016 my main concern was helping people find enough free time for deep work. Today I think we’re rapidly losing the ability to think deeply at all, regardless of how much space we can find in our schedules for these efforts.&#8221;</p>
</blockquote>



<p>Distractions in the workplace intensified over the past decade with the addition of instant messaging tools like Slack and low-friction digital meeting programs like Zoom. Outside of work, social media, which was generally still admired when <em>Deep Work</em> came out, has morphed into an addictive TikTok-ified slurry of optimized brain rot. Meanwhile, new AI tools offer quick-fix shortcuts to whatever intellectually engaging work activities remain.</p>



<p>None of this is great news.</p>



<p>So, what should we do? The obvious short answer is to read <a href="https://www.amazon.com/Deep-Work-Focused-Success-Distracted/dp/1455586692">​<em>Deep Work</em>.​</a> (Or, if you already have, buy some copies for people you know who need to hear its message!)</p>



<p>But that’s only a small step toward our larger goal of a world in which we once again respect the act of cognition. In my <em>Times</em> piece, I suggest a louder response: we launch a revolution in defense of thinking.</p>



<p>I go on to suggest multiple concrete actions that such a revolution can include, such as:</p>



<ul class="wp-block-list">
<li>Stop consuming social media (which is, if we are being honest, digital junk food and something adults largely need to eliminate from a healthy content diet).</li>



<li>Keep your phone plugged in and charging when at home instead of on your person.</li>



<li>Push Congress to follow Australia&#8217;s lead and ban social media for kids.</li>



<li>Build work cultures in which phones and laptops stay out of meetings, and find collaboration strategies that don’t require constant messaging.</li>



<li>Stop vague demands to “use AI” and instead carefully integrate these tools where they actually make us smarter, not just busier.</li>
</ul>



<p>But more important than any specific suggestion is the larger spirit of revolution. “I’m done ceding my brain — the core of all that makes me who I am — to the financial interests of a small number of technology billionaires or the shortsighted conveniences of hyperactive communication styles,” I write in the conclusion of my <em>Times</em> op-ed. “It’s time to move past fretting about our slide into the cognitive shallows and decide to actually do something about it.”</p>
<p>The post <a href="https://calnewport.com/in-defense-of-thinking-2/">In Defense of Thinking</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://calnewport.com/in-defense-of-thinking-2/feed/</wfw:commentRss>
			<slash:comments>23</slash:comments>
		
		
			</item>
		<item>
		<title>Avoiding Digital Productivity Traps</title>
		<link>https://calnewport.com/avoiding-digital-productivity-traps/</link>
					<comments>https://calnewport.com/avoiding-digital-productivity-traps/#comments</comments>
		
		<dc:creator><![CDATA[Study Hacks]]></dc:creator>
		<pubDate>Mon, 23 Mar 2026 10:00:00 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://calnewport.com/?p=16818</guid>

					<description><![CDATA[<p>​Last week​ in this newsletter, I summarized some interesting results from ​a study​ that analyzed the behavior of 164,000 knowledge workers. It found that introducing ... <a title="Avoiding Digital Productivity Traps" class="read-more" href="https://calnewport.com/avoiding-digital-productivity-traps/" aria-label="Read more about Avoiding Digital Productivity Traps">Read more</a></p>
<p>The post <a href="https://calnewport.com/avoiding-digital-productivity-traps/">Avoiding Digital Productivity Traps</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p><a href="https://calnewport.com/why-hasnt-ai-made-work-easier/">​Last week​</a> in this newsletter, I summarized some interesting results from <a href="https://www.wsj.com/tech/ai/ai-isnt-lightening-workloads-its-making-them-more-intense-e417dd2c">​a study​</a> that analyzed the behavior of 164,000 knowledge workers. It found that introducing AI tools increased administrative tasks by more than 90% while reducing deep work effort by almost 10%.</p>



<p>The problem, I concluded, was that digital productivity tools sometimes speed up the <em>wrong</em> tasks, which might feel efficient in the moment, but lead us to accomplish less over time. As I emphasized, AI is not the only technology to produce this paradoxical side effect — we saw something similar with email, mobile computing, and online meeting software as well.</p>



<p><em>So, what’s the solution to avoid these traps?</em></p>



<p>In <a href="https://open.spotify.com/show/0e9lFr3AdJByoBpM6tAbxD?si=c92344b6836b4c76">​today’s episode​</a> of my podcast, I suggested three ideas that might help. I want to summarize them here as well:</p>



<span id="more-16818"></span>



<p><strong>Idea #1:</strong> Use a Better Scoreboard</p>



<p>Make sure you measure what <em>actually</em> matters in your job. If you’re a professor at a research institution, for example, this might be the number of papers you publish per year. If you’re a team manager, it might be the number of priority projects completed per month.</p>



<p>When you introduce new digital productivity tools into your workflow, don’t focus too much on their impact on individual tasks (e.g., “Wow! That email was much faster to send than a fax,” or “AI just finished a task in 20 minutes that would have taken me 3 hours!”). Pay attention instead to your scoreboard. If you’re not producing more valuable output than before, the tool isn’t really making you more productive.</p>



<p><strong>Idea #2:</strong> Focus on the Right Bottlenecks</p>



<p>If you look closer at many knowledge work projects, you’ll identify a key <em>bottleneck</em> that determines how fast they can be accomplished. If you want to become more productive, you should look for ways to deploy tools that improve this specific step.</p>



<p>When working on <em>Deep Work</em>, for example, I spoke with a prominent Wharton professor who told me that one of the keys to publishing journal papers in his field was access to interesting data sets. He published more papers per year than most of his peers, largely because he spent more time building relationships with companies and institutions in search of good data. This was the bottleneck for his work.</p>



<p>Accordingly, any tool that could help him cultivate more such relationships and gather better data from the relationships he had already formed would directly improve his productivity. Compare this, for example, to using Claude Code to speed up the process of producing plots for his papers. This might, in limited windows of time, make his job more convenient, but not necessarily increase the number of papers he publishes per year.</p>



<p><strong>Idea #3:</strong> Separate Deep from Shallow Work</p>



<p>My final idea is the simplest: on your daily calendar, clearly separate time for focused effort that directly produces value from administrative, logistical, and collaborative tasks. In this way, if a digital productivity tool ends up accidentally increasing the volume of shallow work you face each day, you’ll limit the damage to your ability to make progress on important projects.</p>



<p>This makes it easier to experiment with different tools without worrying that you might end up — like many of the subjects in the study cited above — suddenly overwhelmed by the ultra-fast processing of minutiae while the big things slowly languish.</p>
<p>The post <a href="https://calnewport.com/avoiding-digital-productivity-traps/">Avoiding Digital Productivity Traps</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://calnewport.com/avoiding-digital-productivity-traps/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>Why Hasn’t AI Made Work Easier?</title>
		<link>https://calnewport.com/why-hasnt-ai-made-work-easier/</link>
					<comments>https://calnewport.com/why-hasnt-ai-made-work-easier/#comments</comments>
		
		<dc:creator><![CDATA[Study Hacks]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 10:00:00 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://calnewport.com/?p=16813</guid>

					<description><![CDATA[<p>I’ve been studying the intersection of digital technology and office work for quite some time. (I find it hard to believe that my book, ​Deep ... <a title="Why Hasn’t AI Made Work Easier?" class="read-more" href="https://calnewport.com/why-hasnt-ai-made-work-easier/" aria-label="Read more about Why Hasn’t AI Made Work Easier?">Read more</a></p>
<p>The post <a href="https://calnewport.com/why-hasnt-ai-made-work-easier/">Why Hasn’t AI Made Work Easier?</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I’ve been studying the intersection of digital technology and office work for quite some time. (I find it hard to believe that my book, <a href="https://www.amazon.com/Deep-Work-Focused-Success-Distracted/dp/1455586692">​<em>Deep Work</em>​</a>, just passed its ten-year anniversary!?) Here’s a pattern I’ve observed again and again:</p>



<ul class="wp-block-list">
<li>A new technology promises to speed up some annoying aspects of our jobs.</li>



<li>Everyone gets excited about freeing up more time for deep work and leisure.</li>



<li>We end up <em>busier</em> than before without producing more of the high-value output that actually moves the needle.</li>
</ul>



<p>This happened with the front-office IT revolution, and email, and mobile computing, and once again with video-conferencing.</p>



<p>I’m now starting to fear that we’re encountering the same thing with AI as well.</p>



<p>My worries were stoked, in part, by a recent article in the <em>Wall Street Journal</em>, titled <a href="https://www.wsj.com/tech/ai/ai-isnt-lightening-workloads-its-making-them-more-intense-e417dd2c">​“AI Isn’t Lightening Workloads. It’s Making Them More Intense.”​</a></p>



<p>The piece cites new research from the software company ActivTrak, which analyzed the digital activity of 164,000 workers across more than 1,000 employers. What makes the study notable is its methodology: it tracked individual AI users for 180 days before and after they began using these tools, providing clear insight into what changed. The results?</p>



<p>“ActivTrak found AI intensified activity across nearly every category: The time they spent on email, messaging and chat apps more than doubled, while their use of business-management tools, such as human-resources or accounting software, rose 94%.”</p>



<p>The one category where activity was <em>not</em> intensified, however, was deep work:</p>



<p>“[T]he amount of time AI users devoted to focused, uninterrupted work—the kind of concentration often required for figuring out complex problems, writing formulas, creating and strategizing—fell 9%, compared with nearly no change for nonusers.”</p>



<p>This is a worst-case scenario: you work faster and harder, but mainly on shallow tasks that are mentally taxing (because of all the context shifting they require) yet help the bottom line only indirectly, unlike deeper efforts.</p>



<span id="more-16813"></span>



<p>It’s not quite clear why AI tools are having this impact. One tantalizing clue, however, comes from Berkeley professor Aruna Ranganathan, who is quoted in the article saying: “AI makes additional tasks feel easy and accessible, creating a sense of momentum.”</p>



<p>This points toward a pattern similar to what happened when email first arrived. It was undeniably true that sending emails was more efficient than wrangling fax machines and voicemail. But once workers gained access to low-friction communication, they transformed their days into a furious flurry of back-and-forth messaging that felt “productive” in the <a href="https://www.amazon.com/Slow-Productivity-Accomplishment-Without-Burnout/dp/0593544854/">​abstract, activity-centric sense​</a> of that term, but ultimately hurt almost every other aspect of their jobs and <a href="https://www.newyorker.com/tech/annals-of-technology/e-mail-is-making-us-miserable">​made everyone miserable​</a>.</p>



<p>AI tools might be replicating this dynamic with small, self-contained tasks. Users are now furiously bouncing ideas back and forth with chatbots, iteratively refining text and generating drafts of memos and slide decks that are often <a href="https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity">​too sloppy ​</a>to be useful. If they’re particularly tech savvy, perhaps they’re even monitoring the efforts of agent swarms deployed to parallelize such efforts even further. Once again, this all seems “productive” in the sense that these individual tasks appear to be happening faster, and activity seems intensified overall.</p>



<p>But are we sure we’re accelerating the right parts of our jobs?</p>



<hr class="wp-block-separator has-text-color has-global-color-8-color has-alpha-channel-opacity has-global-color-8-background-color has-background is-style-default"/>



<h3 class="wp-block-heading"><strong>I Need Your Help</strong></h3>



<p>I’m working on an article for a major publication about the move toward simple, high-friction, single-use technologies like the <a href="https://tincan.kids/">​Tin Can phone​</a>. If you have a Tin Can phone/are on the waiting list, or have recently embraced similar retro technologies, and are willing to talk, please send me an email at <a href="mailto:podcast@calnewport.com">​<strong>podcast@calnewport.com</strong>​</a>. I want to hear about your motivations and experience!</p>



<figure class="wp-block-image"><img fetchpriority="high" decoding="async" width="2400" height="240" src="https://calnewport.com/wp-content/uploads/2026/03/pqwRyXLTHodBmNohEhe4Yo.png" alt="" class="wp-image-16815" srcset="https://calnewport.com/wp-content/uploads/2026/03/pqwRyXLTHodBmNohEhe4Yo.png 2400w, https://calnewport.com/wp-content/uploads/2026/03/pqwRyXLTHodBmNohEhe4Yo-300x30.png 300w, https://calnewport.com/wp-content/uploads/2026/03/pqwRyXLTHodBmNohEhe4Yo-1024x102.png 1024w, https://calnewport.com/wp-content/uploads/2026/03/pqwRyXLTHodBmNohEhe4Yo-768x77.png 768w, https://calnewport.com/wp-content/uploads/2026/03/pqwRyXLTHodBmNohEhe4Yo-1536x154.png 1536w, https://calnewport.com/wp-content/uploads/2026/03/pqwRyXLTHodBmNohEhe4Yo-2048x205.png 2048w" sizes="(max-width: 2400px) 100vw, 2400px" /></figure>



<h3 class="wp-block-heading"><strong>AI Reality Check</strong>: Is Claude Conscious?</h3>



<p>If you were following AI news last week, you might have noticed a barrage of concerning headlines about Anthropic’s Claude LLM, including:</p>



<ul class="wp-block-list">
<li><a href="https://futurism.com/artificial-intelligence/anthropic-ceo-unsure-claude-conscious">​“Anthropic CEO Says Company No Longer Sure Whether Claude is Conscious.”​</a></li>



<li><a href="https://www.newsnationnow.com/jesse-weber-live/claude-ai-consciousness/">​“Is AI Assistant Claude Conscious – and Suffering from Anxiety?”​</a></li>



<li><a href="https://www.ndtv.com/world-news/is-claude-conscious-anthropic-ceo-dario-amodei-says-possibility-cant-be-ruled-out-11175771">​“Is Claude Conscious? Anthropic CEO Says Possibility Can’t Be Ruled Out”​</a></li>
</ul>



<p><em>Here’s what happened.</em> Anthropic infamously puts outlandish warnings and observations in their release notes for their new models because, I suppose, they think it makes them look more safety-aware and responsible (e.g., their classic <a href="https://www.aipanic.news/p/ai-blackmail-fact-checking-a-misleading">​AI blackmail farce​</a>).</p>



<p>True to form, in the notes accompanying the recent release of Opus 4.6, they wrote that the model <strong>“expresses occasional discomfort with the experience of being a product</strong>” and would <strong>“assign itself a 15 to 20 percent probability of being conscious under a variety of prompting circumstances.”</strong></p>



<p>That last part is key. With the right prompts, you can induce an LLM to describe itself as anything you want. Remember: the goal of LLMs is to complete whatever story they’re provided as input. If you wind a model up – even subtly – to write a story from the perspective of being a conscious AI, it will oblige.</p>



<p>Anyway, in <a href="https://www.nytimes.com/2026/02/12/opinion/artificial-intelligence-anthropic-amodei.html">​a recent interview​</a>, Ross Douthat asked Anthropic CEO Dario Amodei about this particular release note. Amodei answered, in part, by saying:</p>



<p>“We don’t know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we’re open to the idea that it could be.”</p>



<p>Of course, you could say the same thing about a vacuum cleaner. It’s a non-answer containing no actual information or testable claims. But the internet, being the internet, ran with it. <em>Sigh.</em></p>
<p>The post <a href="https://calnewport.com/why-hasnt-ai-made-work-easier/">Why Hasn’t AI Made Work Easier?</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://calnewport.com/why-hasnt-ai-made-work-easier/feed/</wfw:commentRss>
			<slash:comments>5</slash:comments>
		
		
			</item>
		<item>
		<title>The Original Attention Crisis</title>
		<link>https://calnewport.com/the-original-attention-crisis/</link>
					<comments>https://calnewport.com/the-original-attention-crisis/#comments</comments>
		
		<dc:creator><![CDATA[Study Hacks]]></dc:creator>
		<pubDate>Mon, 09 Mar 2026 10:00:00 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://calnewport.com/?p=16810</guid>

					<description><![CDATA[<p>I recently heard from a historian of science at All Souls College, Oxford. He forwarded me ​an essay​ he wrote about Nicolaus Steno, a seventeenth-century ... <a title="The Original Attention Crisis" class="read-more" href="https://calnewport.com/the-original-attention-crisis/" aria-label="Read more about The Original Attention Crisis">Read more</a></p>
<p>The post <a href="https://calnewport.com/the-original-attention-crisis/">The Original Attention Crisis</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I recently heard from a historian of science at All Souls College, Oxford. He forwarded me <a href="https://nunocastelbranco.substack.com/p/focused-work-in-early-modern-times">​an essay​</a> he wrote about Nicolaus Steno, a seventeenth-century anatomist and geologist who was later ordained as a Catholic bishop.</p>



<p>Steno’s training as a scholar unfolded in a period challenged by a novel problem: information overload. Here’s how the essay describes it:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>“Books were a leading distraction in the early modern period—and how envious we should be of those times. From the 1500s onward, with the development of the printing press and the humanist revival of ancient philosophies, knowledge became available at a much greater pace than ever before.”</p>
</blockquote>



<p>This created pressing questions for aspiring thinkers, including: “How do we decide what to read? How long should we read it for? Must every single chapter be excerpted?”</p>



<p>Part of the solution was the development of “new note-taking techniques,” including the copying of excerpts into a master notebook called a book of commonplaces. (For more on this technique, I recommend William Powers’s delightful 2010 techno-history, <a href="https://www.amazon.com/Hamlets-BlackBerry-Building-Good-Digital/dp/0061687170/">​<em>Hamlet’s BlackBerry</em>​</a>.)</p>



<p>But as the essay on Steno elaborates, better notes weren’t enough on their own, as there were simply too many good books available. In response to this reality, Steno, during his university studies in the 1650s, innovated some more advanced attention management strategies:</p>



<span id="more-16810"></span>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>“[H]e learned to focus on specific themes, rather than letting his mind read multiple things quickly. A ‘harmful hastening should be avoided’ as he put it. His solution was to ‘stick to one topic.’</p>



<p>In practice, that meant blocking specific moments of time to go through the hardest tasks. As he wrote in his personal notebook, ‘before noon nothing must be done except medical things.’ … As Steno told a friend, he took ‘almost all the morning hours’ to read the works of the Church Fathers and old biblical manuscripts available at the Medici library.”</p>
</blockquote>



<p>In other words, Steno created a method that combines what we might now call <a href="https://www.amazon.com/Slow-Productivity-Accomplishment-Without-Burnout/dp/0593544854">​slow productivity​</a>, <a href="https://www.amazon.com/Deep-Work-Focused-Success-Distracted/dp/1455586692">​deep work​</a>, and <a href="https://www.timeblockplanner.com/">​time blocking​</a>.</p>



<p>The lessons here are clear. The use of our brains to think deeply about meaningful ideas isn’t new. It’s been at the core of the human experience since the early modern period, when access to sophisticated information first became somewhat widespread.</p>



<p>The best practices developed back then remain the best practices today: avoid overload, focus on one thing at a time, and block off specific hours in your day for your most mentally demanding efforts.</p>



<figure class="wp-block-image"><img decoding="async" src="https://embed.filekitcdn.com/e/ekndSb6aixDTy6CAJEGkrv/pqwRyXLTHodBmNohEhe4Yo" alt=""/></figure>



<h3 class="wp-block-heading"><strong>AI Reality Check</strong>:</h3>



<p>Two weeks ago, a small financial services firm, Citrini Research, published <a href="https://www.citriniresearch.com/p/2028gic">​an essay​</a> describing a bleak scenario in which AI agents destroy the white-collar job market in the near future. The piece went viral and was <a href="https://www.bloomberg.com/news/articles/2026-02-24/citrini-founder-shocked-his-ai-prediction-spurred-stocks-selloff?embedded-checkout=true">​cited as a factor​</a> in a modest decline of the S&amp;P 500 the next day.</p>



<p>The Citrini essay wasn’t the first to float this scenario. In recent weeks, there have been multiple credulous articles and op-eds in major publications proposing similar outcomes (e.g., <a href="https://www.theatlantic.com/ideas/2026/02/ai-white-collar-jobs/686031/">​1​</a>, <a href="https://www.theatlantic.com/magazine/2026/03/ai-economy-labor-market-transformation/685731/">​2​</a>, <a href="https://www.nytimes.com/2026/03/05/opinion/ai-jobs-white-collar-apocalpyse.html">​and 3​</a>). But the negative impact on the stock market seems to have been the last straw for serious economists who began to push back on these technological ghost stories last week. (I particularly enjoyed a Deutsche Bank analyst who, perhaps borrowing <a href="https://calnewport.com/the-dangers-of-vibe-reporting-about-ai/">​some of my​</a> terminology, <a href="https://www.nytimes.com/2026/02/25/business/citrini-ai-stock-market.html">​told the <em>Times</em>​</a> that the Citrini article had a “vibes-to-substance ratio” that was “undeniably high.”)</p>



<p>If you’re looking to reduce your blood pressure about this idea that AI is about to unravel the economy, I suggest reading <a href="https://www.citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/">​a detailed response article​</a> published by an analyst from the Global Macro Strategies group at Citadel. It begins with a bit of finance geek sarcasm:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>“Despite the macroeconomic community struggling to forecast 2-month-forward payroll growth with any reliable accuracy, the forward path of labor destruction can apparently be inferred with significant certainty from a hypothetical scenario posted on Substack…”</p>
</blockquote>



<p>It then proceeds to systematically pick apart the economic naivety of these breathless op-eds and viral essays about how AI will dismantle the economy all at once. It certainly made me feel better.</p>



<p>(If you’re looking for additional soothing of your AI anxiety, then you should also check <a href="https://www.youtube.com/watch?v=JRayjrpX10k">​the first episode​</a> of my new <em>AI Reality Check</em> podcast series, which I published last Thursday. I have a new episode of the series coming out this upcoming Thursday as well.)</p>
<p>The post <a href="https://calnewport.com/the-original-attention-crisis/">The Original Attention Crisis</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://calnewport.com/the-original-attention-crisis/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>What Do Social Media Companies Fear? Time Management.</title>
		<link>https://calnewport.com/what-do-social-media-companies-fear-time-management/</link>
					<comments>https://calnewport.com/what-do-social-media-companies-fear-time-management/#comments</comments>
		
		<dc:creator><![CDATA[Study Hacks]]></dc:creator>
		<pubDate>Mon, 02 Mar 2026 11:00:00 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://calnewport.com/?p=16806</guid>

					<description><![CDATA[<p>I recently came across an interesting academic article in the journal Frontiers in Psychology. It was titled, ​“The relationships between social media use, time management, ... <a title="What Do Social Media Companies Fear? Time Management." class="read-more" href="https://calnewport.com/what-do-social-media-companies-fear-time-management/" aria-label="Read more about What Do Social Media Companies Fear? Time Management.">Read more</a></p>
<p>The post <a href="https://calnewport.com/what-do-social-media-companies-fear-time-management/">What Do Social Media Companies Fear? Time Management.</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I recently came across an interesting academic article in the journal <em>Frontiers in Psychology.</em> It was titled, <a href="https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1702767/full">​“The relationships between social media use, time management, and decision-making styles.”​</a></p>



<p>The paper’s author surveyed 612 university students and young adults, asking them, among other things, about their digital habits and levels of personal organization. Using a linear regression analysis, she uncovered the following:</p>



<p>“Social media use was negatively and significantly associated with overall time management and all its subscales.”</p>



<p>Here’s the standard interpretation of this result: Social media is distracting, and if you’re distracted, it becomes harder to maintain control over your schedule. So, the more you use social media, the worse you become at time management.</p>



<p>But I’ve become interested in the reverse form of this argument: <strong>the better your planning system, the less time you’ll spend on engagement-based applications like social media</strong>.</p>



<span id="more-16806"></span>



<p><em>Here’s my thinking…</em></p>



<p>When you’re following an intentional schedule, your efforts are oriented toward goals that you find important. You also feel a satisfying sense of self-efficacy. These realities engage your long-term reward system, which can override the urges generated by its short-term counterpart, dissipating the drive for quick gratification from activities like glancing at your phone.</p>



<p>In other words: The more you organize your analog life, the less appealing you’ll find the digital alternative.</p>



<p>If this is true, then maybe the thing social media companies fear most is not some newly powerful application-blocking software or impossibly strict regulation, but rather a good old-fashioned daily planner.</p>



<figure class="wp-block-image"><img decoding="async" src="https://embed.filekitcdn.com/e/ekndSb6aixDTy6CAJEGkrv/pqwRyXLTHodBmNohEhe4Yo" alt=""/></figure>



<h3 class="wp-block-heading"><strong>In Other News</strong>:</h3>



<p>A lot of people I know have been freaked out recently by a viral essay with a grandiose title: <a href="https://x.com/mattshumer_/status/2021256989876109403">​“Something Big is Happening.”​</a> I recently released <a href="https://www.youtube.com/watch?v=Ijt8lV6b7QY">​a short video​</a> in which I conduct a close analysis of this piece. (Spoiler alert: I wasn’t impressed.) <a href="https://www.youtube.com/watch?v=Ijt8lV6b7QY">​<em>Check it out.</em>​</a></p>



<p>(More generally, I’ve been considering starting a separate weekly podcast/newsletter dedicated to providing a reality check on recent AI news. It feels like it might be useful to separate this discussion from my existing podcast and newsletter, which are more focused on how individuals can seek depth in a distracted world. But also, maybe this is a bad idea? I’m interested to hear your thoughts about this plan.)</p>
<p>The post <a href="https://calnewport.com/what-do-social-media-companies-fear-time-management/">What Do Social Media Companies Fear? Time Management.</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://calnewport.com/what-do-social-media-companies-fear-time-management/feed/</wfw:commentRss>
			<slash:comments>19</slash:comments>
		
		
			</item>
		<item>
		<title>Film Students Can No Longer Sit Through Films</title>
		<link>https://calnewport.com/film-students-can-no-longer-sit-through-films/</link>
					<comments>https://calnewport.com/film-students-can-no-longer-sit-through-films/#comments</comments>
		
		<dc:creator><![CDATA[Study Hacks]]></dc:creator>
		<pubDate>Mon, 23 Feb 2026 11:00:00 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://calnewport.com/?p=16793</guid>

					<description><![CDATA[<p>Last month, The Atlantic published an article with an alarming headline: ​“The Film Students Who Can No Longer Sit Through Films.”​ The author of the ... <a title="Film Students Can No Longer Sit Through Films" class="read-more" href="https://calnewport.com/film-students-can-no-longer-sit-through-films/" aria-label="Read more about Film Students Can No Longer Sit Through Films">Read more</a></p>
<p>The post <a href="https://calnewport.com/film-students-can-no-longer-sit-through-films/">Film Students Can No Longer Sit Through Films</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Last month, <em>The</em> <em>Atlantic</em> published an article with an alarming headline: <a href="https://www.theatlantic.com/ideas/2026/01/college-students-movies-attention-span/685812/">​“The Film Students Who Can No Longer Sit Through Films.”​</a></p>



<p>The author of the piece, Rose Horowitch, spoke with professors around the country who have begun to complain about this trend. What she learned was disheartening:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>“I used to think, if homework is watching a movie, that is the best homework ever,” Craig Erpelding, a film professor at the University of Wisconsin at Madison, told me. “But students will not do it.”</p>



<p>I heard similar observations from 20 film-studies professors around the country. They told me that over the past decade, and particularly since the pandemic, students have struggled to pay attention to feature-length films.</p>
</blockquote>



<p>What’s the source of this attention span crisis? The professors interviewed for Horowitch&#8217;s article point to a clear culprit: <em>smartphones</em>.</p>



<p>The founding director of Tufts University’s Film and Media Studies program, for example, tried to ban electronics during screenings, but found the rule impossible to enforce. “About half the class ends up looking furtively at their phones,” she said. Meanwhile, a Cinema and Media Studies professor at USC reports that his students remind him of “nicotine addicts going through withdrawal…the longer they go without checking their phone, the more they fidget.”</p>



<p>The mechanism at play here is an ability that reading scholar Maryanne Wolf calls <em>cognitive patience</em>, which is <a href="https://ssol-journal.com/articles/10.61645/ssol.176">defined as</a> the “ability to [maintain] focused and sustained attention and delay gratification, while refraining from multitasking.”</p>



<p>Smartphones degrade cognitive patience because they activate neuronal bundles in the brain’s short-term reward system that anticipate a high expected value from picking up the device. These bundles effectively <em>vote</em> for the distracting behavior, triggering a cascade of neurochemicals that we experience as motivation to grab the phone. After a while, due to lack of practice, you lose your comfort with sustained attention altogether.</p>



<p>It’s no wonder more and more people lack the cognitive patience to make it through a two-hour film!</p>



<p>But as I elaborate on my <a href="https://open.spotify.com/show/0e9lFr3AdJByoBpM6tAbxD?si=eebbb70d4a344292">podcast this week</a>, this specific problem with movies points toward a solution to the more general issue of weakened attention. Why not make the ability to watch an entire film a training goal in the attempt to reclaim our brains? Like a new runner working up to their first 5K, it’s a milestone that’s challenging, but not too challenging, and therefore a great way to begin an effort toward attention autonomy.</p>



<span id="more-16793"></span>



<p>Assuming you take on this goal, what’s the best way to improve your cinematic cognitive patience? Here are my three suggestions:</p>



<ol class="wp-block-list">
<li><strong>Keep your phone in a different room.</strong> This prevents your short-term reward system from firing out of control with distracting impulses.</li>



<li><strong>Watch better movies</strong>. If you have a meaningful viewing experience, your long-term reward system will more strongly associate movies with lasting benefits, making it easier to delay gratification in the future.</li>



<li><strong>Practice the thirty-minute rule</strong>. To help get through these movies at first, read a review or analysis before you start that helps explain why the film is good. Pause the movie every thirty minutes or so to read <em>another</em> review or analysis. This helps reorient your brain toward a perspective of critical appreciation, allowing you to continually find value and avoid the sense of slogging for the sake of slogging.</li>
</ol>



<p>I appreciate the irony here: I’m suggesting you watch one screen to reduce the distracting impact of another. But it’s become clear to me recently that although many people are fed up with the impact of digital devices on their brains, they don’t know how to push back. Maybe rediscovering the patient joys of movies can be a part of that answer…</p>



<figure class="wp-block-image"><img decoding="async" src="https://embed.filekitcdn.com/e/ekndSb6aixDTy6CAJEGkrv/pqwRyXLTHodBmNohEhe4Yo" alt=""/></figure>



<h3 class="wp-block-heading"><strong>In Other News</strong>: AI Vibe Reporting</h3>



<p><em>I’m experimenting with including a section like this more often, in which I briefly discuss news relevant to technology, distraction, and the fight for depth.</em></p>



<p>Judging by the increasing volume of distressed messages I now receive from people I know, the quantity of <a href="https://calnewport.com/the-dangers-of-vibe-reporting-about-ai/">​AI vibe reporting​</a> out there is on the rise. I want to help you navigate this media landscape without becoming unnecessarily worried. With this in mind, let&#8217;s tackle a case study. Last week, <em>The Atlantic</em> published a vibe-filled article titled <a href="https://www.theatlantic.com/ideas/2026/02/ai-white-collar-jobs/686031/">​“The Worst-Case Future for White-Collar Workers.”​</a> I want to take a critical look at several quotes from this piece:</p>



<ul class="wp-block-list">
<li><strong>“[T]he labor market for office workers is beginning to shift. Americans with a bachelor’s degree account for a quarter of the unemployed, a record.”</strong> Clearly, the intention here is to imply that this trend is caused by AI eliminating knowledge work jobs. But we have no solid evidence that these two issues are related. Indeed, as <a href="https://www.employamerica.org/labor-market-analysis/dont-blame-ai-for-the-rise-in-recent-graduate-unemployment/">this critique notes</a>, the decline in jobs for college grads began <em>well before</em> the more recent generative AI revolution.</li>



<li><strong>“Occupations susceptible to AI automation have seen sharp spikes in joblessness.”</strong> This is classic vibe reporting. The author doesn’t <em>directly</em> say that joblessness spikes are due to AI automation – carefully read how she words the sentence – but she clearly wants to <em>imply</em> that it’s true. This implication, however, is not currently supported by the evidence. As I’ve reported, job reductions in the tech sector <a href="https://calnewport.com/the-dangers-of-vibe-reporting-about-ai/">are better explained</a> by corrections to over-hiring during the pandemic. Something like this is happening <a href="https://www.moreaboutadvertising.com/2026/02/omar-oakes-an-exodus-in-advertising-something-doesnt-add-up/">in the advertising world</a> as well. On Friday, Cade Metz published <a href="https://www.nytimes.com/2026/02/20/technology/ai-coding-software-jobs.html">an article</a> in the <em>Times</em> that made a similar point.</li>



<li><strong>“Businesses really are shrinking payroll and cutting costs as they deploy AI.”</strong> Another classic vibe reporting technique: this sentence implies the shrinking payroll is <em>due</em> to AI deployments. But in most cases, these are unrelated. Lots of companies are deploying some sort of AI product for their employees. Some of these companies are also shrinking their payroll (especially those that over-hired during the pandemic). This doesn’t mean one causes the other. This is the classic <em>post hoc ergo propter hoc</em> fallacy.</li>



<li><strong>“In recent weeks, Baker McKenzie, a white-shoe law firm, axed 700 employees, Salesforce sacked hundreds of workers, and the auditing firm KPMG negotiated lower fees with its own auditor.”</strong> By placing these specific examples of shrinking payroll immediately after discussions of AI automation, the author once again implies, without a direct claim, that these job losses were <em>due</em> to AI. But let’s look closer. Consider Salesforce: They did indeed lay off around 1,000 workers earlier this month, but not because they automated these jobs using AI. It was instead the result of a restructuring aimed at combining their Agentforce and Slack products under a single executive. Here’s how one close observer of the company <a href="https://www.salesforceben.com/salesforce-lays-off-nearly-1000-employees-in-early-2026-cuts/">described it</a>: <em>“Cross-team layoffs like these are not unusual for a company of Salesforce’s size, especially at this time of year, before announcing end-of-fiscal-year earnings.”</em></li>
</ul>



<p>What’s actually going on with AI and jobs? Generative AI might very well create broad disruptions in the job market. But we’re not there yet. The first major shift will likely occur in software development, but its magnitude remains unclear. (More on this soon: I’m in the middle of a reporting project in which I’ve now heard from over 300 computer programmers about how they’re currently using AI; tl;dr: <em>it’s complicated!</em>)</p>



<p>In the meantime, however, the actual stories related to AI are important enough on their own. We don&#8217;t need reporters working backward to support trends that they feel should be true.</p>



<p>(<em>To be clear:</em> The rest of the article is quite good. It explores, more hypothetically, how the government could respond to massive economic disruptions, and it’s written by a journalist whom I respect and who knows a lot about that topic. It’s worth reading! Just don’t get freaked out by the vibe reporting in the opening section.)</p>



<p>The post <a href="https://calnewport.com/film-students-can-no-longer-sit-through-films/">Film Students Can No Longer Sit Through Films</a> appeared first on <a href="https://calnewport.com">Cal Newport</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://calnewport.com/film-students-can-no-longer-sit-through-films/feed/</wfw:commentRss>
			<slash:comments>8</slash:comments>
		
		
			</item>
	</channel>
</rss>

