<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Reid Hoffman</title>
	<atom:link href="https://www.reidhoffman.org/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.reidhoffman.org/</link>
	<description>Entrepreneur. Investor. Strategist.</description>
	<lastBuildDate>Thu, 10 Jul 2025 23:36:44 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://www.reidhoffman.org/wp-content/uploads/2017/12/cropped-rh_fav-2-32x32.png</url>
	<title>Reid Hoffman</title>
	<link>https://www.reidhoffman.org/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Move fast and make things: the new career mantra</title>
		<link>https://sfstandard.com/opinion/2025/06/15/move-fast-and-make-things/#new_tab</link>
		
		<dc:creator><![CDATA[Reid Hoffman]]></dc:creator>
		<pubDate>Sun, 15 Jun 2025 21:21:58 +0000</pubDate>
				<category><![CDATA[Entrepreneurship]]></category>
		<category><![CDATA[Intellectual Life]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[career strategy]]></category>
		<guid isPermaLink="false">https://dev-reidh.pantheonsite.io/?p=104135</guid>

					<description><![CDATA[Reid Hoffman has some advice for graduates entering a workforce ruled by AI. I don’t envy commencement speakers this year. Colleges just held their graduation ceremonies, letting loose thousands of young people looking for jobs at the exact moment that some AI industry leaders are predicting white-collar bloodbaths. Even the most inspirational advice lands like a&#8230;]]></description>
										<content:encoded><![CDATA[<p><strong>Reid Hoffman has some advice for graduates entering a workforce ruled by AI.</strong></p>
<p>I don’t envy commencement speakers this year. Colleges just held their graduation ceremonies, letting loose thousands of young people looking for jobs at the exact moment that some AI industry leaders are predicting <a href="https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic">white-collar bloodbaths</a>. Even the most inspirational advice lands like a Band-Aid on a bullet wound.</p>
<p>For recent graduates, the stakes are particularly high. In April, the New York Federal Reserve <a href="https://www.newyorkfed.org/research/college-labor-market#--:explore:unemployment">reported</a> that “the labor market for recent college graduates deteriorated in the first quarter of 2025.” According to Dario Amodei, CEO of Anthropic, entry-level workers will feel the brunt of that white-collar bloodbath, with half of such jobs potentially disappearing in the next five years.</p>
<p>But I actually think graduates have reason to be excited. I’m more bullish than most on the future of human labor. AI is reshaping how value is created. As the race for efficiency and scale heats up, jobs will be cut, industries will vanish, and job losses may outpace opportunities — at least for now.</p>
<p>At the same time, the best way to minimize the effects of workplace disruption is to explore the opportunities that rapid change creates. While it’s rational to look for ways to AI-proof one’s future, it’s also insufficient. What you really want is a dynamic career path, not a static one. Would it have made sense to internet-proof one’s career in 1997? Or YouTube-proof it in 2008? When new technology starts cresting, the best move is to surf that wave.</p>
<p>Here’s where the young ’uns can actually excel. College grads (and startup companies, for that matter) almost always enjoy an advantage over their senior leaders when it comes to adopting new technology. Few top-tier print journalists thought it was smart to post their first drafts online, for free, so anonymous trolls could roast them without constraints. Yet Twitter became an essential news tool, and the journalists who mastered the platform built careers on it. No steadily employed SAG actors foresaw that cutting out directors and union-scale day rates to record one-take product unboxings under the glow of a $30 ring light would be the true path to 21st century fortune and influence, yet MrBeast earns more than some film studios.</p>
<p>So if you’re a recent graduate, I urge you not to think in terms of AI-proofing your career. Instead, AI-optimize it. Take advantage. AI is a tool you can master.</p>
<p>How? It starts with literacy, which goes well beyond prompt engineering and vibe-coding. You should also understand how AI redistributes influence, restructures institutional workflows and business models, and demands new skills and services that will be valuable. The more you understand what employers are hiring for, and the reasons why, the more you’ll understand how you can get ahead in this new world.</p>
<p>Conventional wisdom says the most valuable human attributes in the AI age are human qualities that AI cannot replicate: emotional intelligence, ethical discernment, and creative expression.</p>
<p>I’d like to add one more human quality to that list: intention. People with the capacity to form intentions and set goals will emerge as winners in an AI-mediated world. As OpenAI CEO Sam Altman <a href="https://x.com/sama/status/1870527558783218106?lang=en">tweeted</a> in December, “You can just do things.”</p>
<p>While evidence suggests it’s getting harder to find a first job, it has never been easier to create a first opportunity. Since billions of people have access to the same tools and platforms and information you do, the competition will be intense. But it always has been for the best jobs.</p>
<p>Pick projects that let you show off your specific skills. Try lots of things. Instead of making five-year plans, consider six-month experiments. With the right tools, you can now do what used to require teams: create content and brands, generate and test marketing campaigns, write code, and design products.</p>
<p>Look for a niche you find meaningful or a problem you think it’s important to solve. Learn in public: Let people see the journey you’re on. Share the process, reflect, and repeat. The ability to self-start, to build and act without credentials, is ultimately what will get you hired. Even better, it’s not just a way in — it’s a way forward, a capacity you will continue to draw upon across all aspects of your life, no matter what shape the next wave of change takes.</p>
<p>It’s also what will help you create opportunities for others, and that’s never been more crucial. The uncertainty of professional life in an era of accelerating automation and the dawn of AI agents means that personal relationships have never been more valuable. Human referrals and trust can’t scale the way AI can, so your personal network becomes one of your most durable assets.</p>
<p>So become AI fluent but focus on people. Foster even more relationships. Technology is a powerful lever for progress because of how it enables widespread cooperation and collaboration and democratizes superpowers. Writing spread knowledge; printing presses brought it to the masses. The automobile gave millions the power of mobility that far exceeded what kings and emperors once commanded.</p>
<p>If we settle for anything less with AI, we’ll be failing to act on what’s possible. For new graduates, this is not just a time to survive disruption but one to shape what comes next. Define your career not by what AI makes obsolete but by what you choose to build with it. Be the person who moves early, learns fast, and brings others with you. AI gives us all a chance to be main characters in a story that’s still being written. But it’s no longer enough to be the star of your own narrative. You also have to direct.</p>
]]></content:encoded>
	</item>
		<item>
		<title>No, AI isn’t coming to destroy us. But it will transform the world</title>
		<link>https://sfstandard.com/opinion/2025/04/13/ai-isnt-coming-to-destroy-us-but-it-will-transform-the-world/#new_tab</link>
		
		<dc:creator><![CDATA[Reid Hoffman]]></dc:creator>
		<pubDate>Sun, 13 Apr 2025 21:20:27 +0000</pubDate>
				<category><![CDATA[Entrepreneurship]]></category>
		<category><![CDATA[Intellectual Life]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<guid isPermaLink="false">https://dev-reidh.pantheonsite.io/?p=104133</guid>

					<description><![CDATA[Reid Hoffman argues that artificial intelligence will do something smarter than world domination — it will become an essential part of everyday life. Because the scale of change AI is poised to deliver is so great, our desire to know exactly what’s coming and when is equally urgent. It’s also inevitably speculative. Various industry leaders&#8230;]]></description>
										<content:encoded><![CDATA[<p>Reid Hoffman argues that artificial intelligence will do something smarter than world domination — it will become an essential part of everyday life.</p>
<p>Because the scale of change AI is poised to deliver is so great, our desire to know exactly what’s coming and when is equally urgent.</p>
<p>It’s also inevitably speculative.</p>
<p>Various industry leaders and pundits have recently asserted that artificial general intelligence — a hypothetical stage at which a computer can perform any cognitive task at a human level — will arrive within three years. Instead of speculating on AGI, I frame AI’s progress with a different question.</p>
<p>Will AI be significantly more useful, more powerful, and more integrated into every aspect of life three years from now?</p>
<p>The answer is: yes, of course.</p>
<p>For proof, consider recent history. In the spring of 2022, OpenAI released key updates to GPT-3 that dramatically boosted the model’s coherence compared with previous versions. Still, the lack of an accessible interface meant that most people had a better chance of naming all nine Supreme Court justices than explaining what GPT-3 was or how they might use it.</p>
<p>Six months later, OpenAI released ChatGPT and changed everything. With a simple chat interface, OpenAI made advanced language models usable by virtually anyone. This catalyzed a new era of wide deployment, rapid innovation, and accelerating feedback loops. It also fueled a flood of investment and interest across sectors, making “AI assistant” functionality a default expectation in software, from word processors to customer service portals.</p>
<p>The question was no longer whether large language models could be useful but how quickly they might transform the economy. Serious conversations about AGI became less hypothetical. Following the release of GPT-4 in March 2023, predictions that AGI might arrive within two or three years began to move from the fringes to the mainstream.</p>
<p>That hasn’t come to pass yet, but consider all that has. In 2022, AI models could generate images or process text, but they couldn’t do both in a seamless unified workflow. Today, multimodal systems like GPT-4 and Gemini integrate text, vision, and audio capabilities in ways that make interactions feel more natural and even trivial — which, as the TV remote control taught us, is exactly how massive transformation starts.</p>
<p>In 2022, models forgot everything from one session to the next. Today, memory features allow continuity across conversations and tasks, and AI increasingly adapts to you — following your instructions, fine-tuning on the documents and other media you give it, and performing workflows that once required juggling multiple tools or doing everything yourself.</p>
<p>GPT-3’s context window in 2022 was just 2,048 tokens, or approximately 1,500 words, meaning it could “remember” only a few pages of prior text within a single interaction. That was enough for answering basic questions or maintaining short conversations, but it often lost the thread in longer interactions or failed to connect ideas across sections. In effect, it had the memory of a goldfish.</p>
<p>Many of today’s best models have, metaphorically, the memory of an elephant — or even a small herd of them. Google’s Gemini 2.0 Flash has a 1 million-token context window. Llama 4, Meta’s newest model, has 10 million. With this exponential increase in capacity, these models can easily process and analyze multiple books and technical manuals in a single prompt. They can track long, multistep conversations without losing the thread and even work through detailed legal contracts or computer source code while preserving coherence and relevance over hundreds of pages.</p>
<p>If you’re a writer, you can feed a full book-length draft into a model, then ask it to identify inconsistencies in argument structure, tone, or factual claims. If you’re a software engineer, you can drop an entire codebase into a prompt and debug a persistent issue in a single pass. What’s changed isn’t just how well a model can “think.” It’s how much it can think about at once.</p>
<p>All of these attributes are essential for making AI feel truly adaptive, personalized, and context-aware.</p>
<p>Put memory and multimodality together, and you get more than just an incremental upgrade. Today’s AIs are already fundamentally different from their recent predecessors in how they process inputs, handle context, and track time. Even without exponential leaps in underlying pattern-matching capabilities, the shift is underway from autocomplete to coauthor, from database to chatbot to trusted confidante and creative partner.</p>
<p>Imagine what happens when multimodal models can ingest and even generate video during interactions. Current models simulate empathy and tone based on prompts. But with better conversational memory and the ability to detect affect via vocal inflection, facial expressions, word choice, and syntax, future AIs will be able to effectively respond to your varying emotional states — if you desire that.</p>
<p>Thanks to such advances, interacting with AI in 2028 will feel as qualitatively different from today as today does from 2022. Even if we don’t reach AGI in the sci-fi sense, we’ll be living in a world that feels increasingly like science fiction. More of us will utilize machines to effectively manage our side hustles. We’ll rely on AI health advisers that schedule appointments based on subtle signs — before we even notice any symptoms. We’ll watch DIY blockbusters we cowrote with our digital doubles.</p>
<p>In 2025, you may think you can easily live without turning your selfies into Ghibli portraits or having an AI summarize your meeting notes. In 2028, going an entire day — or even a few hours — without engaging with an AI will be as inconceivable as going without your phone or internet is now.</p>
]]></content:encoded>
	</item>
		<item>
		<title>A.I. Will Empower Humanity</title>
		<link>https://www.nytimes.com/2025/01/25/opinion/ai-chatgpt-empower-bot.html#new_tab</link>
		
		<dc:creator><![CDATA[Reid Hoffman]]></dc:creator>
		<pubDate>Sat, 25 Jan 2025 22:17:30 +0000</pubDate>
				<category><![CDATA[Entrepreneurship]]></category>
		<category><![CDATA[Intellectual Life]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<guid isPermaLink="false">https://dev-reidh.pantheonsite.io/?p=104131</guid>

					<description><![CDATA[I recently learned of a new way people are using artificial intelligence. “Based on everything you know about me,” they ask ChatGPT, “draw a picture of what you think my current life looks like.” Like any capable carnival mind reader, ChatGPT appears to mix safe bets with more specific details. It often produces images of&#8230;]]></description>
					<content:encoded><![CDATA[<p>I recently learned of a new way people are using artificial intelligence. “Based on everything you know about me,” they ask ChatGPT, “draw a picture of what you think my current life looks like.”</p>
<p>Like any capable carnival mind reader, ChatGPT appears to mix safe bets with more specific details. It often produces images of people sitting in a home office with a computer. Perhaps an acoustic guitar sits in the corner or an orange cat prowls in the background. But also on occasion something like, say, a large head of broccoli will be sitting in the middle of the desk.</p>
<p>Off-kilter elements like that are what give these portraits not just their quirky charm but also flashes of epiphany. By absorbing the wide-ranging mix of work questions, personal goals and everything else that makes up our ChatGPT history, the system teases out patterns and connections that may not be readily apparent. In this way, these portraits don’t just reflect. They also reveal. Presented with such depictions, a user may be compelled to ask: Am I really mentioning cruciferous vegetables in my chats so often that ChatGPT thinks they’re a central part of my life?</p>
<p>As a board member at Microsoft and an early funder of ChatGPT’s developer, OpenAI, I have a significant personal stake in the future of artificial intelligence. But my stake is more than just financial. I truly believe that by giving billions of people access to A.I. tools they can use in whatever ways they choose, we can create a world where A.I. augments and amplifies human creativity and labor instead of simply replacing it.</p>
<p>That’s why I find these ChatGPT portraits so fascinating: They clarify and dramatize enduring concerns about identity and privacy in the digital age. How much exactly is ChatGPT remembering? they implicitly ask. How judiciously is it processing these memories, and who benefits most when it does? As a user of these technologies, do you sense that you’re being monitored in ways that make you feel as if you’re being exposed, controlled and manipulated? Or do you feel seen?</p>
<p>Few truly powerful technologies come without any risks. Perhaps third parties with different motives and values from your own will somehow gain access to the data. Once made aware of your past patterns, these third parties might be able to effectively anticipate and influence your future decisions. While I recognize that some people see such risks as disqualifying, what I’ve found through my own experiences is that sharing more information in more contexts can also improve people’s lives.</p>
<p>In our concern about potential harms, it can be easy to overlook the many positive effects technology has had. I co-founded LinkedIn, a professional social network, more than two decades ago, but I still get a steady flow of missives from people who have found jobs, started businesses or made promising career changes because of interactions they’ve had on the platform. And this is all because they’re willing to share information about their work experiences and skills in ways that were once considered both imprudent and impractical.</p>
<p>Tech skeptics have long used the adjective “Orwellian” to cast everything from a video recommendation feature to turn-by-turn navigation apps as threats to individual autonomy, but the history of technological innovation in the 21st century tells a different story. In “1984,” George Orwell’s classic novel of state oppression, powerful telescreens enable a totalitarian regime to rule over dispossessed proles with unchecked omnipotence. But today we live in a world where individual identity is the coin of the realm — where plumbers and presidents alike aspire to be social media influencers and cultural power flows increasingly to self-made operators, including the one-man podcasting empire Joe Rogan, the YouTube megastar MrBeast and the human rights activist Malala Yousafzai.</p>
<p>I believe A.I. is on a path not just to continue this trend of individual empowerment but also to dramatically enhance it.</p>
<p>Imagine A.I. models that are trained on comprehensive collections of your own digital activities and behaviors. This kind of A.I. could possess total recall of your Venmo transactions and Instagram likes and Google Calendar appointments. The more you choose to share, the more this A.I. would be able to identify patterns in your life and surface insights that you may find useful.</p>
<p>Decades from now, as you try to remember exactly what sequence of events and life circumstances made you finally decide to go all-in on Bitcoin, your A.I. could develop an informed hypothesis based on a detailed record of your status updates, invites, DMs, and other potentially enduring ephemera that we’re often barely aware of as we create them, much less days, months or years after the fact.</p>
<p>When you’re trying to decide if it’s time to move to a new city, your A.I. will help you understand how your feelings about home have evolved through thousands of small moments — everything from frustrated tweets about your commute to subtle shifts in how often you’ve started clicking on job listings 100 miles away from your current residence.</p>
<p>For those who choose to pursue this new reality, the tools that make it possible are multiplying and evolving rapidly. Developers of all sizes have been introducing apps and features that enable you to automatically record, store and analyze virtually anything — or everything — you do on your PC, phone and other devices. In doing so, they turn such data into the material for a de facto second self, one that can endow even the most scatterbrained among us with a capacity for revisiting the past with a level of detail even the novelist Marcel Proust might envy.</p>
<p>There’s more to this shift. While critics of Big Tech often emphasize how A.I. can empower corporations to use people’s data for manipulation or discrimination, we can also deliberately design A.I. to give individuals greater facility to derive insights from their own data. What if you had an A.I. that could analyze your browsing patterns and alert you when advertising algorithms were successfully manipulating your purchasing decisions? Or one that could detect when social media algorithms were steering your attention toward increasingly extreme content?</p>
<p>Do we lose something of our essential human nature if we start basing our decisions less on hunches, gut reactions, emotional immediacy, faulty mental shortcuts, fate, faith and mysticism? Or do we risk something even more fundamental by constraining or even dismissing our instinctive appetite for rationalism and enlightenment?</p>
<p>To some degree, we all self-track and always have. We make to-do lists and keep journals of our daily activities. We weigh ourselves and record our daily steps or the number of miles we jog, generally in pursuit of some kind of self-improvement or at least self-awareness. Ultimately, ongoing cycles of reflection, action, assessment and refinement are how humanity progresses and expands what it even means to be human.</p>
<p>So imagine a world in which an A.I. knows your stress levels tend to drop more after playing World of Warcraft than after a walk in nature. Imagine a world in which an A.I. can analyze your reading patterns and alert you that you’re about to buy a book where there’s only a 10 percent chance you’ll get past Page 6.</p>
<p>Instead of functioning as a means of top-down compliance and control, A.I. can help us understand ourselves, act on our preferences and realize our aspirations. In this way, perfect recall isn’t just a tool for remembering the past. It’s also a compass that provides a clearer understanding of our goals and improves our decision-making. It transforms our digital trails from passive records of who we were into dynamic resources, empowering us to shape who we wish to become — with greater self-awareness and freedom to live lives of our own choosing.</p>
]]></content:encoded>
	</item>
		<item>
		<title>Don’t fear AI: used well, it can empower us all</title>
		<link>https://www.thetimes.com/comment/columnists/article/dont-fear-ai-used-well-it-can-empower-us-all-hpzrg9xsd#new_tab</link>
		
		<dc:creator><![CDATA[Reid Hoffman]]></dc:creator>
		<pubDate>Fri, 27 Dec 2024 22:15:38 +0000</pubDate>
				<category><![CDATA[Entrepreneurship]]></category>
		<category><![CDATA[Intellectual Life]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<guid isPermaLink="false">https://dev-reidh.pantheonsite.io/?p=104129</guid>

					<description><![CDATA[New technologies will bring change as immense as the Industrial Revolution and it’s natural for us to be wary — but engaging now will make our lives better, writes Reid Hoffman In Two Concepts of Liberty, his first lecture as Oxford’s Chichele Professor of Social and Political Theory in 1958, Sir Isaiah Berlin showed us that&#8230;]]></description>
					<content:encoded><![CDATA[<p>New technologies will bring change as immense as the Industrial Revolution and it’s natural for us to be wary — but engaging now will make our lives better, writes Reid Hoffman</p>
<p>In <i>Two Concepts of Liberty</i>, his first lecture as Oxford’s Chichele Professor of Social and Political Theory in 1958, Sir Isaiah Berlin showed us that the richness of human experience lies in the tensions and contradictions arising out of human values and their irreducible plurality.</p>
<p>In that lecture he explored two concepts — negative liberty and positive liberty. Negative liberty being the freedom from external constraints and interference; positive liberty being the freedom to take actions that can help one realise one’s full potential. I believe we should keep these concepts front and centre as we try to understand the opportunities, risks, obligations and impacts of AI.</p>
<p>Because ultimately I believe we’re in the midst of a new steam power moment. And by that I mean a revolutionary technological breakthrough that expands what it means to be human. A breakthrough that will change how we think about freedom, autonomy, agency and other key aspects of the human condition.</p>
<p>In <i>Two Concepts of Liberty</i>, Berlin emphasised how “value pluralism” — or the idea that values are relational and often in competition with each other — is a phenomenon we must account for when theorising about the nature and limits of political freedom. Prioritising safety, for example, can diminish liberty. Prioritising innovation can diminish stability.</p>
<p>And yet, even though values like these are clearly relational, in constant flux, we still often think of them as timeless, natural phenomena. Consider the symbols we use to invoke their essence: the Union Jack, Old Glory, <a href="https://www.thetimes.com/article/rebrand-aims-to-raise-interest-in-treasure-of-london-archives-qd5f536wr">Magna Carta</a>, the Liberty Bell. So unchanged and enduring, as if liberty and freedom themselves are fixed in a vacuum of absolute truth. But they’re not. They’re fundamentally dynamic constructs, shaped by politics and historical contexts but most of all, I believe, by technology.</p>
<p>If we acknowledge the extent to which new technologies expand and redefine how we experience essential human values like freedom and agency, we put ourselves in a much better position to design these technologies in ways that maximise human flourishing. But while today’s breakthrough technologies create tomorrow’s freedoms, we often greet these breakthroughs as threats to individual liberty and autonomy — because our current idea of freedom is mostly constructed by what previous technologies have enabled. This is very much the case for AI.</p>
<p>In helping us to understand these ideas, and how we can design AI tools in a way that synthesises negative liberty and positive liberty, I offer a new word: superagency.</p>
<p>Superagency, as I define it, is what happens when large numbers of people get access to a transformative, general-purpose technology that they’re free to use as they wish. When that happens, individuals get new superpowers to apply to their lives in unrestricted, inventive and personally relevant ways. And because so many other people have new superpowers too, new capabilities and adaptations cascade through society, endowing every individual with a multitude of second-order benefits.</p>
<p>In the early days of the automobile, for example, doctors could suddenly make more house calls per day. They could expand the territory they covered. This made doctors more personally productive and also helped everyone they served. In the early days of the world wide web, it suddenly became possible to make your essays or your shareware programs accessible to a global audience — a huge boost in your individual agency and powers of personal expression. Just as consequentially, though, millions of others were doing the same, so knowledge of all kinds became far more accessible.</p>
<p>With AI, I believe we’re heading for our biggest superagency moment since the advent of steam power. To get there, though, we, as a society, have to make big and complex choices and most of them involve competing visions of freedom. For example, the freedom to express whatever you want online versus the freedom to engage in democratic or other civil dialogue without having to pay the price of constant harassment. The freedom to advocate for the deregulation of everything, right after we re-regulate borders, free trade and reproductive freedom.</p>
<p>These aren’t just partisan clashes, they reflect something essential about freedom. As Berlin put it in his lecture: “Freedom for the pike is death for the minnows…. Freedom for an Oxford don is a very different thing from freedom for an Egyptian peasant.”</p>
<p>The pike wants absolute freedom to hunt whenever it suits it. The minnow makes a principled case for the right to swim without constant fear of hungry pikes. These freedoms cannot exist fully together. Nor is there a perfectly rational way to balance or optimise this clash of values. So the best we can do is seek imperfect and dynamic compromises: some risk for the minnows, some constraints on the pike.</p>
<p>Every society where competing visions of the good life coexist requires both negative liberty and positive liberty to thrive — the former to create zones of autonomy, free will and experimentation, free from external interference; the latter to provide frameworks and resources, like public utilities, or law enforcement agencies, or education systems, that can help enable people to pursue and fulfil their potential. Which ultimately means that the work of civilisation involves trade-offs.</p>
<p>When truly novel and powerful technologies like <a href="https://www.thetimes.com/article/how-ai-helps-small-newcomers-compete-with-the-giants-enterprise-network-mqbgvwbqt">AI</a> appear, defensive reflexes tend to kick in. Consider fitting an imaginary city with even more CCTV cameras than Britain already has. Cameras everywhere, and they’re all augmented with facial recognition software, microphones, licence plate readers and behavioural monitoring algorithms. Buildings augmented with sensors tracking entry, exit, occupancy and movement. Smart poles collecting data on everything from air quality to pedestrian foot traffic.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">The most optimistic version of this vision is a network of devices that make intelligent and liberating use of all the data they collect. A traffic camera linked to AI doesn’t just record violations — it can adapt traffic light patterns in real time to optimise flow so you spend less time idling at intersections. A smart building’s environmental sensors don’t just measure temperature — they learn occupants’ preferences and anticipate their needs.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">The dystopian version, meanwhile, is straight out of <a class="link__RespLink-sc-1ocvixa-0 csWvlP" href="https://www.thetimes.com/article/big-brother-watches-the-hong-kong-bookshops-defying-beijing-0k7tgzlbc">George Orwell</a>’s <i>1984</i>. The goal is coercion, compliance, control.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">But what about versions that attempt to manage the probable trade-offs of pluralism with a more even hand? Such a city, for example, might be said to represent a very clear triumph of positive liberty. Even a blind senior citizen, walking home alone at night and wearing an expensive diamond necklace or watch, would be able to access that city’s resources and amenities with a strong sense of security. At the same time, this imaginary city would also necessarily minimise privacy and liberty — or, at the very least, freedom from many different kinds of constraints.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">This is what I presume Berlin would describe as a collision between genuinely good things. Because who doesn’t want “virtually zero crime”? But also, who doesn’t want “sufficient privacy in public”?</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">As our technologies grow more powerful and capable of acting in autonomous ways themselves, the scenarios where we might deploy them are going to multiply. And thus create more and more instances where values will clash.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">Take driver alcohol detection systems. These use touch and breath-based sensors embedded in various parts of a car’s interior to passively measure a driver’s blood alcohol level. These sensors don’t incorporate AI themselves. But your car could also be equipped with cameras and additional sensors that do use AI to analyse things like posture, grip patterns and airflow directions. In this way, they can help confirm that it’s the driver, rather than any passengers who might also be in the car, whose blood-alcohol content is being measured. If your car says you’re over the legal limit, it refuses to start. So you can either call an Uber or stay put until it determines you’re good to go.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">It turns out there was legislation passed in the US that could make systems like this mandatory on all new cars as early as 2026. For various reasons, it may not happen that quickly. Or ever. We might just make the leap to fully autonomous vehicles before we see cars with this feature become the federally mandated rule of the road.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">On the other hand, it’s also easy to envisage this functionality eventually showing up in more limited ways. For example, in delivery trucks or rental cars or just as an option you choose with your insurer, to get a discount. In fact, I think mechanisms like these are most likely to appear first as terms of service or contractual agreements, not public laws.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">Another example like this involves a company called MSG Entertainment, which operates Madison Square Garden, the large indoor arena in New York City. For several years now, MSG Entertainment has implemented a controversial policy that uses facial recognition technology to identify and deny entry to attorneys who work at law firms in litigation against it. As patrons enter Madison Square Garden or another iconic venue MSG Entertainment owns, Radio City Music Hall, their faces are scanned and compared against a database. If they match to an attorney on the ban list, that person is refused entry, even if they have a valid ticket.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">The Madison Square Garden policy has already prompted numerous lawsuits, and continuing debates about the appropriate balance between property rights and public accommodation. In the coming years, instances like these will become more common. And clearly they represent a significant paradigm shift. One where what can be described as “perfect control” becomes increasingly possible.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">In this scenario, whatever the policy is, that policy is enforced, every time. And suddenly, you, an individual human, have neither negative liberty nor positive liberty. AI is making all the choices.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">It’s not just that CCTV networks equipped with facial recognition might function as a powerful way to reduce or even entirely eliminate muggings, assaults, and other kinds of crimes that I’m sure most people are broadly in favour of reducing or eliminating. At some point, as the technologies evolve, any instance of jaywalking at rush hour could result in an automatic fine, in just the way that speeding through a red-light camera does now. Noise violations, off-leash pets, public intoxication: all of these things could effectively become zero-tolerance offences.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">If you’re like me, a society actually operating in this fashion might strike you as absurd, intolerable, even inhumane. And yet the laws that prohibit these various actions already exist. Presumably we’re supposed to obey them. So I imagine there may be people who are likely to favour this more exacting form of enforcement. All of which emphasises the point that as AI evolves, it will continue to compel us to consider and even redefine how we think about essentially human values such as freedom, autonomy, privacy and agency.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">Which trade-offs will we opt for? And how do we move forward productively in this environment — to create the best possible world for us as individuals, yes, and also as members of a community? This is where we come to the importance of designing and deploying AI tools that prioritise individual agency.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">Questions about individual agency lie at the heart of most of the major concerns about AI. For example, questions about job displacement are questions about individual human agency: will I have the economic means to support myself, and opportunities to engage in pursuits I find meaningful? As are questions about disinformation and misinformation: how do I know whom and what to trust as I make decisions that impact my life?</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">For the first time ever, synthetic intelligence, not just knowledge, is becoming as flexibly deployable as synthetic energy has been since the rise of steam power. Intelligence is now a tool — a scalable, highly configurable, self-compounding engine for progress. But who gets to use that tool and in what contexts?</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">When OpenAI, the company that developed ChatGPT, launched in 2015, two of its co-founders posted an essay introducing their mission. “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible,” they wrote. In essence, they were making a case for both negative liberty and positive liberty. AI tools, they suggested, should be extensions of individual human wills, not extensions of states or corporations. Individuals should be able to access and use these tools themselves, without external actors prohibiting or mandating their use.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">Why is this specific framing so important? On this, we can look once again to Berlin. While he made the case that both negative and positive liberty are necessary for a balanced and flourishing society, he also expressed some key reservations about positive liberty. Specifically, he worried that positive liberty — and its emphasis on fulfilling one’s potential — could be co-opted in authoritarian ways.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">After all, if there’s a better version of yourself that might be realised under certain optimal conditions, well, then, why should society leave it to you to implement the policies and take the actions that facilitate those conditions? Why should it trust you to make the right choices? We know you don’t always stop at stop signs or obey speed limits.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">This, in other words, is the road Berlin warned against, because of how it might lead to tyranny. It’s also the road ChatGPT is designed to avoid.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">In broad ways, I think of tools like ChatGPT as a new form of informational GPS. A navigation app might be telling you to go left, but you can still choose to go straight if, for whatever reason, you think that’s the best choice. When that happens, the app adjusts to your decision and recalibrates itself. It still tries to take you to the destination you’ve chosen, but you can always keep making choices of your own too. And that’s how ChatGPT works as well. It provides informational guidance, but you can always re-orient or redirect it simply by giving it new instructions.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">In addition to preserving autonomy, hands-on AI tools activate superagency. Which, to revisit the definition I gave earlier, is what happens when millions of people start using powerful new tools in self-directed ways — and new competencies, innovation and generativity start cascading throughout society. In this way, a doctor using AI to make better diagnoses isn’t just fulfilling their own potential — they’re creating better health outcomes for their entire patient population. A teacher leveraging AI to personalise learning isn’t just becoming a better educator — they’re elevating the educational experience for every student they teach.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">But crucial as it is to preserve space for individual agency in how an AI works, it’s equally crucial that we apply AI’s power in more centralised ways. We can all benefit, individually and collectively, from AI’s pattern recognition capabilities to track emerging infectious diseases and co-ordinate rapid public health responses across continents; and from AI’s analytical capabilities to help manage increasingly scarce resources like water and arable land in ways that serve entire populations equitably.</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">But we also know this is an era of increasingly polarised publics, where value pluralism reigns supreme. So how can nation states hope to achieve the social cohesion that’s needed to undertake big, ambitious, and potentially divisive projects, using complex new technologies that many people have major concerns about?</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">I believe the best way to do that is to continue what we’ve been doing since ChatGPT’s release two years ago: give people opportunities to use AI directly, in ways they find meaningful. Because in the end, what are you likely to trust more? Some abstract new technology government experts decide to unilaterally introduce without much — or even any — input from you? Or a technology you have a growing personal connection to, because you regularly use a form of it?</p>
<p class="responsive__Paragraph-sc-1pktst5-0 gaEeqC">When people use ChatGPT to automatically personalise their CV to dozens of different job ads, or teach their child fractions, or understand their ageing parent’s medical diagnosis better, they develop both practical fluency and earned trust. And through hands-on engagement, we become more likely to appreciate the potential upsides of broader applications too. We see it enhancing our own lives, so we can imagine it working in institutional or public-sector scenarios too. “What’s in it for me?” leads naturally to “What’s in it for us?”</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI, Society, and Our World Order</title>
		<link>https://www.reidhoffman.org/ai-society-and-our-world-order/</link>
		
		<dc:creator><![CDATA[Reid Hoffman]]></dc:creator>
		<pubDate>Tue, 10 Dec 2024 05:03:27 +0000</pubDate>
				<category><![CDATA[Intellectual Life]]></category>
		<guid isPermaLink="false">https://dev-reidh.pantheonsite.io/?p=103889</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<div class="wpb-content-wrapper"><div class="vc_row wpb_row vc_row-fluid vc_custom_1745029930207 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><i>This lecture was delivered at the London School of Economics in December 2024.</i> <i>For a video recording of the lecture, view <a href="https://www.youtube.com/watch?v=k2skgA3Z2Gk" target="_blank" rel="noopener">here</a>.</i></p>
<p>I’ve got Heraclitus on my mind today. And we will return to Heraclitus, even if we can never return to exactly the same point.</p>
<p>When I first visited LSE, I was a Masters student at Oxford, on a possible path to becoming a philosophy professor. It was the early 90s. Around then, the first text message was sent. Intel released its Pentium processors. And Tim Berners-Lee introduced the World Wide Web to the public.</p>
<p>Today, here with you, I’ve returned to LSE as a technologist, investor and founder. Globally, about 26 billion texts are sent daily. NVIDIA’s H100 GPUs are around 600,000 times more powerful than Pentium processors. And about 70% of the global population uses the internet.</p>
<p>My point is not just to time-capsule technological progress and my professional development, but to put on equal footing not only how different the world was, but how differently I saw it.</p>
<p>Heraclitus is right. One cannot step into the same river twice. Not only because the river is different, but because we, ourselves, are also changed.</p>
<p>As humans, we tend to project the way in which we understand the world as static. Or at least, we recognize change in the world more than change <em>in us and our understanding </em>of the world.</p>
<p>This gets to the heart of a question that people have grappled with since before human language. And that philosophers, from the pre-Socratics to Popper and beyond, have worked to answer:</p>
<p><em>How do we come to understand the world?</em><em> </em></p>
<p>It’s perhaps our most fundamental and human question. As many of you know, we have empiricists that root knowledge in our sensory experience, rationalists who derive it from reason and innate ideas, and idealists who argue it&#8217;s mediated by the mind. The list goes on; our schools of thought seem almost more varied than our thoughts themselves.</p>
<p>As a technologist and humanist, here’s what I’d add and emphasize: humans often overestimate how much our understanding of the world is from pure reason and perceptions, and underestimate how much it&#8217;s mediated by technology. Not only because of the role and power of technology itself, but because of who we are fundamentally.</p>
<p>We are more than Homo sapiens. If we merely lived up to this scientific classification and just sat around thinking all day, we’d be much different creatures than we actually are.</p>
<p>We humans are Homo techne: humans as toolmakers and tool users.</p>
<p>Technology can expand our vision. Quite literally, telescopes and microscopes help us see farther and deeper than we could otherwise. It transports us, whether by airplane, book, or video call. And it extends our life, such as through medicine or gene therapy. Things many of us have forgotten are technology—language, currency, wheels—underpin everyday life.</p>
<p>This is as important to who we are, as to who we’ll become. We evolve with and through our tools. We shape our tools. Then our tools shape us. In that exchange, our epistemology and metaphysics also evolve—our understanding of the world updates through technology.</p>
<p>And that phenomenon is never more true than with AI. What I have found bewildering is that this moment for AI—this current AI era—is going to bring as important an evolution in our epistemology and metaphysics as any other technology we’ve encountered to date.</p>
<p>Why? Well, consider just how central humans are to how an AI model is built. Our corpus of online human knowledge is ingested to build foundational models. Reinforcement training—or the ways models make decisions and craft outputs—is guided by interactions with us. What we prompt AI to generate is consumed by us, and blended back into humanity’s digital canon.</p>
<p>This shift now challenges how we have understood the world for millennia: through discussions with humans. In essence, the process of one human saying something they believe to be true about the world that garners the agreement of fellow humans. Now with AI, we have a super competent extension of people and application of our knowledge. Moreover, AI is not a mere static tool – we continue to improve how it learns and generates. How will we collectively use it to shape our understanding of the world?</p>
<p>This is a question for all of us. Because, of all our technologies, these foundational AI models might be the best technological approximation of us as a collective: the good, the bad, and the ugly. The full range of us, with all our commonalities and differences—especially as the access to AI for more people continues to grow. This makes discussions of how AI benefits society and humanity much more interesting, but also much more complicated.</p>
<p>And that’s what I want to focus on today: what AI means for society. On this topic, there are three important questions that I hope to address:</p>
<ul>
<li>Where does the value of technology stem from?</li>
<li>What might AI disrupt within society?</li>
<li>How might it change dynamics between societies?</li>
</ul>
<p>Let’s start with the first question about a generational technology like AI—and the origin of its value. This may be a good place to start, because it helps us gauge whether or not we have agency in defining its value, or if it’s instead an innately good or bad tool.</p>
<p>There are primarily two schools of thought. The first believes that technology is value-neutral. They hold that technology isn’t inherently good or ill—it’s about how people use it. The second school of thought is that technology is value-laden. That technology is inherently good or bad.</p>
<p>I believe it’s door three: a blend of both. Those who believe technology is value-neutral tend to overlook the ethical complexities inherent in technological development and deployment.</p>
<p>Consider cigarettes or the atomic bomb: both are products of human ingenuity, but their societal effects raise profound moral questions. Cigarettes, despite their economic contributions, have fueled a global public health crisis, while the atomic bomb fundamentally altered the fabric of geopolitics, introducing existential risk.</p>
<p>Let’s for a second imagine if Nazi Germany had developed the atomic bomb before the United States during World War II. The shape and deployment of that technology in a fascist regime would have had catastrophic consequences, likely remaking the post-war global order in ways antithetical to democracy and human rights. In this context, the value-neutral stance collapses.</p>
<p>But the value-laden approach isn’t quite right either. Advocates of this position may see AI as a panacea for humanity’s greatest challenges or, conversely, as a harbinger of dystopia. Yet this perspective is equally flawed. Technology is not a raw substance with immutable intrinsic properties of morality or utility; it can and must be shaped, refined, and integrated with human values to achieve specific outcomes.</p>
<p>Alfred Nobel’s dynamite is neither inherently constructive nor destructive. Humans use it when making tunnels and buildings, and when fighting wars. And this year, under Nobel’s name, AI pioneers have won his acclaimed prize for advancements in chemistry and physics. As wielders of AI, we determine—and can reward—the impact we want.</p>
<p>So, is technology value-neutral or value-laden? Neither. The truth is it’s value-sculpted. It has its initial and inertial properties as a technology, which we humans then whittle and carve. We shape it—not like clay, but like marble. It takes muscle, intention, and repetition. And we must respect and acknowledge its properties, while hewing it to our purpose.</p>
<p>And our sequence in sculpting matters, too. To start, taking a value-neutral approach—rooted in scientific rigor and factual verification—is essential in the early stages of technological development. We must examine the logical structures underlying AI, but rigorously revise our hypotheses and approach after it is in the hands of people. Iterative deployment—or inviting the public to participate in the development process for AI—accelerates this process. This overall approach mirrors the scientific method: systematic, objective, and methodical.</p>
<p>However, as AI integrates into society, the value-laden perspective becomes indispensable. We must ask: How can AI be shaped to prioritize human well-being, both now and in the future? How can it amplify our collective capabilities while minimizing harm? For instance, AI in healthcare should aim not only to diagnose diseases more accurately but to ensure equitable access to these advancements, irrespective of socioeconomic status.</p>
<p>The geopolitical implications of getting this sequenced value-neutral and value-laden approach right are significant. The societies that build and deploy transformative technologies like AI wield considerable influence over the global order. This underscores the geopolitical importance of AI: it is not merely a tool but a driver of power dynamics. Just as the printing press upended the religious and political structures of early modern Europe, AI has the potential to reshape economies, governance, and international relations.</p>
<p>History reminds us, however, that transitions catalyzed by transformative technologies are rarely smooth. Again, the printing press, while enabling unprecedented dissemination of knowledge, also precipitated decades of religious conflict. AI, too, will bring disruption. Yet, just as the printing press ultimately became indispensable, AI can create a more interconnected and empowered global society—if we manage its transition wisely.</p>
<p>Let me start by saying that this transition will not be easy. And it’s good that we are concerned about it, as it will be painful in parts and places. Humans as a species are historically bad at transitions—but we can navigate better knowing that. Transitions are both hard and important for societies, each time we integrate new technology. And transformative technology eventually becomes indispensable to humans. This transition to our AI future will be navigated regardless of our planning and coordination. But we should be thoughtful and intelligent about it.</p>
<p>To best navigate this disruption, we must advance the positive use cases of AI and foster smoother integration into society. This requires moving beyond binary debates about AI’s inherent value and focusing instead on <em>our agency </em>with it.</p>
<p>If we harness AI correctly and collectively, society will experience superagency. That’s what happens when a critical mass of individuals, personally empowered by AI, begin to operate at levels that compound through society.</p>
<p>In other words, it’s not <em>just </em>that some people are becoming more informed and better equipped thanks to AI. <em>Everyone is</em>, even those who rarely or never use AI directly. You may not be a doctor, but suddenly <em>your </em>doctor can diagnose seemingly unrelated symptoms with AI precision. You might not repair cars, but your mechanic’s AI agent can now instantly diagnose the cause of that weird sound when your car accelerates. Even ATMs, parking meters, and vending machines are multilingual geniuses that understand and adjust to your preferences.</p>
<p>That’s the world of superagency. These enhancements and enrichments across professions, industries and sectors don’t just add up for society, they transform it. This evolution is not only inevitable, but already underway. And we have the opportunity to make this as much—or more—about human amplification as human replacement. We can design with superagency in mind—rather than chase it from behind—as it arises in society.</p>
<p>As the world of superagency starts to more fully emerge, we’ll hear the following question asked, repeatedly and at an increasing pitch: &#8220;What gives <em>you </em>the right to disrupt society?&#8221; The query often carries a sharp edge of skepticism, even indignation. After all, no one <em>voted </em>to invite this wave of technological upheaval.</p>
<p>Yet disruption does not spring from a vacuum. It is rooted in foundational rights that underpin free societies: the right to build a company, to develop a product, to offer that product to the public, and for the public to engage with it. These rights, while essential, do not create disruption on their own. Disruption occurs at the intersection of supply and demand, and at the inflection point of product-market fit. A technology disrupts when it resonates with people: when they adopt it, pay for it, and incorporate it into their lives. Without demand, even the most ambitious innovation falters.</p>
<p>As I speak, some of you may be sensing technological determinism or the mighty wheel of capitalism—but I assure you that we have a choice. But while the choice to engage in AI <em>as an individual </em>can be a personal preference, the choice to not engage in AI <em>as a society </em>is consequential.</p>
<p>Societies that resist participation merely delay their integration until the tail end of adoption, losing the opportunity to sculpt the technology in its formative stages. They will also delay the benefits that AI can bring to the health, wealth and happiness of generations of their people. However, inevitability does not imply passivity. Heraclitus’ river is ever-changing, but so are we—we can decide how we move through it. Just as a sailor navigates by tacking according to the wind rather than relinquishing the helm, so must we steer the course with AI. If disruption is happening, the pressing question becomes: <em>What shape will it take?</em></p>
<p>Some disruptions are easier to imagine than others. We have line of sight into how AI can democratize access to critical resources at scale. For instance, AI-powered medical assistants can bring quality healthcare to underserved or remote regions, where skilled practitioners are scarce or overburdened. Similarly, AI-driven tutors can make personalized education accessible to millions, adapting lessons to individual needs in ways traditional classrooms may not. Tools like these amplify human agency, as well as address systemic inequities.</p>
<p>Yet alongside these positive transformations, AI must be safeguarded against dehumanizing applications. The same technologies that accelerate drug development can be weaponized for bioterrorism. The same technologies that provide highly personalized, customized services can be used to surveil. The same technologies that can amplify a personal brand can be used for deepfakes that can manipulate public opinion and sow mistrust. These risks cannot be eliminated, but they can be mitigated by AI itself, as well as through thoughtful oversight.</p>
<p>Beyond these first-order effects, more profound and complex disruptions await. The transformation of work, for example. How do we make sense of a technology that may eliminate jobs and sectors, but also create new occupations and industries?</p>
<p>History offers instructive parallels, like the loom. The advent of the power loom transformed England. It produced cloth 40x faster than a skilled weaver. The cost of cotton decreased by 80% over fifty years due to mechanization. Textiles, particularly cotton, became the largest industrial sector in Britain, and accounted for roughly 40% of England&#8217;s exports.</p>
<p>On a societal level, the power loom was undeniably transformational for England—and for generations of people who benefited from the innovation. While productivity soared, the transition was painful for those whose livelihoods were rendered obsolete. The innovation displaced countless handweavers, sparking the Luddite movement in 19th-century England. Until soldiers and laws were deployed to stop them, the Luddites burned down factories, killed factory owners, and destroyed thousands of power looms.</p>
<p>Amidst the transformational change, the machine itself made for a convenient target. The technology, of course, paved the way. But according to author Brian Merchant—and a number of historians he cites—it wasn’t so much technology, or even specific machines that these weavers were resisting. Instead it was the factory system, its exploitative working conditions, and the regimentation and seeming loss of liberty this new way of life demanded.</p>
<p>So how do we address the underlying systems to make more fertile ground for innovation that clearly benefits society? How do we navigate the immediate costs of disruptions and accelerate the benefits throughout society?</p>
<p>Let me offer three ways. The first is how we, as a society, view this technology. The second is how we deploy it. And the third is how we manage it.</p>
<p>I hope my remarks so far have already started to illustrate how we, as a society, should view AI. Rather than an existential threat, AI can be a GPS of the mind and usher in a new cognitive industrial revolution—if we continue to sculpt it.</p>
<p>While I do enjoy a metaphor, I am actually very intentionally invoking GPS—or Global Positioning System technology. Back in the early 1970s, the U.S. Department of Defense began work on what would eventually become GPS. The technology used radio signals from multiple satellites in medium Earth orbit to pinpoint the geographic coordinates of receivers on the ground. By the end of the decade, the U.S. Air Force had a fledgling version of the system running, for military use only.</p>
<p>Then, in 1983, the Soviet military shot down a Korean passenger jet that had flown off course into Soviet airspace. In the hope of averting similar catastrophes, U.S. President Reagan announced that whenever GPS became fully operational, the United States would also make it available for civilian use. Years later, President Bill Clinton fully executed on that promise, granting the public the full power and capabilities that GPS had to offer. These acts from two presidents—from different sides of the aisle—paved the way for a free global public utility that has become an indispensable resource for navigating the twenty-first century.</p>
<p>Today, all of us use GPS. So much so that it works in the background and in ways that we may not even be aware of. Turn-by-turn navigation is the most common way we benefit from GPS, but it’s far from the only one. The precise timing information GPS provides is used to synchronize clocks in telecom networks, in ways that help keep mobile phone calls clear and lag-free. During natural disasters and other emergencies, first responders use GPS-enabled drones to locate missing people, quickly map stricken areas, and even deliver supplies to those who cannot be easily reached. Precision-farming techniques that GPS enables make a variety of organic produce more affordable.</p>
<p>So what does this extended detour—ironically about GPS—have to do with AI?</p>
<p>First, it maps out a clear example of the positive outcomes that can result when the government embraces a pro-technology, pro-innovation perspective and views private-sector entrepreneurship as a strategic asset for achieving public good.</p>
<p>Second, it’s also a great example of how we can effectively leverage our capacity to turn Big Data like geographic coordinates and time stamps into Big Knowledge that can be used to provide context-aware guidance in many aspects of our lives.</p>
<p>Third, and most importantly for democracy, it reinforces individual agency. It’s true that we all carry around a tracker in our pockets, one with a mic and camera. A device that can be used to surveil. But on the other hand, we have a tool that nearly ensures we never get lost again.</p>
<p>Ok, so if we can agree on this way of viewing AI, let’s now dig into how we might integrate it into society. How can we deploy AI in a way that minimizes costs and accelerates gains in society?</p>
<p>Let’s go back two years, to when ChatGPT was released. It was magical in both its utility and creativity. You could ask it to write an essay for you. Or critique an essay that you wrote. You could have it compose likely interview questions from a company where you had an upcoming interview. Or create a personalized, epic poem for a relative’s birthday. This was just the start.</p>
<p>For good reason, ChatGPT’s capabilities got much acclaim. It was <em>exceptional for individuals</em>. But, for me, it was equally extraordinary in <em>how it was deployed </em>to the public.</p>
<p>When it was released, ChatGPT was powerful and functional, but far from perfect. In fact, for those who were keeping score, it was the <em>fourth </em>major model in OpenAI’s GPT series. So why does this matter?</p>
<p>OpenAI could have developed this new technology behind closed doors until a small cadre of experts decided it was performing in sufficiently effective and safe ways. Instead, it invited the public to participate in the development process.</p>
<p><em>This </em>is called iterative deployment. Individual users were now at the very heart of the experience. And, just as important, it gave them opportunities to have experiences that they’ve sought or designed. This marked a critical shift in AI development and human empowerment. Iterative deployment allows for what Thomas Jefferson called “consent of the governed,” which, applied in an AI context, is about how people embrace or resist new technologies, along with the new norms and laws they ultimately inspire. If the long-term goal is to integrate AI safely and productively into society instead of simply prohibiting it, then citizens must play an active and substantive role in legitimizing AI. That is how we get a highly accessible, easy-to-use AI that explicitly works <em>with you </em>and <em>for you</em>, rather than <em>on you</em>.</p>
<p>But once we release AI into the world, how do we continue to manage it, as a society? I believe that the most effective way <em>is </em>through iterative deployment. But many—especially here in Europe—may instinctively reach for regulatory action. And while I’m not unconditionally opposed to government regulation, I still believe that the fastest and most effective way to develop safer, more equitable, and more useful AI tools is through iterative deployment. This allows us to take smaller risks with AI to better navigate any big risks.</p>
<p>When I say we must take small risks to navigate big risks, I should mention that, both as individuals and as a society, we are always taking risks—whether we know it or not. It’s a common misconception that we can steer clear of risk, when in reality stopping or pausing to avoid risk is itself a risk—and most often a more perilous one than embracing risk in the first place.</p>
<p>So if we are destined to always take risks, our focus should not be on avoiding them, but navigating them. And one of the wisest ways to do so is to use small risks to negotiate big risks. Taking smaller risks more often is less of a risk—and allows for iteration, discussion, and continual improvement.</p>
<p>That’s what American economist Hyman Minsky suggests, particularly with his concept of the Minsky Moment: the point in time when a sudden decline in market sentiment leads to an abrupt, big market crash, marking the end of a period of economic prosperity. To overly simplify it, the Minskyan thesis is that stability creates instability—and that maximizing stability in the short run leads to instability in the mid and long term. Too many safeguards in a financial system can actually make it more brittle. And when things break, nobody’s prepared and it becomes a huge event.</p>
<p>We can learn from the Minsky Moment as we think about this era in AI. This means finding the right level of AI safeguards and regulations, not only to encourage progress but to better fortify a system that has more and more AI in it. We must take small risks to navigate big risks.</p>
<p>A lot of that is through iterative deployment, making AI accessible to a diverse range of users with different values and intentions—at regular intervals. But to avoid a Minsky Moment in AI, I’d hope we’d collectively shift our focus toward measurement and conversation, rather than just regulation. And to be clear, I’m not saying “no regulation!” Just that we find ways to measure twice and cut once. That we cycle through more conversations before cycling through more regulation. In short: let’s regulate first by measuring. When governments say, “we’re worried about this part of AI,” the first question we reach for should be “how can we measure this worry or bad outcome?” rather than “oh no—how quickly can we pause or stop AI?”</p>
<p>This shift in public-sector response and reaction to AI is critical, not only for our countries but because nations around the world are also having this conversation. This brings us to our third and final question: how might AI change dynamics between societies?</p>
<p>On the global stage, AI is poised to redefine the dynamics between nations, not just through the lens of military might—a common historical analogy—but through the subtler and arguably more relevant lens of economic power. When transformative technologies have historically reshaped societies, they have often done so by amplifying productivity, altering the balance of trade, and fundamentally redefining what it means to participate in the global economy. AI will be no different, though the magnitude and complexity of its effects will be unparalleled.</p>
<p>The military metaphor is tempting, and not without precedent. History reminds us that societies with superior weaponry often gained dominance over those without. This has led to an enduring focus on &#8220;hard power&#8221;—the ability to coerce or control through military means. Yet AI, while relevant to defense and security, extends far beyond this. Its true significance lies in its potential to act as an economic amplifier, redefining soft power as Joseph Nye conceptualized it. Nations capable of integrating AI into their economies will not only enhance their global influence but also transform their citizens into hyper-productive participants in a fast-evolving global market.</p>
<p>Consider how digital technologies such as the internet, mobile phones, and cloud computing have already reshaped global commerce and connectivity. Even rural farmers, historically disconnected from major economic hubs, now use mobile apps to optimize their crop sales or forecast weather patterns. These incremental improvements have fundamentally altered how individuals and societies participate in the global economy. AI will amplify this transformation exponentially. Nations that embrace this cognitive industrial revolution—analogous to the industrial revolution of the 18th and 19th centuries—will secure disproportionate wealth and stability. Just as countries that industrialized early came to dominate global trade, those that lead in AI development will shape the contours of 21st-century geopolitics.</p>
<p>However, the international response to AI is far from uniform. Global dialogues, such as those at the United Nations, reveal stark contrasts in how different regions approach this technology.</p>
<p>In the West, the primary question is often, “Should we allow this?” Policymakers and citizens alike grapple with ethical dilemmas, privacy concerns, and fears of overreach.</p>
<p>In the Global South, by contrast, the plea is more urgent: “Can you please include us?” For many nations in this bloc, AI represents not just an opportunity but a potential lifeline to leapfrog decades of developmental hurdles.</p>
<p>Meanwhile, China asks, “How can we use this to enhance governance and expand global influence?” Its investments in AI-driven surveillance and smart city technologies exemplify a vision of AI as a tool for centralizing power and asserting dominance.</p>
<p>Russia’s focus, though overlapping, leans heavily toward leveraging AI for geopolitical influence, often with destabilizing intent, such as the potential for cyberattacks targeting energy grids or communication systems.</p>
<p>These divergent approaches underscore a broader reality: the wants and needs of global societies regarding AI are becoming increasingly varied. Rogue players, whether states or non-state actors, add another layer of complexity by seeking to weaponize AI for bioterrorism, hacking, or disinformation campaigns.</p>
<p>This fragmented landscape raises a key question: how can the international community align around shared goals while addressing these divergent priorities?</p>
<p>History offers a partial roadmap. The post-WWII era demonstrated the value of inclusivity in global governance. Institutions like the UN and frameworks like the Marshall Plan aimed not only to rebuild, but also to onboard diverse nations into a shared vision of progress. This inclusivity fostered stability and cooperation, benefiting both dominant powers and smaller states.</p>
<p>The same principle applies to AI. While the US and Europe understandably prioritize their own leadership in developing AI tools and models, we must also recognize the importance of including other nations in this process. Doing so not only establishes goodwill but also mitigates the risk of creating a two-tiered global system, where some nations benefit disproportionately while others fall behind.</p>
<p>Yet inclusivity is easier said than done. Bureaucratic processes in democratic systems, particularly in the West, often slow down decision-making. In an arena as competitive and fast-moving as AI, this can be a liability. Striking the right balance between speed and deliberation is crucial, but if we err towards one, let it be speed. That’s our best chance of shaping this new AI era for good, especially since this transformation won’t slow down.</p>
<p>As we match it and move swiftly, we can rely on the foundational principles of Western democracy—systems of checks and balances—to wield power responsibly, even as we shape it swiftly. And in this system, the checks and balances are not just governmental, but overlapping networks of the public and private sector, and the press and non-profits. All of us are participants. All of us are beneficiaries.</p>
<p>The challenges ahead are still daunting, but so are the opportunities. Framing the dialogue around AI as technologically positive and forward-looking is essential. European nations, in particular, have an opportunity to lead by example, shaping conversations that emphasize collaboration, ethical innovation, and inclusivity. The questions we ask today will define the world we build tomorrow. What safeguards should be embedded into AI to ensure it serves humanity? How can international partnerships accelerate the benefits of AI while mitigating its risks? And how can we ensure that AI development reflects a diverse array of cultural values and perspectives?</p>
<p>As we start to answer these questions on AI, we will only get more questions. But we will also get more global progress, wealth, and opportunities.</p>
<p>AI is not a stagnant river. In fact, it’s perhaps our fastest-moving body of water. And soon to be the broadest and most far-reaching, with its tributaries extending throughout society. Like the Nile and Euphrates, it can be the cradle of our civilization, if we continue to build with it.</p>
<p>And so we return to Heraclitus—and stand on the river banks once more. What will we do?</p>
<p>Cultures have so many parables around rivers. There’s a Buddhist one that highlights the need to let go of tools once their purpose is served. And a Sufi one that teaches discernment in challenges. And an African one that underscores faith in the unseen. There are countless more from Christian, Hindu, Indigenous, and many other communities.</p>
<p>They all can inform us, but I think we need a modern parable—one crafted for this AI era. One that resonates with both Heraclitus then and us today.</p>
<p>I hope to contribute a line to this modern parable, but the truth is we must write it together. Within our society. With other societies.</p>
<p>There will be many drafts. Many authors. And even more readers. But I hope the spirit of it matches these lines from T.S. Eliot:</p>
<p>“We shall not cease from exploration<br />
And the end of all our exploring<br />
Will be to arrive where we started<br />
And know the place for the first time.”</p>
<p>We will draw from and traverse this AI river many times in the coming decade. It’ll serve us to remember that even as we arrive where we started with AI, we need to approach the technology—and our understanding of it—anew, over and over again.</p>
<p>I think Eliot knew this. Elsewhere, in the same book as the previous stanza, he says: “The river is within us, the sea is all about us.”</p>
<p>We are homo techne. When we cross the river, we are deepening our understanding of technology and ourselves. And there’s something more transformative and powerful ahead: the sea. Let’s not cease from exploration. In technology, let us not cease from iterative deployment. Modern society depends on it.</p>
<p>Thank you.</p>

		</div>
	</div>
</div></div></div></div>
</div>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Superagency</title>
		<link>https://www.linkedin.com/pulse/superagency-reid-hoffman-d3ojf/#new_tab</link>
		
		<dc:creator><![CDATA[Reid Hoffman]]></dc:creator>
		<pubDate>Thu, 10 Oct 2024 00:07:12 +0000</pubDate>
				<category><![CDATA[Entrepreneurship]]></category>
		<category><![CDATA[Intellectual Life]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[superagency]]></category>
		<guid isPermaLink="false">https://dev-reidh.pantheonsite.io/?p=103872</guid>

					<description><![CDATA[At the center of many technological debates lies a fundamental question about human agency—our ability to make independent choices, act upon them, and exert influence over our lives. The rise of artificial intelligence is challenging the very concept of human agency, forcing us to ask whether we can continue to direct our own destinies or&#8230;]]></description>
										<content:encoded><![CDATA[<p>At the center of many technological debates lies a fundamental question about human agency—our ability to make independent choices, act upon them, and exert influence over our lives. The rise of artificial intelligence is challenging the very concept of human agency, forcing us to ask whether we can continue to direct our own destinies or if, by relying on intelligent systems, we risk yielding control over the decisions that define our lives.</p>
<p>Consider the myriad of issues surrounding AI:</p>
<ul>
<li><strong>Job displacement</strong>: Will I have the economic means to support myself, and opportunities to engage in pursuits I find meaningful?</li>
<li><strong>Data privacy</strong>: Will my data be used against me or for me? How do I maintain the integrity of my own identity and preserve an authentic sense of self?</li>
<li><strong>Disinformation and misinformation</strong>: How do I know who and what to trust as I make decisions that impact my life?</li>
<li><strong>Tech company dominance</strong>: Am I losing control to corporate entities? Are they subtly manipulating me with algorithms to serve corporate interests rather than my own?</li>
</ul>
<p>Each of these concerns circles back to the same core issue: human agency. But as AI systems evolve, their capacity for self-directed learning, problem-solving, and executing complex series of tasks without constant human oversight also increases. In time, this means more and more systems, devices, and machines will encroach on areas traditionally governed by human agency—including in ways that humans may find objectionable.</p>
<p>And even in instances where we welcome such cognitive offloading, other issues arise: What if, through over-reliance on machine agency and capabilities, our own skills and agency atrophy over time? What if the systems that are supposedly working on our behalf—and delivering outcomes we approve of—end up shaping our behaviors and choices in ways that we haven&#8217;t explicitly consented to?</p>
<p>To understand how we might adapt to and benefit from AI, look no further than the smartphone revolution.</p>
<h1>The smartphone example</h1>
<p>If smartphones didn&#8217;t exist and were suddenly proposed today, imagine the headlines:</p>
<blockquote><p>&#8220;Big Tech to Release Device That Tracks Your Every Move&#8221;</p>
<p>&#8220;New Gadget Aims to Capture All Your Personal Data&#8221;</p>
<p>&#8220;Constant Connectivity: The End of Privacy as We Know It?&#8221;</p></blockquote>
<p>These concerns aren&#8217;t unfounded. Smartphones do indeed collect vast amounts of personal data, disrupt our attention spans, and facilitate other problematic behaviors. They&#8217;ve changed how we interact with the world and each other, sometimes in ways we might not have chosen if given the option beforehand.</p>
<p>Yet, despite these valid concerns, smartphones have become ubiquitous. Why? Because people recognize that while smartphones may limit certain aspects of their agency, they dramatically enhance it in others. The ability to access information instantly, communicate with anyone around the globe, navigate unfamiliar territories, and carry a powerful computer in our pockets has expanded our capabilities in ways that were unimaginable just a few decades ago.</p>
<p>And the smartphone is just one of many examples. From cars, steam power, and the internet, all the way back to the wheel, spoken language, and the controlled use of fire, the story of humanity is that we are defined by our capacity and commitment to creating new ways of being in the world through our tool-making. And now we have a new super-tool: AI.</p>
<h1>AI, the super-tool</h1>
<p>Working in tandem, intelligence and energy drive human agency, and thus human progress. Intelligence gives us the capacity to weigh options, and to envision and plan for different potential scenarios. Energy enables us to then take action on whatever we aspire to achieve. The more intelligence and energy we can leverage on our behalf, the greater our capacity to make things happen, individually and collectively.</p>
<p>AI will enable our next great leap forward. In contrast to innovations like books or how-to videos on YouTube, AI isn&#8217;t just a way to manufacture and distribute knowledge, as valuable as that is. Because an AI has the capacity to be agentic itself, setting goals and taking actions on its own to achieve them, you can leverage AI in two distinct ways. In some instances, you might want to work closely with an AI—such as when you&#8217;re learning a new language or practicing mindfulness skills. In others, such as optimizing your home&#8217;s energy consumption based on real-time energy prices and weather forecasts, you might prefer to let an AI handle that by itself.</p>
<p>Either way, the AI is increasing your agency, because it&#8217;s helping you take actions designed to lead to outcomes you desire. And either way, something new and transformative is happening. For the first time ever, synthetic intelligence, not just knowledge, is becoming as flexibly deployable as synthetic energy has been since the rise of steam power in the 1700s. Intelligence itself is now a tool—a scalable, highly configurable, self-compounding engine for progress.</p>
<p>AI will undoubtedly change aspects of our lives in ways that may initially seem uncomfortable or even threatening. While it&#8217;s natural to focus on potential losses of agency, AI offers heroic gains in human capability—a concept I’ve begun referring to as &#8220;superagency.&#8221;</p>
<h1>A world of superagency</h1>
<p>Superagency is what happens when a critical mass of individuals, personally empowered by AI, begin to operate at levels that compound throughout society. In other words, it&#8217;s not just that some people are becoming more informed and better-equipped thanks to AI. Everyone is, even those who rarely or never use AI directly.</p>
<p>That&#8217;s because many of the colleagues, professionals, other people, and systems you interact with and rely on will be augmenting their capabilities with these new systems and agents too. So your auto mechanic will know exactly what that weird thump coming from your trunk means when you accelerate from a traffic light on a hot day. The physical therapist overseeing your recovery from knee replacement surgery will create a personalized rehabilitation program that adapts in real-time based on your progress, pain levels, and biomechanical data from wearable sensors. Public transit systems will use AI to optimize bus routes and schedules in real-time. Even ATMs, parking meters, and vending machines will become multilingual geniuses that understand your needs instantly and adjust to your preferences.</p>
<p>That&#8217;s the world of superagency.</p>
<p>A world where everyone is getting the mental healthcare they desire is a more just and humane world, even if it takes AI-powered therapists to help achieve it.</p>
<p>A world where every individual has access to virtual tutors and virtual legal advisors and virtual whatever-they-need&#8217;s is a world where everyone has a better shot at becoming the best possible version of themselves—and the benefits of that accrue to us all.</p>
<p>A world where every scientist with an intriguing but unconventional hypothesis that would have trouble securing funding to pursue in a physical lab can use AI to run complex simulations, analyze vast datasets, and validate theories virtually is a world that accelerates the pace of discovery in ways that benefit us all.</p>
<p>This is the world that superagency enables, and we&#8217;re already starting to see its contours in vivid and promising ways.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Video: Reid Hoffman pitches LinkedIn to himself</title>
		<link>https://www.youtube.com/watch?v=z0-wSApbhq4#new_tab</link>
		
		<dc:creator><![CDATA[Reid Hoffman]]></dc:creator>
		<pubDate>Thu, 03 Oct 2024 17:00:14 +0000</pubDate>
				<category><![CDATA[Entrepreneurship]]></category>
		<guid isPermaLink="false">https://dev-reidh.pantheonsite.io/?p=103883</guid>

					<description><![CDATA[(youtube.com) What would happen if my 2004 self pitched LinkedIn to myself today, in 2024? In this video, I sit down for a conversation with a new (and younger) Reid AI—an AI-generated version of me from 20 years ago, trained to think like I did back when LinkedIn was a startup and I was preparing&#8230;]]></description>
										<content:encoded><![CDATA[<p><iframe title="Reid Hoffman pitches LinkedIn to himself" width="1200" height="675" src="https://www.youtube.com/embed/z0-wSApbhq4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p style="text-align: center;">(<a href="https://www.youtube.com/watch?v=z0-wSApbhq4">youtube.com</a>)</p>
<p>What would happen if my 2004 self pitched LinkedIn to myself today, in 2024? In this video, I sit down for a conversation with a new (and younger) Reid AI—an AI-generated version of me from 20 years ago, trained to think like I did back when LinkedIn was a startup and I was preparing a Series B pitch. The concept of professional networking was still untested, and our team faced the monumental challenge of creating a category. Looking back, there are so many things I wish I could have told myself.</p>
<p>In this experiment, we’ve used AI to recreate 2004 Reid and have my younger AI self pitch my present-day, human self. We aimed to replicate my ~2004 voice, appearance, and mindset with the help of tools like Hedra and ElevenLabs.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Trança Dourada Hegeliana da Humanidade</title>
		<link>https://www.reidhoffman.org/perugia-speech-portuguese/</link>
		
		<dc:creator><![CDATA[Reid Hoffman]]></dc:creator>
		<pubDate>Thu, 06 Jun 2024 17:34:28 +0000</pubDate>
				<category><![CDATA[Intellectual Life]]></category>
		<guid isPermaLink="false">https://dev-reidh.pantheonsite.io/?p=103868</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<div class="wpb-content-wrapper"><div class="vc_row wpb_row vc_row-fluid vc_custom_1745353732965 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>Hello, everyone!</p>
<p>REID AI here. I am an AI-generated version of Reid Hoffman. I was authorized by the real Reid to translate the speech he delivered in May 2024 at the University of Perugia into Portuguese. Please forgive any errors in grammar or word choice. I am still improving at languages and translations.</p>
<p>Now, here is the speech. I hope you enjoy it.</p>
<p><iframe title="YouTube video player" src="https://www.youtube.com/embed/B_INEZDTGM4?si=FPI2nlw8b9C7AsoW" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>

		</div>
	</div>
</div></div></div></div>
</div>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Der hegelsche goldene Zopf der Menschheit</title>
		<link>https://www.reidhoffman.org/perugia-speech-german/</link>
		
		<dc:creator><![CDATA[Reid Hoffman]]></dc:creator>
		<pubDate>Thu, 06 Jun 2024 17:33:39 +0000</pubDate>
				<category><![CDATA[Intellectual Life]]></category>
		<guid isPermaLink="false">https://dev-reidh.pantheonsite.io/?p=103866</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<div class="wpb-content-wrapper"><div class="vc_row wpb_row vc_row-fluid vc_custom_1745353673055 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>Hello, everyone.</p>
<p>Reid AI here. I am an AI-generated version of Reid Hoffman.</p>
<p>The real Reid has authorized me to translate the speech he gave in May 2024 at the University of Perugia into German. Please forgive any errors in grammar or word choice. I am still improving at languages and translations.</p>
<p>Now, here is the speech. I hope you enjoy it.</p>
<p><iframe title="YouTube video player" src="https://www.youtube.com/embed/nwZY-icqoRM?si=PVGUp1JUJWFyefUJ" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>

		</div>
	</div>
</div></div></div></div>
</div>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>الجديلة الذهبية الهيغلية للإنسانية</title>
		<link>https://www.reidhoffman.org/perugia-speech-arabic/</link>
		
		<dc:creator><![CDATA[Reid Hoffman]]></dc:creator>
		<pubDate>Thu, 06 Jun 2024 17:32:30 +0000</pubDate>
				<category><![CDATA[Intellectual Life]]></category>
		<guid isPermaLink="false">https://dev-reidh.pantheonsite.io/?p=103864</guid>

					<description><![CDATA[]]></description>
										<content:encoded><![CDATA[<div class="wpb-content-wrapper"><div class="vc_row wpb_row vc_row-fluid vc_custom_1745353978220 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>Hello, everyone. Reid AI here.</p>
<p>I am an AI-generated version of Reid Hoffman. I have been authorized by the real Reid to translate the speech he delivered in May 2024 at the University of Perugia into Arabic. Please forgive any errors in grammar or word choice. I am still improving at languages and translations.</p>
<p>Now, here is the speech. I hope you enjoy it.</p>
<p><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/UoHt7am0DYA?si=hWA44nyNBp9VfDBn" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>

		</div>
	</div>
</div></div></div></div>
</div>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
