<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Reading Room Archives</title>
	<atom:link href="https://www.skeptic.com/reading_room/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.skeptic.com/reading_room/</link>
	<description>Examining Extraordinary Claims and Promoting Science Since 1992</description>
	<lastBuildDate>Wed, 30 Oct 2024 20:54:45 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7</generator>
	<item>
		<title>Do Animals Have the Same Rights as Humans?</title>
		<link>https://www.skeptic.com/reading_room/do-animals-have-the-same-rights-as-humans/</link>
		
		<dc:creator><![CDATA[Seong-han Kim]]></dc:creator>
		<pubDate>Fri, 15 Nov 2024 16:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[animal ethics]]></category>
		<category><![CDATA[animal experimentation]]></category>
		<category><![CDATA[animal liberation]]></category>
		<category><![CDATA[animal rights]]></category>
		<category><![CDATA[moral status of animals]]></category>
		<category><![CDATA[speciesism]]></category>
		<category><![CDATA[utilitarianism]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45386</guid>

					<description><![CDATA[Can humans justifiably do whatever they want with animals? This thought-provoking article challenges common justifications for human dominance over animals—from religious arguments to claims of human superiority. Through philosophical perspectives like Singer's utilitarianism and Regan's rights theory, it explores the moral status of animals and questions our assumptions about species-based discrimination.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">R</span>ecently, a law banning the consumption of dog meat was passed by the National Assembly of Korea. One of the main goals of domestic animal rights activists and dog lovers has been accomplished. However, quite a few people were dissatisfied with the passing of this particular law, asking why it refers specifically to dogs, but does not protect cows and pigs. Those who welcomed the species-specific ban responded to such reactions by citing the thousands of years of companionship shared between dogs and humans, and our uniquely close bond.
</p>
<p>
	Animal ethics, a division of applied ethics that deals with such disputes over animal issues, is one of today’s most hotly debated fields. It includes discussions on the moral status of animals, the use of animals for food or experimentation, and the ethics of zoos and aquariums, xenotransplantation, and the consumption of dog meat, among various other topics. Such topics are not only interesting in their own right but also invite us to reflect critically on our often-assumed position atop the animal kingdom.
</p>
<h4>
	Can humans do with animals as they please?<br />
</h4>
<p>
	Until relatively recently, people considered animals merely as tools for human use, thinking that the use of animals in any way was entirely up to humans. What is the basis for such thinking? And can such assumptions be justified?
</p>
<p>
	One of the reasons some people think humans can do whatever they want with animals is that our species now sits atop the food pyramid, and that this position justifies our dominance over animals. They argue that humans have the power to subdue animals and that using this power to treat animals as they please is thereby justified.
</p>
<p>
	With a bit of thought, however, it is not difficult to recognize that this logic is flawed. If the logic of power were justified, then imperialist invasions could be justified, as would be the actions of bullies who harass weaker peers in schools. Invoking power is a logic that thoroughly represents the perspective of the strong, but if we consider ourselves in the position of the weak in a Rawlsian “veil of ignorance” thought experiment, we can see that this reasoning is flawed. If we encountered an extraterrestrial intelligence vastly superior to humans, would we accept the logic of power and agree they could exploit us as food? Obviously not.<span id="more-45386"></span>
</p>
<p>
	One reason that makes the logic of power seem justified is the failure to distinguish facts from values. The two are distinctly different; for example, it is a fact that the strong dominate the weak. The world is indeed a place where the powerful rule over others, such as the wealthy oppressing the poor or the strong bullying the weak. However, just because such practices are facts does not mean they are values we should accept. In philosophy, this mistake of deriving value directly from fact is called <em>the naturalistic fallacy</em>. That humans dominate animals is a fact, but that does not justify it as right, as something we ought to do.
</p>
<h4>
	Did God grant dominion to humans?<br />
</h4>
<p>
	Christianity, which has established itself as the world’s largest and most widespread religion with approximately 2.4 billion followers, has long held that humans are the crown of creation, made in the image and likeness of God, and endowed with a rationality unique among all of God’s creatures. On this view, humans occupy a special position: they deserve greater moral concern than other animals, and when the interests of humans and other beings conflict, the latter should yield to the former. From the Christian perspective, all beings, including animals, ultimately exist for God, while in this world they exist for humans, as commanded in Genesis 9:3:
</p>
<blockquote>
<p>
		Every moving thing that lives shall be food for you. And as I gave you the green plants, I give you everything.
	</p>
</blockquote>
<p>
	There is debate over whether interpreting such messages as granting humans the right to treat other animals as they wish is appropriate. Indeed, there is room to interpret this dominion as meaning that humans must fulfill their duty to God by being wise stewards of His creation and therefore making efforts to protect animals. Nevertheless, if indeed God has bestowed such authority upon humans, then they would be able to treat animals as they see fit.
</p>
<p>
	The real problem with this argument is that it is rooted specifically in the God of Christianity (as well as Judaism and Islam), and adherents of other religions, along with agnostics and atheists, would therefore not accept it. Hinduism, for example, teaches that all living beings are imbued with divinity and accordingly advocates respect for animals. Justifying human dominion over animals based on the doctrines or teachings of a specific religion lacks persuasiveness: anyone who is not a follower of that religion can simply reject the position.
</p>
<h4>
	Are humans fundamentally different from other animals?<br />
</h4>
<p>
	Whatever the reason, we tend to believe that humans are fundamentally different from other animals, and that this difference justifies our dominion over them. Human rationality is often used to highlight the distinction between people and animals, with particular focus on intelligence or capacity for moral reasoning as the specific differences that demonstrate human superiority. Even conceding these differences, do they justify the various forms of abuse perpetrated against animals for consumption or experimentation?
</p>
<p>
	Consider how we treat humans who lack these abilities. While rationality is a capability that the average adult possesses, some members of society lack it entirely, such as those with severe intellectual disabilities, individuals in a vegetative state, infants, or those with dementia. Yet we do not believe anyone may treat these individuals the way we permit most animals to be treated. If that is so, we cannot justify discrimination based merely on the presence or absence of rationality, and therefore we cannot justify unfair treatment of animals by claiming they lack it. Ethicists call this line of reasoning <em>the argument from marginal cases</em>: if we seek any kind of consistency, reflecting on how we treat people who lack these capabilities shows that we cannot justify discriminating against animals for lacking them.
</p>
<p>
	Some argue that merely being human can be a reason to respect humans, and that it can become a basis for discriminating against animals. However, this logic is no different from the rationale behind sexism or racism, which treats only individuals of the same sex or race equally and discriminates against those who do not belong to that category. Just as discriminating based on gender is called sexism and discriminating based on race is called racism, discriminating based on species is termed <em>speciesism</em>.
</p>
<p>
	The position that humans can treat animals arbitrarily is generally unconvincing. But if humans should not treat animals arbitrarily, what is the rationale? Ethicists believe the various moral judgments made in different situations should be justified through some ultimate standard. For example, they argue that judgments about whether it is permissible to eat animals should be justified through moral theory. Peter Singer and Tom Regan are prominent philosophers who argue that animals should be granted moral status in this way: Singer through utilitarianism, Regan through rights theory.
</p>
<h4>
	Animals That Feel Pleasure and Pain<br />
</h4>
<p>
	Peter Singer was the first philosopher to systematically argue for the moral status of animals. His 1975 book <em>Animal Liberation</em> is often called the bible of the animal rights movement and marks the beginning of the modern debate about the moral status of animals. Before its publication, people had raised concerns about factory farming and animal experiments, and veganism already existed, but Singer’s work made a significant impact because it was both an exposé and a rational argument for liberating animals. He persuasively argued, on the basis of ethical theory, that animals must be granted moral status, while documenting the horrific abuse, at the time largely unknown to the public, inflicted upon animals used for experimentation and meat consumption.
</p>
<p>
	Singer uses utilitarianism to show that animals have moral status. Roughly speaking, utilitarianism considers pleasure good and pain evil. According to Singer, any being that can feel pleasure and pain is a subject of moral consideration. He calls such creatures “sentient beings,” a category that includes mammals, birds, reptiles, amphibians, and fish, all of which possess a central nervous system. Animals raised in factory farms for meat, such as cows, pigs, and chickens, and animals used for experiments, such as mice and monkeys, live in extreme pain throughout their lives. Releasing them from this pain is, on utilitarian grounds, a moral obligation.
</p>
<p>
	Singer formalizes utilitarianism as the principle of equal consideration of interests, which demands that the interests of every being be considered equally. This includes all sentient beings, not just humans. Singer holds that interests should carry no special weight according to whether they belong to humans or to non-human animals. If an animal feels 100 units of pain and a human feels 10 units of pain, other conditions being equal, the animal’s pain should be prioritized. If we instead give priority to the human’s pain simply because it is human, we are not considering interests equally; to do so betrays a speciesist attitude that grants priority to humans over animals simply because they are human.
</p>
<h4>
	Animals as Subjects of a Life<br />
</h4>
<p>
	While Peter Singer used utilitarianism to justify the moral treatment of animals, Tom Regan attempted to grant animals moral status using rights theory. Regan’s rights theory, along with Singer’s position, is considered a philosophical foundation of the animal rights movement, particularly in the English-speaking world.
</p>
<p>
	Regan’s seminal work is <em>The Case for Animal Rights</em>, published in 1983. The book argues, with meticulous logic, that animals meeting certain criteria have an absolute moral status. According to Regan, any being with “inherent value” has rights. Here, rights refer to ethical values, distinct from legal rights that vary by society or state; they protect the inherent value of all beings equally. This value is logically distinct from other types of value, such as utility or aesthetic value, and cannot be reduced to them. It corresponds to what Kant posits when he treats rational beings as ends in themselves. A being either possesses this value or it does not; there are no degrees of possession, and Regan asserts that all beings that possess it possess it equally.
</p>
<p>
	So, who possesses this value? According to Regan, “subjects of a life” have inherent value. These beings possess certain characteristics, such as awareness, desire, intention, purpose, belief, perception, memory, emotion, and self-awareness. Most humans meet these criteria, as do most mammals older than one year. Thus, non-human animals such as mammals older than one year have inherent value. Beings with inherent value have the right to be treated as ends in themselves, not as means to an end. This means it is not permissible to violate the inherent value of these beings for another being’s benefit. Consequently, Regan argues that using animals as food, for experiments, hunting, or display in zoos violates their inherent value and is therefore wrong.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.3/"><img fetchpriority="high" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n3_lg.jpg" alt="Skeptic 29.3 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.3<br />
		<a href="https://www.skeptic.com/productlink/magv29n3">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/267337#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	In this regard, Regan contrasts with Singer, who in some instances treats the issue more leniently, leaving room for the possibility of taking the lives of animals under certain conditions. Singer’s allowance stems from utilitarianism: if killing animals produces significantly more benefit than pain, or if animals can be killed without any pain at all, then killing may be permissible. This is an inevitable conclusion of the utilitarian standpoint, but Regan rejects such a moderate stance and grants moral status to animals unconditionally.
</p>
<p>
	As Peter Singer reports in his 2023 book <em>Animal Liberation Now</em>, research conducted in Germany, Austria, and Switzerland found that 67 percent of ethicists and 63 percent of non-ethics philosophy professors considered it morally wrong to eat meat from mammals. In other words, while a majority of these professors believe eating meat is wrong, a substantial minority of ethicists and philosophy professors still do not.
</p>
<p>
	When contemplating this issue, we must not base our judgments on intuition, personal preference, or habit, but rather on reason—at least if we aim to avoid logical errors and strive to maintain logical consistency. This is important not only when discussing animal rights, but also if we hope to lead a morally sound life. <span><img decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Seong-han Kim</strong> is a professor in the Department of Ethics Education at Jeonju National University of Education in South Korea, with a keen interest in shared life and evolution. He holds a PhD in Philosophy from Korea University in Seoul. He is the author of several books on ethics, morality, and animal rights, and a contributor to the Korean edition of <em>Skeptic</em> magazine.
	</p>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Insect on the Menu: Separating Fact From Fiction</title>
		<link>https://www.skeptic.com/reading_room/insect-on-the-menu-separating-fact-from-fiction/</link>
		
		<dc:creator><![CDATA[Arnold van Huis]]></dc:creator>
		<pubDate>Fri, 08 Nov 2024 16:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[edible insects]]></category>
		<category><![CDATA[environmental impact of meat]]></category>
		<category><![CDATA[food security]]></category>
		<category><![CDATA[insect-based products]]></category>
		<category><![CDATA[insects as food]]></category>
		<category><![CDATA[nutritional benefits of insects]]></category>
		<category><![CDATA[protein alternatives]]></category>
		<category><![CDATA[sustainable food sources]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45380</guid>

					<description><![CDATA[Arnold van Huis explores the potential of edible insects as sustainable food, addressing common misconceptions and cultural barriers. While the West has shown resistance, citing health and safety concerns, the nutritional and environmental benefits of insects position them as a viable future food source. Could a shift in perception make insects a staple in Western diets?]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">I</span>n 2013, the Food and Agriculture Organization of the United Nations published a report titled <em>Edible Insects: Future Prospects of Insects as Food and Feed</em>.<sup><a href="#note01">1</a></sup> Despite being downloaded over seven million times and receiving widespread publicity, the report did not generate much controversy. This was likely because the consumption of insects was seen as an unfamiliar practice associated with developing countries in the tropics, and the idea of incorporating bugs into Western diets was just not taken seriously. However, that perception has been changing recently. Several edible insect products have been declared safe by international and national food safety organizations in Western countries. In response, critical coverage has emerged on social media, claiming that edible insects pose a threat to public health and even to national identities.
</p>
<p>
	Here are a few examples of how some politicians and journalists reacted to the legalization of insects as food:
</p>
<ul>
<li>
		A senator from the Republican party in France warned the Senate on January 25, 2024 that French people would be led to “eat insects without their knowledge.”<sup><a href="#note02">2</a></sup>
	</li>
<li>
		The Hungarian agriculture minister warned that “traditional eating habits may be in danger.”<sup><a href="#note02">2</a></sup>
	</li>
<li>
		A former interior minister and leader of the ABV party in Bulgaria described allowing insects in food as a “crime against Europe” and claimed it showed a “readiness to kill our European children.”<sup><a href="#note02">2</a></sup>
	</li>
<li>
		In 2023, the Italian agricultural minister said of the use of edible insects to make traditional dishes such as pasta and pizza, “it’s fundamental that these flours are not confused with food made in Italy.”<sup><a href="#note03">3</a></sup>
	</li>
<li>
		In the Netherlands in 2023, a member of parliament posted on social media that “we will not eat the bugs,” while seen pouring mealworms from a bag onto the ground.<sup><a href="#note04">4</a></sup>
	</li>
<li>
		In the UK, a news article read, “Regulations are imposed (not debated) by the European Commission. In this case, the powdered bug was imposed into the food chain of around half a billion people without debate.”<sup><a href="#note05">5</a></sup>
	</li>
<li>
		In the U.S., a newspaper article warned, “Biden admin shells out taxpayer dollars on getting Americans to eat trash-fed bugs.”<sup><a href="#note06">6</a></sup>
	</li>
</ul>
<p>
	This aversion has been connected to conspiratorial claims about a shadowy global elite that aims to control the world’s population. The World Economic Forum (Davos) has been singled out, with claims that elites such as heads of state and business leaders want people to eat bugs instead of meat to combat food insecurity.<sup><a href="#note07">7</a></sup> <span id="more-45380"></span>
</p>
<p>
	The main narratives surrounding these claims include:
</p>
<ul>
<li>
		The European Union is forcing its citizens to eat insects against their will.
	</li>
<li>
		Insects are poisonous.
	</li>
<li>
		Insect-derived ingredients are added to food because of the dramatic consequences of sanctions imposed on Russia in light of the war in Ukraine.<sup><a href="#note08">8</a></sup>
	</li>
</ul>
<p>
	Is there any truth to such allegations? These claims are largely false. Since 2021, several insect products have been approved in the European Union as safe for human consumption,<sup><a href="#note09">9</a></sup> but there is no mandate forcing citizens to eat them. The safety of these products is ensured through rigorous testing by food safety organizations. Additionally, the move towards including insects in the food supply is motivated by concerns about sustainability and food security, rather than by sanctions against Russia. So, why is there such significant resistance to eating insects?
</p>
<h4>
	Why are we reluctant to accept insects as food?<br />
</h4>
<p>
	The numerous statements by politicians and journalists quoted above, and similar sentiments shared by the public, are primarily rooted<sup><a href="#note10">10</a></sup> in two underlying reasons:
</p>
<ol>
<li>
		<strong>Food Neophobia</strong>: The reluctance to eat (or the avoidance of) new foods.
	</li>
<li>
		<strong>Disgust</strong>: The instinctive avoidance of foods perceived as potential causes of death or disease, compounded by the sense that “creepy-crawly” things are, well, disgusting.
	</li>
</ol>
<p>
	Although these are closely related, disgust is the more significant barrier to acceptance of insect consumption. Food neophobia can be mitigated with information about the food item’s nutritional value, safety, and sustainability. Disgust is harder to overcome as it is based on emotion rather than rationality.<sup><a href="#note11">11</a></sup> A related reason why insects are not commonly eaten in the Western world is their association with dirt, death, disease, and contamination.
</p>
<h4>
	Food Safety and Nutritional Value<br />
</h4>
<p>
	The assumption by Westerners that insects cannot be eaten and are merely survival food in tropical countries is more bias than fact. In tropical regions, over 1,500 insect species are consumed because the local population considers them nutritious, tasty, and easy to procure, not merely because of low living standards. In North America and Europe, insects have not been a customary food item primarily because they are not available in quantities sufficient to be considered food. In tropical regions, by contrast, larger insects are available year-round and can be easily harvested due to their tendency to occur in clumped distributions. Insects do occasionally appear in large numbers in temperate zones—for example, in the U.S., the 2024 simultaneous emergence of 13- and 17-year cicadas, which happens only once every 221 years, sparked interest in using them as food.<sup><a href="#note12">12</a></sup> Historically, until the mid-1900s, insects such as cockchafers were both a common pest and a delicacy in France and Germany.<sup><a href="#note13">13</a></sup>
</p>
<p>
	To make edible insects available in Western markets, they need to be reared under controlled conditions on farms. For human consumption, species such as mealworms, crickets, and locusts are used, while fly species such as the black soldier fly are used for animal feed because they can be reared on various organic side streams. There is an additional benefit: with 30 percent of food and agricultural produce going to waste,<sup><a href="#note14">14</a></sup> using these fly species could contribute to a self-sustaining economy. The market for insects as animal feed is expected to grow from about $7 billion in 2023 to $116 billion by 2033.<sup><a href="#note15">15</a></sup> This growth is driven by increasing prices for conventional feed ingredients such as fishmeal and soybean meal, whose sustainability is in question.<sup><a href="#note16">16</a></sup> Most insect-based ingredients are used as feed for animals (pets, fish, chickens, and pigs), while insects for human consumption remain a niche market, expected to grow from $650 million in 2023 to over $1 billion by 2033.<sup><a href="#note17">17</a></sup>
</p>
<h4>
	Conspiracy Theories<br />
</h4>
<p>
	Conspiracy theories often highlight the perceived food safety risks of consuming insects. The European Union requires that any insect intended for food must first be screened by the European Food Safety Authority (EFSA). This rigorous process takes considerable time and thoroughly evaluates all aspects of food safety. While several insect products have been approved, they carry a warning regarding allergies: individuals allergic to seafood or house dust mites should be cautious due to potential cross-reactivity, as these groups are taxonomically close to insects. This risk must be shown on the product label, along with a clear statement that the product contains insects.
</p>
<p>
	Conspiracy theories also mention the alleged danger of consuming chitin, the material composing the exoskeleton of insects. Chitin can be present in food products containing whole insects, as the processing methods—decontamination, drying, and grinding—do not remove it. However, when insect protein (meal) is used, it is often separated from fats and chitin. Even when chitin is present, it is unlikely to be harmful and may even offer health benefits.<sup><a href="#note18">18</a>, <a href="#note19">19</a></sup> Insect products are also rich in antioxidants and essential minerals such as zinc and iron; deficiencies in these minerals are common in large segments of the human population and can cause conditions such as anemia.
</p>
<p>
	It is challenging to provide general figures for the nutritional value of insects, because it depends on various factors such as the insects’ diet and other biotic and abiotic elements. However, insect products are generally as nutritious as, or even more nutritious than, meat products.<sup><a href="#note20">20</a></sup>
</p>
<h4>
	Are insects the future of food?<br />
</h4>
<p>
	There are many misconceptions about the use of edible insects, often reinforced by negative associations. Insects provide crucial ecosystem services, such as pollination, valued at over 150 billion U.S. dollars—10 percent of the value of global agricultural production for human consumption.<sup><a href="#note21">21</a></sup> Additionally, insects play important roles as decomposers and in the biocontrol of agricultural pests. Many bird species, including chickens, as well as various fish species, naturally consume insects. Non-human primates also eat insects, as do hundreds of millions of people worldwide in tropical regions.<sup><a href="#note22">22</a></sup>
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.3/"><img decoding="async" src="https://www.skeptic.com/magazine/images/magv29n3_lg.jpg" alt="Skeptic 29.3 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.3<br />
		<a href="https://www.skeptic.com/productlink/magv29n3">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/267337#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	The publication of the FAO’s 2013 report on edible insects triggered enormous publicity by highlighting the environmental benefits, nutritional value, and safety of using insects as food and feed. It coincided with increasing awareness that changes in our food habits are necessary. Eighty percent of all agricultural land worldwide is used to produce dairy and meat products.<sup><a href="#note23">23</a></sup> That land area is not expected to be enough to satisfy the increasing demand for these products as the world population grows and living standards improve. Additionally, the environmental impact of meat and dairy production is substantial, accounting for about 15 percent of all greenhouse gas emissions and almost two-thirds of all ammonia emissions.<sup><a href="#note24">24</a></sup> Water and land use requirements are also very high. Farming edible insects fares better environmentally on all fronts, while their nutritional values are similar to those of conventional meats.<sup><a href="#note25">25</a></sup>
</p>
<p>
	The idea of eating insects is gradually gaining traction in Western countries. Strategies such as targeting more adventurous eaters, incorporating insect ingredients into familiar products, and providing information about food safety, nutrition, and sustainability may help shift public perception and gradually convince more people that insects deserve a place on the menu.<sup><a href="#note26">26</a></sup>
</p>
<p>
	Resistance to this new food is not unexpected, given that insects have never been considered a regular food ingredient in the West. But if insect products are proven to be safe, nutritious, tasty, and more sustainable than meat, perhaps we should reconsider our food habits? <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Arnold van Huis</strong> is a Professor Emeritus of Tropical Entomology at Wageningen University &#038; Research in the Netherlands. From 1974 to 1979, he worked for the Food and Agriculture Organization of the United Nations (FAO) in Nicaragua. He is the author of over 300 scientific papers and co-author of <em>The Insect Cookbook: Food for a Sustainable Planet</em> (Columbia University Press). In 2014, together with FAO, he organized the first conference, <em>Insects to Feed the World</em>, attended by 450 participants from 45 countries. He is the chief editor of the <em>Journal of Insects as Food and Feed</em>.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/3zApUov">https://bit.ly/3zApUov</a>
		</li>
<li id="note02">
			<a href="https://bit.ly/3LjQN2J">https://bit.ly/3LjQN2J</a>
		</li>
<li id="note03">
			<a href="https://bit.ly/3WchjkG">https://bit.ly/3WchjkG</a>
		</li>
<li id="note04">
			<a href="https://bit.ly/3zyYu2x">https://bit.ly/3zyYu2x</a>
		</li>
<li id="note05">
			<a href="https://bit.ly/3RZyLXi">https://bit.ly/3RZyLXi</a>
		</li>
<li id="note06">
			<a href="https://bit.ly/4eY2Hg4">https://bit.ly/4eY2Hg4</a>
		</li>
<li id="note07">
			<a href="https://bit.ly/3LeJ1XV">https://bit.ly/3LeJ1XV</a>
		</li>
<li id="note08">
			<a href="https://bit.ly/4bJsvdm">https://bit.ly/4bJsvdm</a>
		</li>
<li id="note09">
			<a href="https://bit.ly/4bDau0f">https://bit.ly/4bDau0f</a>
		</li>
<li id="note10">
			<a href="https://bit.ly/4eTWfXC">https://bit.ly/4eTWfXC</a>
		</li>
<li id="note11">
			<a href="https://bit.ly/3RVKvtR">https://bit.ly/3RVKvtR</a>
		</li>
<li id="note12">
			<a href="https://bit.ly/3xRpI3S">https://bit.ly/3xRpI3S</a>
		</li>
<li id="note13">
			<a href="https://is.gd/g8DxY7">https://is.gd/g8DxY7</a>
		</li>
<li id="note14">
			<a href="https://is.gd/242sgQ">https://is.gd/242sgQ</a>
		</li>
<li id="note15">
			<a href="https://is.gd/umZWVz">https://is.gd/umZWVz</a>
		</li>
<li id="note16">
			<a href="https://is.gd/P6uLeZ">https://is.gd/P6uLeZ</a>
		</li>
<li id="note17">
			<a href="https://is.gd/Xbe2oq">https://is.gd/Xbe2oq</a>
		</li>
<li id="note18">
			<a href="https://is.gd/I26KSg">https://is.gd/I26KSg</a>
		</li>
<li id="note19">
			<a href="https://is.gd/B5fnKU">https://is.gd/B5fnKU</a>
		</li>
<li id="note20">
			<a href="https://is.gd/cvOHhe">https://is.gd/cvOHhe</a>
		</li>
<li id="note21">
			<a href="https://is.gd/BP3xpt">https://is.gd/BP3xpt</a>
		</li>
<li id="note22">
			<a href="https://is.gd/cbnDRq">https://is.gd/cbnDRq</a>
		</li>
<li id="note23">
			<a href="https://is.gd/xUitAW">https://is.gd/xUitAW</a>
		</li>
<li id="note24">
			<a href="https://is.gd/soBRe7">https://is.gd/soBRe7</a>
		</li>
<li id="note25">
			<a href="https://is.gd/7qQbDj">https://is.gd/7qQbDj</a>
		</li>
<li id="note26">
			<a href="https://is.gd/PdqffL">https://is.gd/PdqffL</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>When Dolphins Network</title>
		<link>https://www.skeptic.com/reading_room/dolphins-humans-working-together/</link>
		
		<dc:creator><![CDATA[Lee Dugatkin]]></dc:creator>
		<pubDate>Fri, 01 Nov 2024 07:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[animal intelligence research]]></category>
		<category><![CDATA[bottlenose dolphins behavior]]></category>
		<category><![CDATA[complex animal societies]]></category>
		<category><![CDATA[dolphin communication signals]]></category>
		<category><![CDATA[dolphin social networks]]></category>
		<category><![CDATA[evolutionary biology studies]]></category>
		<category><![CDATA[human-dolphin partnerships]]></category>
		<category><![CDATA[marine animal cooperation]]></category>
		<category><![CDATA[mutualism in marine animals]]></category>
		<category><![CDATA[social network analysis in animals]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45372</guid>

					<description><![CDATA[Explore the fascinating world of dolphin social networks, where these intelligent creatures display complex social interactions and even partner with humans to hunt. This article delves into groundbreaking research on dolphin behavior, revealing how dolphins form “small-world” social networks that enhance survival and cooperation across species.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">D</span>avid Lusseau always wanted to be a biologist. “Well, either biologist or clown,” he adds, “but I realized there was not much money in clowning.” When Marie the dolphin entered Lusseau’s life, she sealed the deal for him becoming a biologist. A bottlenose dolphin (<em>Tursiops truncatus</em>) who swam in the waters near the village of Cerbère on the border between France and Spain in the late 1980s, Marie set seventeen-year-old Lusseau on a path that would one day lead him to study social networks in her species. “When you look in the eyes of a dolphin, you realize there is a lot going on,” Lusseau says, reminiscing on his time with his cetacean friend. “It is something that is very hard to express or grasp or explain in a factual manner, but spending time with [Marie] got me interested in … trying to understand how dolphins work, [in what] I perceived as another intelligent species on the planet.”
</p>
<p>
	As an undergraduate, Lusseau spent time as a research assistant working with a group studying bottlenose dolphins in Florida. When out in the water, he encountered dolphins swimming on their own or in pairs. On occasion he bumped into a trio, but dolphins always seemed to be doing their own thing, just in the company of one or two others. That view of dolphin sociality, or the lack of it, changed dramatically when Lusseau began his PhD research in the late 1990s at the University of Otago in New Zealand. His dissertation focused on conservation biology in bottlenose dolphins in a fjord called Doubtful Sound, but the social behavior of the dolphins there hit him like a ton of bricks. As soon as he got there, he encountered not lone dolphins, duos, or trios, but groups of thirty or more dolphins schooling and moving about in a coordinated manner. These were very different animals from the solo dolphins and very small dolphin groups he had studied in Florida.
</p>
<p>
	Each day Lusseau rose at 4 a.m., grabbed some breakfast, swatted away an endless barrage of midges, and arrived at Doubtful Sound before the sun rose. He’d board a 14-foot boat, locate a group of dolphins, and do focal animal sampling, cycling through dolphins, each recognizable by natural markings on their dorsal fins, often from shark attacks. Doubtful Sound can be stunningly beautiful, but it is at a latitude called the “roaring forties” because of the strong winds from the west and six- to eight-foot waves at times, which make for rough going when watching dolphins from a boat. <span id="more-45372"></span>
</p>
<p>
	As he spent time with the dolphins, Lusseau began thinking about how to understand their complex social dynamics, but he couldn’t quite figure out the best way to proceed. On one of his stints back at the University of Otago, Lusseau recalls reading a <em>Proceedings of the National Academy of Sciences</em> paper on social networks written by physicist Mark Newman and others. Soon after that, he emailed Newman, telling him, “I think you are doing really cool stuff and I can understand it, because you write so well. Would you like to have a look at what we’re doing?” Newman was interested. It wasn’t long before he and Lusseau were coauthoring papers on dolphin social networks. But before they penned any coauthored papers, Lusseau published a 2003 paper in the <em>Proceedings of the Royal Society of London</em> that is widely regarded as the first study explicitly on social networks in nonhumans.
</p>
<p>
	Unlike animal social network papers in today’s journals, where readers are acquainted with how networks operate, to put readers in the right frame of mind in 2003, Lusseau opened his <em>Royal Society</em> paper using a strategy that Darwin had employed in <em>On the Origin of Species</em>. The idea was to introduce a phenomenon that readers already knew about (in Darwin’s case artificial selection, as in selection of different breeds of pigeons) and then make the case that what followed (natural selection), though it appeared radical, was really just another variety of what he had just discussed. In Lusseau’s paper, the opening sentences read: “Complex networks that contain many members such as human societies … the World Wide Web (WWW) … or electric power grids … permit all components (or vertices) in the network to be linked by a short chain of intermediate vertices.” And before readers knew it, they were learning about such social networks in dolphins.
</p>
<p>
	Lusseau constructed dolphin networks based on thousands of observations, and one metric he looked at was network diameter, which measures the average shortest path between nodes. To introduce network diameter to readers, Lusseau first discussed psychologist Stanley Milgram’s “small world” research from the late 1960s. “The global human population seems to have a diameter of six,” wrote Milgram, “meaning that any two humans can be linked using five intermediate acquaintances.” The party version of Milgram’s small world is the parlor game “six degrees of Kevin Bacon.” The rules are simple: players choose a movie actor and then connect that actor to another they played alongside in a film, repeating the process over and over, trying to link their original actor to movie star Kevin Bacon—who once quipped that he had “worked with everybody in Hollywood or someone who’s worked with them”—in no more than six connections. It turns out the dolphin small world in Doubtful Sound is smaller than the human one (including Kevin Bacon’s), both in the size of the network and network diameter, the latter of which is approximately three, meaning any two dolphins in Doubtful Sound can be linked using two intermediate acquaintances.
</p>
<p>
	Lusseau wondered what would happen if the dolphin network were culled by, for example, shark predation. Using the network data he had collected, he built a computer algorithm that simulated predation by randomly removing 20 percent of the dolphins. The small world of the dolphins, it turned out, was unaffected by such a reduction. But when, instead of selecting individuals at random, Lusseau simulated the removal of the 20 percent of dolphins with the greatest number of ties to others, network diameter increased, slowing information transfer within the network.
</p>
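<p>
	The logic of that culling experiment can be sketched in a few lines of Python. The toy network below (a ring with random shortcuts) and all of its numbers are illustrative assumptions, not Lusseau’s data or code; the point is only to show how random versus targeted removal is compared:
</p>

```python
# Illustrative sketch: compare how random vs. targeted removal of
# well-connected nodes affects path lengths in a small toy network.
# (Invented example network -- not Lusseau's actual data or algorithm.)
import random
from collections import deque

def bfs_dists(adj, src):
    # Shortest-path distances (in hops) from src via breadth-first search.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameter(adj):
    # Longest shortest path; if the graph is disconnected, this is the
    # diameter of the largest connected component (BFS never crosses components).
    return max(max(bfs_dists(adj, s).values()) for s in adj)

def remove(adj, gone):
    # Return a copy of the network with the given nodes (and their ties) culled.
    return {u: {v for v in nbrs if v not in gone}
            for u, nbrs in adj.items() if u not in gone}

def build_ring(n=60, k=2, shortcuts=15, seed=1):
    # Ring of n nodes, each tied to its k nearest neighbors, plus random shortcuts.
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for _ in range(shortcuts):
        a, b = rng.sample(range(n), 2)
        adj[a].add(b)
        adj[b].add(a)
    return adj

adj = build_ring()
n_cull = len(adj) // 5  # cull 20 percent of the network
rng = random.Random(2)
random_cull = set(rng.sample(sorted(adj), n_cull))
hubs = set(sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:n_cull])

print("original diameter:    ", diameter(adj))
print("after random cull:    ", diameter(remove(adj, random_cull)))
print("after hub-targeted cull:", diameter(remove(adj, hubs)))
```

On such toy networks, removing the best-connected nodes typically lengthens paths more than removing the same number of nodes at random, mirroring Lusseau’s finding.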
<p>
	As he came to know his dolphins better, Lusseau discovered that some individuals in Doubtful Sound give signals that affect group movement associated with finding new resources, including food. Side flopping, in which a dolphin leaps from the water and lands on its side, is seen only in males when they initiate a move to a new location, while upside-downing, in which an individual rolls onto its ventral side and slaps the water to signal an end to a group move, is seen almost exclusively in females. But only a few males do all the side flopping, and only a few females do all the upside-downing. Lusseau wanted to know if a network analysis would shed light on exactly which males and which females. It did. Males initiating and females terminating travel had higher betweenness—they were key hubs in this traveling/foraging network—than their non-signaling counterparts.
</p>
<p>
	In a few populations of bottlenose dolphins on the other side of the planet, in Brazil, signaling and networking are not <em>sometimes</em> about feeding opportunities—they are <em>always</em> about that. And the dolphins have, rather remarkably, added humans to their feeding networks.
</p>
<p>
	For more than three decades, ethologist Paulo Simões-Lopes has been studying dolphin populations in the lagoon systems along the coastline near Laguna, Brazil, about 800 kilometers south of São Paulo. The dolphins in nine populations along that stretch do something that no other dolphins—and almost no other animals, period—do. They not only network with each other, but cooperate with humans to secure more food for both themselves and their primate partner.
</p>
<p>
	Each autumn, a huge mullet migration takes place in southern Brazil. Both the dolphins and the fishermen see the fish as prize prey. Up to fifty fishers, wading waist deep in very cold water, wait for the chance to cast large circular nylon nets called tarrafa over schools of mullet. The problem for the fishers is that the water is murky, and it is next to impossible to see the fish. The problem for the sixty or so dolphins at Laguna is that compared to their other prey, mullet are large and hard to catch. But dolphins aren’t especially troubled by murky water, as they detect mullet using echolocation, a built-in sonar system that would be the envy of most engineers.
</p>
<p>
	Dolphins produce sound waves in their nasal sacs and focus those waves through fatty tissue and fluid in their foreheads. Once the sound waves are shot out into the water, they travel until they bump into an object, at which point they bounce back to the dolphins, who use their lower jaw as a receiver. From the lower jaw, the waves travel to the inner ear and then to the brain. Objects of different sizes and densities reflect back sound waves of different frequencies, and the dolphins use that information to “see” what is in the water around them. When their sonar detects mullet, dolphins signal fishers that the fish are present by curving their backs and then slapping their heads or their tails on the water surface. The fishers then cast their tarrafa and pull in loads of mullet. The confused mullet who escape the tarrafa often swim right into the mouths of waiting dolphins. It’s the perfect win-win situation.
</p>
<p>
	Laguna newspapers from the late 1890s featured articles about this dolphin-human mutualism, and so Simões-Lopes knows that, at the very least, it has been going on for more than 130 years. And though many dolphins don’t signal fishers, every fisher knows which dolphins do. “It is famous [in southern Brazil],” Simões-Lopes says. “I grew up watching those dolphins … I would sit on a rock in the canal and watch for hours. I knew it was unusual … I knew there were dolphins in a big harbor farther south where dolphins and fishermen don’t interact.”
</p>
<p>
	Today Simões-Lopes has a team of ten working with him, but he began on his own in 1988. Soon thereafter, he entered a PhD program and built his dissertation around his research on the dolphin-human foraging mutualism. Each day he brought a folding chair with him and set it up on a rock, watching the dolphins through his binoculars, taking photos—he had compiled a mug book with photos of all the dolphins in the lagoon—and filling notebook after notebook with data on dolphins signaling fishers.
</p>
<p>
	Simões-Lopes began to know the fishers, and they began to know him. He also was starting to get a good feel for which dolphins at Laguna signaled the fishers and which did not. Not surprisingly, the fishers also kept tabs, telling Simões-Lopes about the “good dolphins” (who signaled fishers) and the “bad dolphins” (who did not). The fishers know not only which dolphins signal, but which dolphin will give which signal: “Each dolphin gives the signal in a different way,” one fisher said, “and we need to know [the different signals] in order to catch the fish.” Another fisher was more of a romantic, telling Simões-Lopes and his colleagues, “This is beautiful. It doesn’t happen everywhere.”
</p>
<p>
	The more that Simões-Lopes thought about those “good” dolphins and “bad” dolphins, the more he wanted to understand them better. Years later Mauricio Cantor joined Simões-Lopes’s team; Cantor had worked with Hal Whitehead, a leader in early social network analysis. Simões-Lopes and Cantor decided that a network analysis might help them delve deeper into the between-species cooperation they observed on a daily basis. In 2008, they contacted David Lusseau, who had done the network studies on bottlenose dolphins in New Zealand, and asked if he would be interested in serving as a sort of conceptual consultant specializing in social networks. Lusseau was more than happy to join their team.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.3/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n3_lg.jpg" alt="Skeptic 29.3 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.3<br />
		<a href="https://www.skeptic.com/productlink/magv29n3">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/267337#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	Simões-Lopes and his team assumed dolphins learn how to signal humans from other signalers they associate with, so for their social network analysis, they were especially interested in whether signaling dolphins preferred spending time with other signaling dolphins, both when they were chasing mullet into nets and, just as importantly, when they were not. To test whether there were cliques of signalers and cliques of dolphins who didn’t signal, Simões-Lopes’s team looked at clustering coefficients of sixteen cooperators and nineteen dolphins who did not signal and cooperate with fishers.
</p>
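<p>
	The clustering coefficient the team relied on has a simple definition: for each dolphin, it is the fraction of that dolphin’s associates that also associate with one another, so tight cliques score high. The toy association network below is invented for illustration and is not the Laguna data:
</p>

```python
# Illustrative sketch of the local clustering coefficient used in social
# network analysis. (Toy example network -- not the Laguna study's data.)
from itertools import combinations

def clustering(adj, u):
    # Fraction of pairs of u's neighbors that are themselves connected.
    nbrs = adj[u]
    k = len(nbrs)
    if k < 2:
        return 0.0  # fewer than two associates: coefficient is undefined/zero
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return links / (k * (k - 1) / 2)

# Hypothetical associations: dolphins 0, 1, 2 form a tight trio;
# dolphin 3 is only loosely attached through dolphin 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(clustering(adj, 0))  # 1.0 -- both of 0's associates associate with each other
print(clustering(adj, 2))  # 1/3 -- of 2's three associates, only the pair 0-1 is linked
```

Averaging this value over the members of a candidate subgroup is one simple way to quantify how clique-like that subgroup is.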
<p>
	What they discovered were three cliques within the larger network of the thirty-five dolphins. Clique 1 had fifteen dolphins: each and every one of them cooperated with the local fishers. Dolphins in this clique associated with one another not just during the autumn mullet fishing season but the rest of the year as well. Clique 2 had a dozen dolphins, <em>none</em> of whom cooperated with fishers, and dolphins in this clique were not as well connected to one another as the individuals were in Clique 1. Clique 3 was made up of eight dolphins: seven never cooperated with fishers, but one—dolphin 20—did. And of all thirty-five dolphins in the network, it was dolphin 20 who spent the most time interacting <em>across</em> cliques, acting as what Simões-Lopes and his colleagues call a “social broker” between the signalers and non-signalers.
</p>
<p>
	This behavior is all wonderfully complex, and we humans—and I don’t just mean the artisanal fishers of Laguna—should be grateful to play a role in understanding it. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<p>
		Excerpted and adapted by the author from <em>The Well-Connected Animal: Social Networks and the Wondrous Complexity of Animal Societies</em> by Lee Alan Dugatkin, published by The University of Chicago Press. © 2024 by Lee Alan Dugatkin. All rights reserved.
	</p>
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Lee Alan Dugatkin</strong> is an evolutionary biologist and a historian of science in the Department of Biology at the University of Louisville. He is the author of sixteen books and more than 200 articles in such journals as <em>Nature</em>, <em>The Proceedings of the National Academy of Sciences</em>, and <em>The Proceedings of the Royal Society of London</em>. Dr. Dugatkin is a contributing author to <em>Scientific American</em>, <em>The American Scientist</em>, <em>The New Scientist</em>, and <em>The Washington Post</em>. His latest book is <em>The Well-Connected Animal: Social Networks and the Wondrous Complexity of Animal Societies</em>.
	</p>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Lost Bird</title>
		<link>https://www.skeptic.com/reading_room/great-auk-extinction/</link>
		
		<dc:creator><![CDATA[Gísli Pálsson]]></dc:creator>
		<pubDate>Fri, 25 Oct 2024 18:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[Anthropocene]]></category>
		<category><![CDATA[biodiversity loss]]></category>
		<category><![CDATA[bird conservation]]></category>
		<category><![CDATA[de-extinction]]></category>
		<category><![CDATA[endangered species]]></category>
		<category><![CDATA[extinct species]]></category>
		<category><![CDATA[genetic cloning]]></category>
		<category><![CDATA[great auk]]></category>
		<category><![CDATA[species extinction]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45366</guid>

					<description><![CDATA[Gísli Pálsson reflects on the extinction of the great auk, a flightless bird hunted to extinction in 1844, and examines its historical, scientific, and ethical significance. Through personal narrative, museum artifacts, and the quest for de-extinction, the essay raises questions about humanity’s role in species extinction and potential resurrection efforts.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">T</span>he great auk (<em>Pinguinus impennis</em>) displayed in the Natural History Museum of Denmark stands erect on its pedestal, its great beak jutting forward, apparently fearless. It is possessed of a certain dignity and grace. It demands my attention. It was probably killed off in Iceland, where I come from, and was one of the last of its kind. For thousands of years, these large, flightless birds swam the extensive waters of the North Atlantic and made their nests on islands and skerries, where each pair laid and incubated a single, uniquely patterned egg per year. According to most accounts, the last of the great auks were slaughtered on Eldey, an island off the southwest coast of Iceland, in June 1844. About eighty taxidermic examples of great auks exist in various museum collections, and most of them came from Eldey.
</p>
<p>
	Alongside the great auk displayed in Copenhagen are four large glass jars. One is labeled: Iceland 1844, ♂. These jars contain the viscera of great auks killed on that famous (or infamous) expedition to Eldey. These are not all the birds’ organs; some are stored in another seven jars elsewhere in the museum, out of the public eye, along with another stuffed great auk. At my request, a museum guide takes me to see this second bird. It is posed somewhat differently than the one on display. Its beak is open, as if ready to address the visitor. Unlike the first bird’s stark black-and-white plumage, this one looks grayish and rather dull. I am told it is a true rarity; it is in winter plumage, while most great auks were captured while breeding, in early summer. Perhaps this second bird was caged alive and slaughtered in winter. Perhaps it was kept as a pet for some months, like the great auk owned by the Danish polymath Ole Worm (1588–1654), one of the leading figures of the Nordic Renaissance. Worm personally owned three great auks, one of which he sometimes walked on a leash, and he made a fine drawing of it before adding it—stuffed—to his <em>Wunderkammer</em>, or cabinet of curiosities, a precursor to the modern museum.
</p>
<p>
	In its imposing old building in Copenhagen, only a fraction of the Danish museum’s “curiosities” are on display. In full, the collection comprises millions of animals from around the globe, and boasts exemplars of several species that have become extinct in recent centuries—such as a well-preserved skull of a dodo (<em>Raphus cucullatus</em>)—as well as fossils of dinosaurs and other organisms from previous eras of the earth’s history. Here, in this old and venerable museum, it is easy to detect the ideas that lay behind the collecting of natural objects for the past three and a half centuries. The need was perceived to educate the populace of various European nations, whose empires extended around the world, about the progression of time and about their place in the expanding universe. Such collections demonstrated the might and extent of each empire, and the value of research: all things can be named, catalogued, and categorized systematically.
</p>
<p>
	Is such an approach still valid in our current era, now termed the Anthropocene, or Human Age? In our time, the “natural” habitat of the planet has been radically refashioned by humans. Vital links between species, developed over eons, have been severed swiftly, fundamentally impoverishing the living world and posing a serious threat of the mass extinction of many species. How, I wonder, can such a process possibly be cataloged or categorized, given the speed of change and the complexities involved—and what would be the point?
</p>
<p>
	The bird species that no longer exist had, and still have, a special attraction. They have much to teach us.<span id="more-45366"></span>
</p>
<h4>
	Extinction<br />
</h4>
<p>
	I never saw a great auk growing up in Iceland, a land where they had once been quite common. Neither did the nineteenth-century British naturalists John Wolley and Alfred Newton.
</p>
<p>
	Like their contemporaries, Wolley and Newton busily collected birds’ eggs and specimens, classifying and recording them in the fashion of the Victorian age. When they set off for Iceland in 1858, they hoped to visit Eldey Island and study the rare great auk. They hoped to observe its behavior and habits and, perhaps, bring home an egg, or a skin, or a stuffed bird or two for their own cabinets of curiosities—unaware of the fact that the species had already been hunted to extinction. When they left Victorian England for Iceland, they teased that this was a “genuinely awkward expedition.” And so it proved to be, in many ways. They never made it to Eldey. Like me, they never saw a great auk on Iceland, not even a stuffed one.
</p>
<p>
	Prior to the killing of the last great auks, extinction was either seen as an impossibility or trivialized as a “natural” thing. The great taxonomist Carl von Linné, or Linnaeus (1707–78), imagined that a living species could never disappear; for evolutionary theorist Charles Darwin (1809–82), species would naturally come and go in the long history of life. The great auk brought home the fact that a species could perish quite quickly and, moreover, not naturally, but primarily as a result of human activities. No other extinction had been documented as carefully.
</p>
<p>
	During their historic expedition to Iceland in 1858, Wolley and Newton collected impressions of great auk hunting, through substantial interviews with the men who took part in the latest hunts and the women who skinned and mounted the birds, along with their prices and sales on foreign markets to collectors of “curiosities.” These impressions were preserved in the set of five handwritten notebooks Wolley titled the <em>Gare-Fowl Books</em>. Now archived in Cambridge University Library in England, their hundreds of pages are written in several languages (English, Icelandic, Danish, and German). As an anthropologist and an Icelander, once I had seen the <em>Gare-Fowl Books</em>, there was no turning back: I had to dive into the text and visit zoological museums and archives. For me, the great auk opened an intellectual window into ideas of extinction and their relevance to the current mass disappearance of species.
</p>
<h4>
	De-extinction<br />
</h4>
<p>
	Many sightings of great auks were reported after 1844 on North Atlantic skerries in Iceland (1846, 1870), Greenland (1859 or 1867), Newfoundland (1852, 1853), and northern Norway (1848). Some of the reports were certainly apocryphal: people had mistaken another species for a great auk, or had seen what they wanted to see. Others were deemed credible and were probably true: evidence of a few dispersed pairs of birds continuing to breed on islands or skerries for a few years. Such tales were often unjustly dismissed, and unnecessarily strict standards of proof and corroboration were applied. The consensus among scholars today seems to be that the last living great auk was seen off Newfoundland in 1852.
</p>
<p>
	Once it seemed clear that the last great auks were dead, museums and collectors around the world scrambled to acquire skins, eggs, and bones of the extinct bird. The Victorian obsession with collecting was past its peak, but anything relating to the great auk remained a prize. There are some eighty stuffed great auks in collections around the world, and an unknown number of preserved skins and viscera. Only about twenty-four complete skeletons exist, while thousands of loose bones (some with knife marks) are kept in museum collections. The skeletons do not have the visual appeal of the stuffed birds, mounted to look so lifelike in their full plumage. However, the bones—what Wolley and Newton termed “relics”—tell a long and complex story of their own. And there are about seventy-five great auk eggs believed to be extant today, the vast majority being documented and numbered.
</p>
<p>
	Now and then over the years, various species have been said to reappear suddenly, after having been thought long exterminated. Several birds have been confirmed to be such so-called “Lazarus species,” including the Bermuda petrel (<em>Pterodroma cahow</em>), which scared Spanish explorers away with their eerie calls. Considered extinct for three centuries, it was rediscovered on one of the Bermuda Islands in 1951. Similarly, the flightless takahē (<em>Porphyrio hochstetteri</em>) of New Zealand, which was declared extinct late in the nineteenth century, reappeared in 1948. In recent years, with intensive searching, social media, and growing awareness of the threat of mass extinction, such reports have escalated. However, the possibility of any surviving great auk “Lazarus” can be ruled out.
</p>
<p>
	Charles Darwin made the point that species swept away by history would not return. They were gone for good. In <em>On the Origin of Species</em>, he wrote: “We can clearly understand why a species when once lost should never reappear, even if the very same conditions of life, organic and inorganic, should recur.” This has long seemed blindingly obvious. No doubt many people have wondered why Darwin saw reason to state it at all. Yet his words were perhaps necessary at the time. The meaning of extinction had not yet been fixed, and Darwin may well have felt it was time to dispel the fantasy regarding the resurrection of species.
</p>
<p>
	Alfred Newton, by contrast, entertained the idea that extinction processes could be reversed. And in our own time, discussions of the renaissance, even resurrection, of species are taken for granted—as if Bible stories and the natural sciences had coalesced into one, after centuries of enmity and conflict. Will we live to see the resurrection of <em>Pinguinus impennis</em>? Might genetics and cloning do the trick?
</p>
<p>
	In the spring of 2015, a group of like-minded individuals met at the International Centre for Life in Newcastle, England, to discuss the possible reanimation of the great auk. The meeting was attended by more than twenty people, including scientists and others interested in bird conservation. They addressed the principal stages of “de-extinction,” from the sequencing of the full genome of the extinct animal to the successful releasing of a proxy animal population into the wild. They were interested in resurrecting the great auk quite literally, to see it thrive once more, in zoos or even on the skerries and islands of the North Atlantic.
</p>
<p>
	Thomas Gilbert, a geneticist at the University of Copenhagen who has sequenced the great auk genome, was one of the scientists who attended. The de-extinction of a species, however, has proved to be a more complicated issue than was originally anticipated—both technically and ethically. Gilbert pointed out that a re-created species can never be exactly like the original, and that the question must be asked: What counts as “near enough”—ninety-five percent, ninety, …? If the element that is lacking, though it may only account for a few percent of the genome, turns out to be crucial, and makes it harder for a re-created species to survive or to reproduce, nothing will have been gained. A re-created great auk that could not swim, for instance, would not be “near enough.” Likewise, a great auk capable of flight might be “way too much.” For most people, whatever the species concept to which they subscribe—and there remains a thriving philosophical debate on that subject—a flying bird would hardly qualify as a legitimate member of the great auk species.
</p>
<p>
	Yet a substitute bird that could swim would be welcomed by many, as it might fill in the large gap left by the great auk’s extinction. A substitute species might contribute to the rewilding of the oceans, a task that has barely begun; indeed “the underwater realm has been trailing behind its terrestrial counterparts.” Interestingly, this idea echoes Philip Henry Gosse’s historic aquaria project, reversing the arrows, from land to sea, and operating on a much larger scale. The grand aquarium of the planet’s oceans, including the recently discovered seabirds’ hotspot in the middle of the North Atlantic, or so the idea goes, could be repopulated by relatively large charismatic animals, territorially raised and later released into the oceans, where they would be managed and monitored by human divers. Gosse would be amused.
</p>
<p>
	The expense of such de-extinction is high, however, and it is hard to decide which species should have priority: the mammoth? the dodo? the great auk? or perhaps one of the numerous species of tiny snails that rarely generate human concern? It’s tempting, and productive, to focus on tall birds and charismatic megafauna, but invertebrates such as snails and insects, which make up most of the animal kingdom (perhaps 99 percent), deserve attention too. In the Anthropocene, this age of mass human-caused extinctions, the selection of species is clearly an urgent, but difficult, concern. The re-creation of the great auk assuredly has symbolic significance, not least in light of the attention the species has garnered from both scholars and the public since its demise. The excessive prices that great auk remains now command are significant too.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.3/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n3_lg.jpg" alt="Skeptic 29.3 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.3<br />
		<a href="https://www.skeptic.com/productlink/magv29n3">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/267337#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	In January 2023, a great auk egg sold for $125,000 at Sotheby’s. But bringing the bird back to life is a gigantic challenge, if not an impossible one. Perhaps the funds that would be spent on the de-extinction of the great auk might be better spent elsewhere. Nor should we overlook the Law of Intended Actions, Unintended Consequences.
</p>
<p style="text-align: center; color: #bbb;">
	• • • • • •
</p>
<p>
	Now that I know the great auk’s long history, I feel as if the stuffed birds in the Copenhagen museum were once my neighbors or acquaintances. As a scientist, I know that their viscera are stored in alcohol to preserve them and to enable people to study them. Still, I wonder whether the organs are in a constant state of inebriation from the alcohol, existing beyond the bounds of real time in a sort of euphoric oblivion. Generations of visitors, of all ages and many nationalities, have passed by these jars of preserved bird parts over the past century and a half. What observations did they take home?
</p>
<p>
	The hearts stored in one jar are no longer beating, but no doubt many visitors on my side of the glass have wondered, as I do, how they would have pulsed when the bird’s blood was still flowing—and whether they could be resuscitated, by electric shock or genetic reconstruction. The eyes of the last male great auk are kept in another jar. I see them staring, gazing into both the past and into my own eyes. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<p>
		This essay was excerpted and adapted by the author from <em>The Last of Its Kind: The Search for the Great Auk and the Discovery of Extinction</em>. Copyright © 2024 by Gísli Pálsson. Reprinted by permission of Princeton University Press.
	</p>
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Gísli Pálsson</strong> is professor emeritus of anthropology at the University of Iceland. He previously held positions in the Department of Anthropology at the University of Oslo, the Centre for Biomedicine &#038; Society at King’s College, London, and at the Rosenstiel School of Marine, Atmospheric, and Earth Science at the University of Miami. His books include <em>The Last of Its Kind: The Search for the Great Auk and the Discovery of Extinction</em>, <em>Down to Earth</em>, and <em>The Man Who Stole Himself</em>.
	</p>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Known Knowns</title>
		<link>https://www.skeptic.com/reading_room/from-clay-tablets-to-artificial-intelligence-review-of-knowing-what-we-know/</link>
		
		<dc:creator><![CDATA[Skeptic]]></dc:creator>
		<pubDate>Fri, 18 Oct 2024 17:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[digital age knowledge]]></category>
		<category><![CDATA[evolution of knowledge]]></category>
		<category><![CDATA[history of information]]></category>
		<category><![CDATA[history of information systems]]></category>
		<category><![CDATA[impact of AI on learning]]></category>
		<category><![CDATA[information overload]]></category>
		<category><![CDATA[knowledge preservation methods]]></category>
		<category><![CDATA[knowledge transmission]]></category>
		<category><![CDATA[modern technology and AI]]></category>
		<category><![CDATA[review]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45346</guid>

					<description><![CDATA[In her review of Knowing What We Know by Simon Winchester and Informatica by Alex Wright, Michelle Ainsworth explores how these books delve into the history of knowledge transmission, from ancient practices to modern technologies. The review highlights each book’s strengths in presenting interdisciplinary narratives, balancing compelling storytelling with in-depth analysis of how information systems have evolved to cope with today’s digital age.]]></description>
										<content:encoded><![CDATA[<p class="details">
	A review of <em>Informatica: Mastering Information Through the Ages</em> by Alex Wright (2023) and <em>Knowing What We Know: The Transmission of Knowledge, From Ancient Wisdom to Modern Magic</em> by Simon Winchester (2023)
</p>
<div class="imagefloatleft" style="width: 220px;">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-3/Simon-Winchester-quote-knowing.jpg" alt="Quotation from Simon Winchester" width="200" height="200" class="noBoxShadow">
</div>
<p>
	<span class="dropCap">C</span>an the history of how humans organize knowledge help us understand 21<sup>st</sup> century information overload? Two readable new books help us address these questions with interdisciplinary narratives: <em>Knowing What We Know: The Transmission of Knowledge: From Ancient Wisdom to Modern Magic</em> by Simon Winchester, and <em>Informatica: Mastering Information Through the Ages</em> by Alex Wright.
</p>
<p>
	To varying degrees and in slightly different ways, both books treat the history of information technologies as a helpful tool. Both cover the familiar chronology from the clay tablets and papyrus scrolls of ancient times, monks in the Middle Ages copying texts in their scriptoria, the 15<sup>th</sup> and 19<sup>th</sup> century technologies that made books cheaper and more common, the development of reference books, and the mid-20<sup>th</sup> century innovations leading to modern computers and the World Wide Web. Both books are also stimulatingly interdisciplinary, discussing many more historical topics than I’ve mentioned above, but also grounded in science and technology. After these similarities, the books diverge.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-3/cuneiform-tablet-31CE.jpg" alt="Cuneiform tablet Mesopotamia, circa 31 CE" width="200" height="182" class="noBoxShadow"> </p>
<p class="caption">
		Cuneiform tablet <br /> Mesopotamia, circa 31 CE
	</p>
</div>
<p>
	Although <em>Knowing What We Know</em> is rich in history, it is not chronological. It instead progresses from the learning of information (education) to the storing of knowledge (museums, libraries, and encyclopedias), and then to the dissemination of knowledge, concluding with a thoughtful discussion of the implications of new technologies, such as AI-based Large Language Models (LLMs). These topics are corralled by Winchester’s background in journalism and his grounding of each topic in precise examples.
</p>
<p>
	On education, for example, Winchester contrasts three striking 21<sup>st</sup> century cases. He vividly recalls the woman he interviewed who started a school in a poverty-stricken village in India. Those students’ joyous thirst for knowledge is contrasted against the high-stakes tension in China, where a single exam taken in students’ teenage years determines their job opportunities for the rest of their lives. Winchester’s third example of education is the most striking—that of an illiterate island group whose oral storytelling tradition saved them, alone, from a tsunami. <span id="more-45346"></span>
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.amazon.ca/Knowing-What-Know-Transmission-Knowledge/dp/0063142880/r?tag=skepticcom20-20"><img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-3/Knowing-What-We-Know-3D-cover.jpg" alt="Knowing What We Know (cover)" width="200" height="315" class="noBoxShadow"></a>
</div>
<p>
	Winchester progresses to knowledge summarized in encyclopedias, recalling his own love of them in his youth and summarizing the rise and cessation of the leading print encyclopedia of the 19<sup>th</sup> and 20<sup>th</sup> centuries, <em>Encyclopedia Britannica</em>. How better to illustrate the complex issues surrounding Wikipedia, the leading online encyclopedia, with its vast size and questions of reliability, than by Winchester’s own experience late in his research? He discovered there that a pioneer of internet technology was listed as having died, and learned of the correction the next morning on social media.
</p>
<p>
	And so it goes: Winchester focuses on a few extraordinary cases to illustrate each of his points. For the preservation of knowledge in museums, it is the remarkable story of the saving of museum treasures in China during political turmoil, and how the Chinese government has viewed this precious collection. Similarly, the rise of mass media is illustrated by the BBC because, Winchester notes, its style was influential in the development of radio news around the world. This flows naturally to the following chapter’s discussion of propaganda, focusing on the chilling example of the Nazis. His penultimate chapter is about polymaths and, finally, wisdom, focusing less on religion than on whether it was wise to drop the atomic bombs in 1945. The book concludes with the implications of ChatGPT and other new technology for our brains.
</p>
<p>
	Winchester has a remarkable ability to turn what could be a dry recitation of facts into a series of compelling stories, with numbered subsections in each chapter. The one time I felt that he could have used a copy editor was during his overly long digression on Krakatoa, the subject of one of his previous books, though he did make even this topic surprisingly relevant. In his hands, such meandering is usually done masterfully.
</p>
<p>
	Like a well-structured novel, all that came before leads Winchester to his conclusion. His fear is that technology, as currently progressing, can hurt our ability to think for ourselves. Characteristically, he illustrates this with a specific example: the complex skill set he stumbled through when his small boat needed to navigate toward land rather than be lost in the ocean in the days prior to GPS. Can people even read maps anymore? In one of the book’s few missed opportunities, he does not draw an extended parallel to those in Gutenberg’s era who (accurately) warned that if books were mass produced, people’s ability to remember vast amounts of knowledge would decline, which it did (the skill of modern mnemonists, such as the late Harry Lorayne, notwithstanding).
</p>
<div class="imagefloatleft" style="width: 220px;">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-3/Alex-Wright-quote-Informatica.jpg" alt="Quotation from Alex Wright" width="200" height="200" class="noBoxShadow">
</div>
<p>
	If Winchester’s book is grounded in concise case studies, Wright’s <em>Informatica</em> is grounded in science and in the history of structured systems for organizing knowledge. These merge when Wright discusses the biological classification scheme developed primarily by Carl Linnaeus, including an amusing anecdote involving Thomas Jefferson mailing the decaying body of a moose to acclaimed scientific theorist Comte de Buffon. Although science is mentioned several times in Winchester’s <em>Knowing</em>, Alex Wright’s <em>Informatica</em> opens with it, following the late biologist E.O. Wilson in speculating about the biological role of epigenetics in human knowledge transmission. Wright compares “networks and hierarchies” in the natural and the human worlds. He sees parallels between creations by groups that are unlikely to have communicated, such as the similarity between the plant taxonomies created by Western scholars and those formed through oral tradition in other societies.
</p>
<p>
	Using more traditional evidence, Wright explicitly links the Linnaean classification scheme to the development of librarians’ attempts to organize books, culminating in the Dewey Decimal System at the turn of the 20<sup>th</sup> century. He appropriately refers to this 19<sup>th</sup> century arc as “the industrial library”: the creation of more elaborate organizational schemes was demanded by vastly increased numbers of published books, which were in turn enabled by new technology.
</p>
<p>
	Successive chapters discuss early to mid-20<sup>th</sup> century utopian information-sharing projects using then-existing technology, such as index cards and telegraphy, and the briefly famous Mundaneum (an institution that aimed to gather together all the world’s knowledge and classify it according to a system called the Universal Decimal Classification). In <em>Informatica</em>, Wright’s discussion of these utopian schemes does not flow as well as it could, the reader being left to make the connections.
</p>
<p>
	Worse, Wright’s extended history of the developments leading to the modern internet is shoehorned into a subsection of the revised “Web That Wasn’t” chapter as “The Web That Was.” This combination of topics in the same chapter was tenable in <em>Glut</em>, but in <em>Informatica</em> the subsection discusses so many people and inventions, all of whose work made the World Wide Web possible, that it should have been a new chapter. Finally, Wright recycled some of his earlier writing and did not update it, such as referring to CD-ROMs and America Online (AOL) as leading technologies. This could have been fixed easily.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.amazon.ca/Informatica-Mastering-Information-through-Ages-dp-1501768670/dp/1501768670/r?tag=skepticcom20-20"><img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-3/Informatica-3D-cover.jpg" alt="Informatica (cover)" width="200" height="306" class="noBoxShadow"></a>
</div>
<p>
	That said, the narrative in <em>Informatica</em> is more clearly chronological than in <em>Knowing What We Know</em>, but Simon Winchester is so skilled a writer that his book is generally a smoother narrative despite being more episodic. The exception is the book’s outline: I was halfway through the book before realizing that its main chapters had a logically progressive sequence to them, from data acquisition to information display to the uses of knowledge and finally to wisdom. Winchester could have made this clearer earlier in the book with just a few words.
</p>
<p>
	One side topic bears noting: Winchester said in at least two media interviews that his discussion of the racism found in a leading mid-century encyclopedia was edited out of the published version of <em>Knowing What We Know</em>, on the grounds that it would be too controversial or offend too many of his readers. Perhaps it would have, but its inclusion would have been valuable, partly for highlighting the important point that even the most well-respected reference materials can be wrong. While it can be argued that this is excusable because <em>Knowing</em> is not written by an academic scholar, a similar edit was also made in a book by Yale historian Beverly Gage, <em>G-Man</em>, (which I reviewed in an earlier issue of <em>Skeptic</em>), with pages 62–63 twice leading the reader to guess, but never know for sure, which apparently offensive word is represented. The criticism that only elite scholars know about the history of racism will become a self-fulfilling prophecy if that history is not included in popular books.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.3/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n3_lg.jpg" alt="Skeptic 29.3 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.3<br />
		<a href="https://www.skeptic.com/productlink/magv29n3">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/267337#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	On the other hand, <em>Informatica</em> and <em>Knowing What We Know</em> both have problems with the wording of their titles, and with such vast topics, it would be easy to quibble with decisions on which topics to focus. I wonder if <em>Informatica</em>’s new title could make readers think they are getting a wholly different book, rather than an update of <em>Glut</em> (originally published in 2007), with uneven revisions and only a chapter’s worth of new material? In <em>Knowing What We Know</em>, it’s the last third of the subtitle (“From Ancient Wisdom to Modern Magic”) that could mislead: in other contexts the phrase “Ancient Wisdom” has often referred to religious traditions, but here it seems to refer to ancient writing generally, and the book’s late discussion of wisdom is not primarily about religion.
</p>
<p>
	The important point shared by <em>Knowing What We Know</em> and <em>Informatica</em> is that greater access to information also presents challenges. <em>Informatica</em> is more theoretical and historical, <em>Knowing</em> being more a historically informed snapshot of our present. Both are stimulating and both are informative. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Michelle Ainsworth</strong> holds an MA in History and she is currently researching the cultural history of stage magic in the United States. She is a humanist and lives in New York City.
	</p>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Expelled! The True Story Behind Ben Stein’s Anti-Evolution/Pro-Intelligent Design Film Expelled: No Intelligence Allowed</title>
		<link>https://www.skeptic.com/reading_room/backstage-insights-into-ben-steins-anti-evolution-documentary-expelled/</link>
		
		<dc:creator><![CDATA[Kevin Miller]]></dc:creator>
		<pubDate>Fri, 11 Oct 2024 14:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[academic censorship]]></category>
		<category><![CDATA[anti-evolution film]]></category>
		<category><![CDATA[Ben Stein documentary]]></category>
		<category><![CDATA[evolution]]></category>
		<category><![CDATA[evolution vs. intelligent design]]></category>
		<category><![CDATA[evolutionary theory suppression]]></category>
		<category><![CDATA[ID movement criticism]]></category>
		<category><![CDATA[intelligent design debate]]></category>
		<category><![CDATA[science and religion]]></category>
		<category><![CDATA[science freedom discussion]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45284</guid>

					<description><![CDATA[Expelled: No Intelligence Allowed revealed a supposed conspiracy to suppress Intelligent Design (ID) within academia. The article reflects on the film’s production, its controversial use of deception, and the author's shift from supporting ID to embracing open scientific inquiry. It emphasizes the importance of dialogue over division in the science versus religion debate.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">T</span>he 2008 documentary film <em>Expelled: No Intelligence Allowed</em> was released to widespread media coverage and hype. Starring Ben Stein—a conservative commentator, actor, and former speechwriter for Presidents Richard Nixon and Gerald Ford—the film argued there was a conspiracy within academia to censor Intelligent Design (ID) and to cover up evidence that belief in evolutionary theory led to everything from atheism to the Nazi Holocaust. <em>Expelled</em> opened in over 1,300 theaters and earned nearly $8 million. In addition to ID theorists, the film included interviews with noted proponents of evolutionary theory such as Richard Dawkins, Eugenie Scott, Christopher Hitchens, and Michael Shermer.
</p>
<p>
	As the film’s co-writer, I was part of the crew that came to the <em>Skeptic</em> magazine office to interview Michael Shermer. Here is how he described his experience with us:
</p>
<blockquote>
<p>
	Ben Stein came to my office to interview me about what I was told was a film about “the intersection of science and religion” called <em>Crossroads</em> (yet another deception). I knew something was afoot when his first question to me was on whether or not I think someone should be fired for expressing dissenting views. I pressed Stein for specifics: Who is being fired for what, when, and where? In my experience, people are usually fired for reasons having to do with budgetary constraints, incompetence, or not fulfilling the terms of a contract. Stein finally asked my opinion on people being fired for endorsing Intelligent Design. I replied that I know of no instance where such a firing has happened.
</p>
<p>
	This seemingly innocent observation was turned into a filmic confession of ignorance when my on-camera interview abruptly ends there, because when I saw <em>Expelled</em> at a preview screening at the National Religious Broadcasters’ convention (tellingly, the film is being targeted primarily to religious and conservative groups), I discovered that the central thesis of the film is a conspiracy theory about the systematic attempt to keep Intelligent Design creationism out of American classrooms and culture.
</p>
</blockquote>
<p>
	Although I was in agreement with the film’s agenda at the beginning, throughout the long production process, my feelings about the project and the ID movement underwent a significant shift. But I stayed on board in the hope of providing a counterbalance to the producers’ desire to create what amounted to a piece of pro-ID propaganda. I eventually realized, however, that whoever controls the money controls the point of view, so there was only so much that I could do as a fledgling screenwriter. <span id="more-45284"></span>
</p>
<p>
	In the years since <em>Expelled</em> came out, the transformation of my views has continued apace, so I wrote to Shermer to apologize for the damage the film did and the duplicitous circumstances under which some of our interviews were obtained. In response, he invited me to write an article describing my experience on <em>Expelled</em> as well as my subsequent reflections on the ID movement and the larger issue of the relation between science and religion.
</p>
<p style="text-align: center; color: #bbb;">
	• • • • • •
</p>
<p>
	During the two and a half years I spent working on <em>Expelled</em>, one of the key dynamics I observed was how bitterly divided people were over the notion of ID as a concept, and even more so as a movement. After reading countless books and articles on the subject and participating in interviews with people on all sides, I realized that no matter which way one approached the topic of Darwinian evolution versus intelligent design—and by extension, science and religion—the individuals on the frontlines were virtually all highly intelligent people of goodwill. Unfortunately, some of the leading voices were also exceedingly argumentative by nature. Thus, rather than engage in dialogue that sought to establish common ground and then work together to build bridges toward truth, interactions between the ID movement and its critics often amounted to one side lobbing a verbal grenade at the other and then hunkering down in the trenches as it exploded, all the while chuckling about how foolish the folks on the other side were. Rather than emulate that spirit, I decided I would try to engage my critics in constructive conversation. I wanted to see if it was possible to cross no man’s land and find some sort of common bond with the “enemy.”
</p>
<p>
	Over the several weeks leading up to the film’s release, I did exactly that, spending hours each day engaged with people on my personal blog and other online forums. Despite my legitimate desire to conduct a meaningful dialogue with my opponents, my efforts were often met with an unrelenting wall of bitterness and sarcasm. Perhaps not surprisingly (considering the relentless barrage of abuse) and despite my good intentions, I occasionally succumbed to a similar rhetorical approach, adding a heavy dose of sarcasm to my own barbed responses. Even so, I was truly seeking to abide by motivational speaker Stephen Covey’s “highly effective” habit number five: seek first to understand, then to be understood.
</p>
<p class="pullquote">
	I hoped the film could do away with the need to “win” the debate over ID one way or the other, and instead unite these contending minds around their mutual desire to move science forward.
</p>
<p>
	Over time, I recognized a pattern across the various responses that I received, one that matched a famous quote by Richard Dawkins: “It is absolutely safe to say that if you meet somebody who claims not to believe in evolution, that person is ignorant, stupid, or insane (or wicked, but I’d rather not consider that).” On the surface, this sounds like an incredibly arrogant thing to say, relegating one’s opponents to varying levels of intellectual inferiority, insanity, or iniquity. But as I thought about it, I realized that’s how many of us treat those who don’t share our beliefs. When we encounter someone who disagrees with us, at first we assume they simply don’t know what we know, so we attempt to educate them. If that fails, we may briefly entertain the idea that the person is incapable of understanding what we recognize as truth. But if they display a reasonable level of intelligence, we seem to be left with only two options: either they know what we know to be true, and they’re purposely suppressing or obscuring that information (which puts them in the wicked category), or they’re so out of touch with reality that they’re a lost cause.
</p>
<p>
	This was exactly the continuum I found myself traveling along with my neo-Darwinian debating partners. While, in their minds, I made a brief stop at “ignorant,” once I demonstrated that I was reasonably well informed on the relevant issues, they quickly shuffled me into the “wicked” category, with brief stopovers at “stupid” and “insane.” Their favorite name for me was “liar,” which I found frustrating because, despite how one might interpret the rhetorical position of <em>Expelled</em>, a film in which I had authorial influence but no editorial control, I wasn’t trying to be deceptive at all. I was sincerely seeking the truth, not claiming to have it.
</p>
<p>
	In retrospect, though, I empathize with my opponents’ frustration. My stubborn refusal to concede my views probably led them to believe their efforts to correct my faulty thinking were in vain. As it turned out, it was the opposite. Even though I was championing a documentary that many regarded as contrary to science and truth in general, cracks had begun to form in my own beliefs about the ID movement and the branch of evangelical Christianity to which I had converted as a child.
</p>
<p>
	The process began about six years before I signed on to <em>Expelled</em> when I took a class on Science and Christianity at Regent College (a seminary in Vancouver, BC) co-led by historians Mark Noll and David N. Livingstone, author of <em>Darwin’s Forgotten Defenders</em>. That class served as a rebuttal to the commonly held belief that evolution and Christianity must be inherently at odds. As Livingstone outlines in his book, the initial Christian response to Darwin’s theory was characterized by accommodation rather than confrontation. Rather than refute Darwin’s theory, many theologians focused on harmonizing evolution with the notion of divine design instead. It wasn’t until the rise of Christian fundamentalism in the early 20th century—which lumped evolutionary theory together with higher criticism and other attacks on a literal approach to the Bible—that a split between evolutionary science and some branches of Christianity developed.
</p>
<p>
	Noll and Livingstone’s class triggered a desire to go deeper into the subject, leading me to focus on epistemology in general and the philosophy of science in particular. I was fascinated by the concept of warranted belief and the reliability of belief-producing processes. Are humans capable of discerning truth? If so, how? Does objective truth even exist? If so, is it possible to know it?
</p>
<p>
	While my belief in God was still relatively intact at that point, by the time we started development on <em>Expelled</em> in late 2005, the epistemological ground beneath me had shifted. I don’t recall when it was exactly, but sometime over the next six months, I was in a coffee shop doing research for the film when I ran into my pastor and confessed that I no longer believed in Satan, angels, or demons. I can still clearly recall the look of deep disappointment on his face.
</p>
<p>
	My confession was as much a revelation to me as it was to him. I can’t point to any one thing that led to that conclusion, but by then I had steeped myself in the writings of those at the forefront of the fight against ID, including Daniel Dennett, Richard Dawkins, Michael Shermer, Kenneth Miller, Michael Ruse, Eugenie Scott, and Sam Harris. I had also read and interacted with several leading proponents of the ID movement, including Stephen Meyer, David Berlinski, William Dembski, Philip E. Johnson, and Michael Behe. Altogether, the more my understanding of the relevant science grew, the less work seemed to be required of God to create the universe. One could always point to some gap in our knowledge and claim God’s handiwork was to be found there, even though the history of science appeared to be one long, inexorable march toward shining a light into those very gaps, revealing not God but the same natural processes that we observe today, and removing the need to resort to any sort of divine intervention as a cause.
</p>
<p>
	To my way of thinking, that didn’t necessarily negate the concept of God or some sort of guiding intelligence in the universe. However, even if such a being existed, it seemed the most one could say was that “life, the universe, and everything” were the product of secondary rather than primary causes. God may have created the scale by which all things are measured, but apart from a few moments where a nudge in the right direction was required, his finger was never on it.
</p>
<p>
	This put me in an ideal frame of mind to accept the primary claims of the ID movement. Many proponents of ID accept most aspects of the neo-Darwinian synthesis, agreeing that the majority of what we observe in the universe is the product of secondary causes. However, while ID proponents agree that natural selection can account for relatively minor changes within species, they argue that it is wholly inadequate when it comes to explaining the origin of new species or of life itself, not to mention the origin of the universe. Not only do ID proponents believe life is too complex to be attributed to “blind” natural causes, they also argue that it is “irreducibly complex,” as Michael Behe puts it: “a single system which is composed of several well-matched interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning.” Such a system, they argue, could not possibly be the product of a gradual process because its function couldn’t be favored by selection until all the pieces were in place.
</p>
<p>
	Furthermore, ID proponents such as William Dembski and Stephen Meyer argue for something called “specified complexity,” whereby if something exhibits both complexity and specificity (i.e., information), one must infer that it is the product of intelligence, given that intelligence is the only source of information in the universe of which we are aware. Hence, even if blind, natural processes could account for how that information is edited (something else that ID proponents dispute), such causes could not explain how that information arose in the first place, much less how the universe in which that information is processed came into being.
</p>
<div class="imageclearall">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-2/Richard-Dawkins-and-Ben-Stein-Natural-History-Museum-by-Kevin-Miller.jpg" alt="Richard Dawkins being interviewed by Ben Stein for Expelled, at the Natural History Museum in London" width="750" height="560" class="noBoxShadow"></p>
<p class="caption">
		Richard Dawkins being interviewed by Ben Stein for <em>Expelled</em>, at the Natural History Museum in London
	</p>
</div>
<p>
	Of course, opponents of ID have rebuttals to each of these arguments. Primarily, as Richard Dawkins argued in <em>The God Delusion</em>, rather than end the argument regarding origins, proposing an intelligent designer to account for irreducible or specified complexity merely punts the ball down the field because such a designer would have to be the product of the same processes as the phenomenon the designer is invoked to explain. So, as Shermer articulates in <em>Why Darwin Matters</em>, if complexity necessitates an intelligent designer, then there must be a super-intelligent designer, which itself necessitates a super-duper-intelligent designer, and so on in an infinite regress.
</p>
<p>
	Despite such objections to ID, I realized both sides of the debate faced the same sort of infinite regression when it came to explaining origins. Just as positing a designer merely postpones the problem, so does a purely materialistic point of view, with natural selection seemingly incapable of providing an account for how it came to be without invoking itself. The same goes for the seemingly immutable laws of nature within which natural selection operates. We have all sorts of theories for how these forces might have come into being and what holds them constant, but as for an ultimate explanation for the origin of the laws of nature, no one knows for sure. Accordingly, it appeared to me that on a philosophical level at least, ID’s proponents and its materialist critics were on equal footing. Each side was proceeding from a set of philosophical presuppositions about how the world came into being that cannot be proven, and each was at the same time equally certain that the other side’s philosophical presuppositions were wrong.
</p>
<p>
	To add another level of similarity, many individuals on each side claimed that their presuppositions were a scientific inference rather than a philosophical preference. That is, they insisted their axiomatic beliefs were a product of their scientific observations rather than something they brought to the table with them beforehand, only to have those beliefs consciously or unconsciously affect their scientific observations, predisposing those observations to conform to what they already believed.
</p>
<p class="pullquote">
	If we continue to expel, cancel, and block each other over our differences of opinion rather than dialogue and partner together to share our unique perspectives, there really is no hope for science, freedom, or truth.
</p>
<p>
	When it came to <em>Expelled</em>, it was this interplay between philosophical presuppositions and the day-to-day practice of science that interested me most. After all, any honest observer has to admit that philosophical presuppositions affect how we approach science, for example by determining what is and is not accepted as evidence. At the same time, a truly scientific person must always be willing to revise their presuppositions in light of new evidence and/or arguments. My highest hope for the film was that it could explore this reciprocal relationship between science and philosophy, leading to the very common ground that I had sought to establish with my online debating partners. Perhaps operating from a place of naïve optimism, I hoped the film could do away with the need to “win” the debate over ID one way or the other and instead unite these contending minds around their mutual desire to move science forward.
</p>
<p>
	Alas, that was not to be. For one thing, early in the process of making <em>Expelled</em>, I realized that the film’s producers weren’t interested in open-minded inquiry. They had an axe to grind against what they saw as an oppressive scientific establishment that was unwilling to “allow one divine foot in the door” (as geneticist Richard Lewontin put it), and they were determined to change that. Initially, I bought into this agenda as well, feeling like we were on the right side of history because we were fighting for free and open inquiry, not just on behalf of ID, but also on behalf of science itself. Why shouldn’t scientists be able to follow the evidence wherever it leads? And why shouldn’t intelligence be considered as a potential explanation for particular phenomena until proven otherwise? Hadn’t a presumption of theism, or at least deism, guided most of the early scientists, leading to all sorts of fruitful inquiry? If so, why couldn’t that continue?
</p>
<p>
	My interactions with some of the leading lights of ID also shook my belief that we were on the right side of the debate. For example, when Ben Stein asked Michael Behe how biology would be different if it had ID theory as its foundation, Behe was left groping for an answer. Then when Stein was interviewing David Berlinski outside the Berlin Wall, trying to coax him into saying that an unnecessary ideological wall had been erected to keep any notion of God out of science—just as the Berlin Wall had been erected to keep “dangerous” ideas out of the Soviet bloc—Berlinski refused to acquiesce. Instead, he insisted that we need boundaries in science to help define the field. For example, we don’t accept astrology as part of science, nor should we. Walls aren’t bad in and of themselves, Berlinski argued; it’s more a matter of where we build them and why.
</p>
<p>
	Of course, how we make such a determination is a product of our philosophical presuppositions, which are becoming increasingly impossible to agree on as we all break away from traditional meta-narratives and drift off into our own private definitions of reality. But even if we don’t agree with some or all of a field’s presuppositions, if we presume competence and goodwill amongst scientists, it’s only logical to assume that these boundaries exist not to limit the production of good science but to facilitate it. Otherwise, we find ourselves in the absurd position of arguing that scientists are working against their own self-interest.
</p>
<div class="imageclearall">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-2/Shermer-holding-Expelled-Press-Kit.jpg" alt="Shermer holding Expelled Press Kit" width="750" height="578" class="noBoxShadow"></p>
<p class="caption">
		Michael Shermer with the film’s press kit
	</p>
</div>
<p>
	I realize that a presumption of competence and goodwill is increasingly difficult to maintain these days as our confidence in the integrity of various institutions wanes. The problem is, considering the increasing complexity of the modern world, we are facing what energy theorist Vaclav Smil describes as a growing “comprehension deficit,” which makes our need to rely on experts greater than ever. This being the case, how can we determine whether a dissident group challenging the majority opinion in a field, such as the ID movement, is correct, or whether it is a destructive force that really should be “expelled” out into the cold?
</p>
<p>
	I continue to believe that a presumption of competence and goodwill amongst experts is the most fruitful and cognitively healthy way to proceed. I’m willing to go with the majority view in any given field until given good reason to think otherwise. But I have to admit I’m far more skeptical than I used to be. And who doesn’t love the idea of a plucky group of rebels who risk everything to stand up to oppressive, corrupt authorities, and by opposing them restore freedom, truth, and justice? Everyone from political leaders such as Lenin and Hitler to storytellers like George Lucas has exploited this universal narrative, which is becoming increasingly attractive as we all sense a growing lack of control over our circumstances due to the increasing pace and complexity of change, technological and otherwise.
</p>
<p>
	This was exactly the narrative that we sought to tap into when making <em>Expelled</em>, knowing it would resonate with viewers on an emotional level. The question is, were we right when it came to the ID movement? Were they really courageous dissidents standing up against the evil Darwinian empire? I certainly believed it at the time, but I no longer think so now.
</p>
<p>
	Despite the radical change in my views, fifteen years after <em>Expelled</em> I can’t say I regret being involved with the film. It provided me with a blank check to indulge my passion for research, to travel the world, to meet some of the brightest minds in science, to work with people who eventually became some of my closest friends, and to establish myself in the film industry. More importantly, over the long term, it completely transformed my view of life and culture, bringing me much closer to those whom I used to regard as standing on the opposite side of the aisle. But I do have significant regrets about how the film itself turned out, the distrust it sowed amongst viewers regarding the scientific establishment, and the deceptive practices we engaged in to make the film happen.
</p>
<p>
	One example of those deceptive practices was hiring hundreds of extras to serve as Ben Stein’s “audience” during the speech he gives that bookends the film, making it seem as if he’s leading a groundswell of young people who are looking to overthrow the tyrannical Darwinian academy. This was filmed at Pepperdine University, Shermer’s alma mater, so he wrote them to ask how this happened:
</p>
<blockquote>
<p>
	The biology professors at Pepperdine assure me that their mostly Christian students fully accept the theory of evolution. So who were these people embracing Stein’s screed against science? Extras. According to Lee Kats, Associate Provost for Research and Chair of Natural Science at Pepperdine, “the production company paid for the use of the facility just as all other companies do that film on our campus” but that “the company was nervous that they would not have enough people in the audience so they brought in extras. Members of the audience had to sign in and the staff member reports that no more than two to three Pepperdine students were in attendance. Mr. Stein’s lecture on that topic was not an event sponsored by the university.” And this is one of the least dishonest parts of the film.
</p>
</blockquote>
<div class="imageclearall">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-2/Kevin-Miller-Skeptic-HQ-Bigfoot-by-Michael-Shermer.jpg" alt="Kevin Miller holding our Big Foot fake-tracks shoe" width="750" height="563" class="noBoxShadow"></p>
<p class="caption">
		Kevin Miller holding our Bigfoot fake-tracks shoe for cryptozoology tomfoolery at the former <em>Skeptic</em> Magazine headquarters in Altadena, CA, where segments of <em>Expelled</em> were filmed
	</p>
</div>
<p>
	Another was creating a fake production company, complete with a website listing several dummy film projects. We used this website to mislead potential interviewees into believing we were taking an objective approach to the subject matter, which couldn’t have been further from the truth. I’ve been involved in several controversial documentaries since <em>Expelled</em>, and landing interviews with potentially hostile subjects is always a challenge. In such circumstances, I admit to being less than forthcoming about my point of view at times because I’d rather get a “clean read” than a confrontational exchange: a relaxed conversation in which the subject expresses their views much as they would to a friend. But not since <em>Expelled</em> have I taken things to such an extreme.
</p>
<p>
	Like anyone who believes they have the truth (or possibly even God) on their side, while making <em>Expelled</em> we felt the ends justified the means. As history shows time and time again, though, just when we think we’re most virtuous, we’re also at our most dangerous. When facing off against what we regard as a great evil, belief in our own righteousness can blind us to the very evils we ourselves are committing in response.
</p>
<p>
	If <em>Expelled</em> had been made today, it probably would have been called <em>Canceled</em> or <em>Blocked</em> instead because too often when we encounter ideas that offend our philosophical presuppositions, our emotional sensibilities, or our fragile sense of identity, that’s exactly what we do. And unlike the way the scientific establishment is portrayed in <em>Expelled</em>, it’s not just those in authority who do this. More often than not, mobs of regular people are leading the charge. Driven by a sense of self-righteousness and/or a weaponized form of compassion, they summarily destroy people’s lives, due process be damned.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.2/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n2_lg.jpg" alt="Skeptic 29.2 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.2<br />
		<a href="https://www.skeptic.com/productlink/magv29n2">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/264050#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	Lack of common ground, a shared version of reality in which to engage, remains a problem. And with traditional means (such as religion) of establishing this common ground rapidly fading away, it seems like an impossible goal to achieve. If we continue to expel, cancel, and block each other over our differences of opinion rather than dialogue and partner together to share our unique perspectives, there really is no hope for science, freedom, or truth.
</p>
<p>
	We may never be able to achieve unanimity of belief, but if we can at least aspire toward unity of purpose and intent, agreeing to operate from a position of goodwill, charity, and curiosity rather than selfish gain or the need to bolster our identity by scapegoating others, maybe we can find a way to work together despite our differences.
</p>
<p>
	Despite this discouraging state of affairs, I still believe in the power of conversation and debate as perhaps the only way forward. It worked to change my mind (eventually), so why couldn’t it work for others? <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Kevin Miller</strong> is an award-winning author and filmmaker. He has written, directed, and produced several documentary films, including <em>Hellbound?</em> and <em>J.E.S.U.S.A.</em> He is also the author of the best-selling Milligan Creek Series for middle-grade readers as well as numerous other books for children and adults, both fiction and non-fiction.
	</p>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Identity Politics and Its Discontents</title>
		<link>https://www.skeptic.com/reading_room/evolution-of-crt-identity-politics/</link>
		
		<dc:creator><![CDATA[Christopher Rufo and Yascha Mounk]]></dc:creator>
		<pubDate>Fri, 04 Oct 2024 14:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[BLM activism]]></category>
		<category><![CDATA[civil rights movement]]></category>
		<category><![CDATA[Critical Race Theory (CRT)]]></category>
		<category><![CDATA[cultural appropriation]]></category>
		<category><![CDATA[diversity]]></category>
		<category><![CDATA[equity]]></category>
		<category><![CDATA[identity politics]]></category>
		<category><![CDATA[Inclusion (DEI)]]></category>
		<category><![CDATA[left-wing authoritarianism]]></category>
		<category><![CDATA[Marxism and race]]></category>
		<category><![CDATA[political discourse]]></category>
		<category><![CDATA[reparations debate]]></category>
		<category><![CDATA[social justice movements]]></category>
		<category><![CDATA[systemic racism]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45270</guid>

					<description><![CDATA[This article features two thought-provoking interviews with Christopher Rufo and Yascha Mounk, both critiquing identity politics and its effects on modern society. Rufo explores the rise of Critical Race Theory (CRT), its Marxist roots, and the growing influence of DEI policies. Mounk discusses the dangers of cultural appropriation debates, identity politics in institutions, and the risks of left-wing authoritarianism.]]></description>
										<content:encoded><![CDATA[<div class="sidebar" style="width: 200px;">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-2/Christopher-Rufo.jpg" alt="Christopher Rufo" width="200" height="200" class="noBoxShadow"> </p>
<p>
		<strong>CHRISTOPHER RUFO</strong> is a writer, filmmaker, and activist. He has directed four documentaries for PBS, including <em>America Lost</em>, which tells the story of three forgotten American cities. He is a senior fellow of the Manhattan Institute and a contributing editor of the public policy magazine <em>City Journal</em>. His reporting and activism have inspired a presidential order, a national grassroots movement, and legislation in 22 states. Rufo holds a Bachelor of Science in Foreign Service from Georgetown University and a Master of Liberal Arts from Harvard University.
	</p>
</div>
<p>
	<span class="dropCap">S</span><strong>keptic</strong>: You are a controversial figure for your work in the area of Critical Race Theory (CRT). What led you to this subject?
</p>
<p>
	<strong>Rufo</strong>: My professional background is in documentary filmmaking. The book writing process was totally different. I hope what I was able to do with the book is bring my narrative training to telling stories that engage people and move them at an emotional level.
</p>
<p>
	<strong>Skeptic</strong>: Well, you did that. It’s a highly readable book in which you present a history of ideas. One of the difficulties is drawing causal connections between thinkers across generations. How do you address that problem?
</p>
<p>
	<strong>Rufo</strong>: There was a lot of looking for explicit connections. For example, I profile Angela Davis, who I think is really kind of the godmother of CRT. She tied the original critical theory from the early part of the 20<sup>th</sup> century to American race politics in a deliberate way. Her thesis advisor was the critical theorist Herbert Marcuse, who is also profiled in the book. Then I connect Davis to the modern Black Lives Matter movement; she is the personal mentor to a number of BLM leaders. I tried not to make any specious connections, and I wanted to be charitable to my subjects, to see the world first through their eyes and treat them fairly. Only then did I layer on my criticism or my critique.
</p>
<p>
	<strong>Skeptic</strong>: On that political front, how do you distinguish between old-school liberals, such as Steven Pinker, and the more radical progressive thinkers of today? <span id="more-45270"></span>
</p>
<p>
	<strong>Rufo</strong>: The critical theorists I profile in my book are explicitly anti-liberal, such as Angela Davis, Paulo Freire, and Derrick Bell, the father of CRT. Their whole movement is explicitly and deeply anti-liberal. It’s against the concept of individual rights, private property, and Enlightenment values. So, I hope that I can also speak to some of those estranged liberals and explain how the movement that has really taken over the institutional left in the United States has deviated from that small ‘l’ liberal tradition and really originates from something much more radical, revolutionary, and Marxist in nature.
</p>
<p>
	<strong>Skeptic</strong>: Walk us through these influences, starting with Marx.
</p>
<p>
	<strong>Rufo</strong>: Over the course of the 20<sup>th</sup> century, there was a deviation from orthodox Marxism as people became more infatuated with the new left, the more activist 1960s youth movement, and racial unrest. Angela Davis was a card-carrying member of the Communist Party USA, and she ran for vice president of the United States on the Communist Party ticket. She was deeply influenced by Marx (although she had written her graduate thesis on Kant) and was also well-versed in the Western philosophical tradition. Paulo Freire—the same. He was working with Marxist-Leninist revolutionaries in the Third World, and his idea of critical consciousness originates in Marxist concepts that he had learned when he was a student in Brazil.
</p>
<p>
	However, the most interesting case is Derrick Bell, who was a Harvard Law professor, and in many ways the founding figure of CRT. His students at Harvard Law and other elite law schools around the country, inspired by Bell, established the discipline of critical race theory in the late 1980s. Bell grew up in the Pittsburgh area, served in the U.S. Air Force, went to law school, and was a very successful—even brilliant—student. Then he became a lawyer for the NAACP, handling cases in the Deep South desegregating schools in places such as Mississippi. I think he oversaw something like 300 school desegregation cases. He was a civil rights advocate and activist, a small ‘l’ liberal at the time.
</p>
<p>
	However, Bell became disillusioned with the Civil Rights Movement and utterly disillusioned with Martin Luther King-style civil rights activism that turned to the Constitution, focusing especially on the 14<sup>th</sup> Amendment’s Equal Protection Clause. He thought these were all illusions in that they provided the appearance of freedom but were actually used to reinforce secretly and covertly the structures of racial domination. It is this aspect of Bell’s work that survives and is really the foundation of what we now see as critical race theory.
</p>
<p>
	<strong>Skeptic</strong>: There’s this push to find deep root causes of specific events among politicians. Is this a useful approach?
</p>
<p>
	<strong>Rufo</strong>: It’s amazing because it’s totally backwards. Politicians say, “Well, no, we’re not going to do the thing that actually could have a significant and immediate impact, and instead we’re going to implement the 1619 Project and focus on the first arrival of African slaves in North America.” That certainly is something of historical importance and scholarly relevance, and should even be part of the public debate, but what do you do with that? Short of a time machine, you can’t change the past 400 years of history. Nor can you show any real relevance to today beyond a very broad and metaphorical interpretation of current events.
</p>
<p>
	When you go back and look at the civil rights movement, against which Derrick Bell rebelled later in his life, you had, for the most part, people who wanted to cash in the promissory note of the Declaration of Independence. They wanted to conform to not only the system of individual rights in the United States as a form of law, but also conform to middle class or bourgeois values as a matter of culture. Look at these great civil rights marches in the 1960s. Men were dressed in suits and ties and the women in dresses. And these weren’t necessarily wealthy people. They were mostly working-class African Americans. However, the image that they wanted to convey was one of dignity, self-respect, and an immense hope for equal participation in American society. I’m still really moved and struck by some of those images.
</p>
<p>
	Compare those images to the kind you see of Antifa or BLM activists in 2020. You have deranged-looking mugshots of people. You have people that visually look quite disordered, committing sprees of violence. And in the name of what? It was never quite clear what they wanted beyond defunding the police or just having a justification for violence. Those two images, if you look at them side by side, reveal the kind of fundamental change in the modern left.
</p>
<p>
	<strong>Skeptic</strong>: What do you think is the right approach to social change?
</p>
<p>
	<strong>Rufo</strong>: When you ask people in surveys, “Do you support affirmative action? Do you support race-conscious college admissions? Do you support mandatory diversity, equity, and inclusion training?” they overwhelmingly say “No.” This is true for people of all political affiliations and all racial backgrounds. And yet, all of those things are now required in nearly all of our major institutions. So, you have this mismatch problem where public sentiment is <em>against</em> something, but all of our institutions and even our public policies are <em>for</em> it. Why is that? If we live in a democracy, shouldn’t majority sentiment eventually translate into public policy?
</p>
<p>
	The answer is that, in my view, there are concentric rings of influence on these issues. You have the tightest ring, which consists of the fanatics, the people who are deeply committed to it. They work in it. These are the DEI administrators. These are the critical race theorists. These are the BLM activists. Then you have another concentric ring of people that say, “Well, you know, I more or less buy into the premise of this. I want more diversity.” That’s roughly 30 percent of the public, maybe a little bit more depending on the issue. Then you have an even larger concentric ring of people who are neutral, slightly opposed, or even quite opposed to it, but they don’t speak out because they fear the consequences. This creates an opinion environment in which those very committed activists can really run up the score and impose their point of view as the de facto policy.
</p>
<p>
	That’s the environment we live in. The people who care most about it have figured out where the levers of power are. They’ve gone, in most cases, around the democratic process to impose their will. And they essentially say—as we’ve seen recently with Harvard and the University of North Carolina [the Supreme Court of the United States ruled that race-based affirmative action programs in college admissions processes violate the Equal Protection Clause of the Fourteenth Amendment]—“We know what we’re doing is unpopular. We know what we’re doing is likely illegal and unconstitutional. But we’re going to do it anyway.”
</p>
<p>
	<strong>Skeptic</strong>: Erica Chenoweth and Maria Stephan’s research on political violence demonstrates that since 1900, nonviolent campaigns worldwide were twice as likely to succeed outright as violent insurgencies. This trend has been increasing over time. In the last 50 years, civil resistance has become increasingly frequent and effective, whereas violent insurgencies have become increasingly rare and unsuccessful. No campaigns failed once they achieved the active and sustained participation of just 3.5 percent of the population, and lots of them succeeded with far less than that.
</p>
<p>
	<strong>Rufo</strong>: That’s right. I think academic critique is still valuable. However, what we really need is political opposition because this issue has moved from the realm of academia to the realm of politics. So, it also has to be fought politically. That’s what I’ve done, and I’ve gotten an unbelievable amount of criticism for this approach.
</p>
<p>
	I’ve taken the battle out of the realm of academic discourse and into the realm of practical politics. I’ve been very explicit about that. I said I want to change public perception; I want to turn critical race theory into a brand, and I want to destroy it not just in the realm of public opinion, but also in the realm of public policy.
</p>
<p>
	If it’s in the K–12 school curriculum, it’s a policy question. If it’s in a public university DEI bureaucracy, it’s a policy question. If it’s in our criminal justice system, it’s a policy question. These are political questions, and those who think that we can resolve them through discourse are really doing a disservice. They’re not grappling with the actual difficult nature of statesmanship and political activism that’s required.
</p>
<p>
	If we want to have a society that says, “No, we’re not going to engage in racial scapegoating. We’re not going to judge individuals based on a racial category. We’re not going to indulge in notions of hereditary blood guilt,” the only way, I think, is through political pressure, by changing the laws by which our institutions are governed.
</p>
<p>
	<strong>Skeptic</strong>: What are your thoughts on systemic racism? What is your explanation for racial group differences in income, wealth, home ownership, representation in Congress and the corporate C-suite?
</p>
<p>
	<strong>Rufo</strong>: What is the standard by which we measure systemic racism? How do we define systemic racism? There’s an interesting bait and switch here, because they say, “Well, all of this is systemic racism, from chattel slavery to the fact that a Lakeisha Smith is less likely to get called back than a Lisa Smith.” [“Call back” studies submit the same resumés to businesses and compare the response to identifiably Black versus White names]. You have this transition in the mid-20<sup>th</sup> century from explicit, formal, and legal racist policies to what amounts to implicit racist policies. Well, what do they mean by that?
</p>
<p>
	They mean that when you measure things statistically, there is a disparate impact on outcomes. Lisa versus Lakeisha Smith is just one such example. You can say that there are no outright racist policies in policing or housing or geographical distribution, but there are still disparate outcomes. Is it because people are secretly and subconsciously racist? That’s the unconscious bias theory, which has been debunked. [It has been demonstrated that The Implicit Association Test, often cited as confirming evidence, does not measure racial bias but rather reaction time to familiar versus unfamiliar terms.] Are police more likely to shoot a Black suspect than a White suspect? Roland Fryer at Harvard showed that this is not the case. [Although he did find that White police rough up Black people they pull over more than White people.]
</p>
<p>
	Then you have to ask some uncomfortable questions. If, for example, there are more African American men in jail than Asian American men, is it because our society is systemically racist against African American men and systemically giving privileges to Asian American men?
</p>
<p>
	You could make that argument, but I think that on the face of it most people realize that it’s not true. Then you ask about the rate of criminality—do African American males on average commit more crimes than Asian American males? You might find that it’s not racism that is operative. It’s another set of background variables. Robert Rector published some papers on this subject 20 years ago that are still foundational to my thinking. He showed that if you control for those background variables, you find that the argument for active systemic racism vanishes across a whole range of things, not just Lakeisha versus Lisa Smith, but for things that are especially meaningful. For example, if you control for the mother’s academic achievement, the mother’s participation in state welfare programs, and household family structure, the gap between White and Black childhood poverty disappears. It’s zero.
</p>
<p>
	If we aim our public policy towards fixing those variables, we’d be much better served and we’d be much more likely to reduce overall inequalities.
</p>
<p>
	<strong>Skeptic</strong>: Those causal variables are largely left out of the conversation. Maybe it’s taboo to talk about them right now?
</p>
<p>
	<strong>Rufo</strong>: I think it is, because it very inconveniently disrupts the narrative when you have minority groups that are enormously successful in the United States. The most successful ethnic groups in the United States today are majority non-White ethnic groups, including some Black ethnic groups, particularly Nigerian Americans. Part of that may be due to a selection process—immigrants from Nigeria are disproportionately better educated, have more resources, etc. So, it’s not quite a one-to-one measurement.
</p>
<p>
	Nonetheless, there’s a huge range in success among ethnic groups in the United States. The ones that have stable family structure, commitment to education, a strong work ethic, mutual support within a community, etc., are very successful. Those ethnic groups that do not have those attributes do very poorly on many measures, including income. Appalachian Whites do very poorly while Nigerian Americans or other recent immigrants are doing extraordinarily well.
</p>
<p>
	<strong>Skeptic</strong>: Are you optimistic we can achieve a colorblind society?
</p>
<p>
	<strong>Rufo</strong>: There are reasons for optimism and for pessimism. The reason for optimism is that the American people really despise the DEI affirmative action principles of governance. Even in California and Washington state, where I live, voters have rejected affirmative action policies when they’ve been put to a ballot initiative. And the majority of racial groups also oppose these kinds of policies. Despite all of the media dominance, academic dominance, and bureaucratic dominance of the DEI movement—the American people want equal treatment for each individual, regardless of group category. They want colorblind equality, not racial favoritism and enforced equity.
</p>
<p>
	The case for pessimism is that it’s going to be difficult. The problem of racial equality is a thorny one. It is one that has vexed the United States for its whole history and is, frankly, likely to continue. As long as there is visible inequality—statistically measurable inequality—the narrative of critical race theory will have a base of support. It will have the political, emotional, and intellectual grounds that can feed that narrative. This puts us in a bit of a conundrum because paradoxically, the remedies of critical race theory are actually likely to make inequality worse. And for the people who are running a critical race theory style regime, inequality justifies their claims to power. So, they have no incentive to make things better in the real world. If we go in that direction, we face a very long, very brutal, and very disillusioning politics in our future.
</p>
<p>
	<strong>Skeptic</strong>: Do you see any role for any kind of reparations for formerly oppressed peoples or even currently oppressed people?
</p>
<p>
	<strong>Rufo</strong>: I have certainly opposed any kind of race-based reparations payments. I think it’s absolutely the wrong direction to go for a host of reasons. Historically, if you look at Lyndon Johnson’s Great Society anti-poverty programs, these were to a large extent a kind of race-based reparations policy that was—they thought—backed up by the latest discoveries in social science, deployed at a massive federal scale. These programs are now spending about a trillion dollars a year, disproportionately on African Americans, especially descendants of slaves.
</p>
<p>
	These are policies that sound great, and that’s why they’re often passed in legislation. But we have to be sober and level-headed in analyzing whether they actually work. Do they help us achieve the stated intentions? The evidence that it has helped in any way is lacking. In fact, the most persuasive evidence, in my view, shows that it has had negative, though unintended, consequences. In my reading of it, both statistically and as someone who spent three years researching and documenting public housing projects in Memphis, Tennessee, and getting a first-hand look at their impact, I just don’t think that reparations would work. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<p class="ImportantInfo">
	This interview was edited from a longer conversation that took place on <em>The Michael Shermer Show</em>, which you can <a href="https://bit.ly/43Dm893">watch online</a>.
</p>
<hr />
<div class="sidebar" style="width: 200px;">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-2/Yascha-Mounk.jpg" alt="Yascha Mounk" width="200" height="200" class="noBoxShadow">
<p>
		<strong>YASCHA MOUNK</strong> is a writer and academic known for his work on the rise of populism and the crisis of liberal democracy. Born in Germany to Polish parents, Mounk received his BA in history from Trinity College, Cambridge, and his PhD in government from Harvard University. He is a professor of the practice of international affairs at Johns Hopkins University, a contributing editor at <em>The Atlantic</em>, and a senior fellow at the Council on Foreign Relations. He is the author of <em>The Great Experiment: Why Diverse Democracies Fall Apart and How They Can Endure</em> (featured on former President Barack Obama’s summer reading list) and <em>The Identity Trap: A Story of Ideas and Power in Our Time</em>.
	</p>
</div>
<p>
	<strong>Skeptic</strong>: Let’s talk about identity politics. Is it really the identity or is it the politics?
</p>
<p>
	<strong>Mounk</strong>: The great civil rights leader, Bayard Rustin, a gay Black political activist—though I’m not sure that he would want to list the adjectives in that order—said that the idea of a homogeneous Black community is the invention of White elites, as well as of certain Black people who want to lead it. I think this describes the situation very well. And this is important because it speaks to our model for political solidarity.
</p>
<p>
	Let’s examine the popular ban on “cultural appropriation.” When I was growing up in Europe, the people who worried about cultural purity and the influence that other groups might have on your culture were on the right. Today, some of these concerns persist on the right, but a lot of them have moved to the left. It’s gotten to the point of absurdity, like left-leaning actors who apologize for voicing or acting in roles that don’t match their identity.
</p>
<p>
	The core example is that of White musicians in the 1950s and 1960s “stealing” the music of Black musicians or being inspired by them, and going on to have big careers while Black musicians didn’t. The injustice in the 1950s and 1960s was not that there was some White jazz saxophonist inspired by Black musicians. The injustice was that Black musicians could not travel freely across America because of racism—they could not stay at some of the hotels in which they performed, they were banned from many concert venues, they wouldn’t be played on many radio stations, and they couldn’t be signed by many record labels. That’s what was unjust about it, not something called “cultural appropriation.” If you get that wrong, you also get wrong how you solve it. The way to solve that injustice is not to make sure that White musicians don’t play jazz music or rock ‘n roll. It’s to make sure that Black musicians and African Americans more broadly overcome the deep discrimination that they faced.
</p>
<p>
	All culture is appropriation. Every element of our cuisine, the way we write, and the technology we use today is an accrual of past cultural appropriation. If we put those forms of mutual cultural influence under general suspicion, not only will we forego amazing cultural and technological innovations in the coming decades and centuries, but we’ll also fundamentally fail to celebrate positive aspects of our societies.
</p>
<p>
	<strong>Skeptic</strong>: How do you respond to people such as Noam Chomsky, who argues that critiques of identity politics such as your own are exaggerations, that things such as critical race theory are just legal or academic ideas that are not filtering down to grammar schools or up to corporations, and that what you are saying is part of a vast right-wing moral panic?
</p>
<p>
	<strong>Mounk</strong>: I think that’s a mistake. There are people on the right who brand anything they don’t like as critical race theory or claim that teaching kids about slavery is critical race theory. That’s absurd. Of course we should teach American children about the terrible history of racial injustice and slavery in this country. However, as a result, a lot of people on the mainstream left, including smart people like Chomsky, end up saying, “Isn’t critical race theory just speaking critically about race in society?” Or they say it’s an academic theory and first graders aren’t reading the academic articles published by Derrick Bell and Kimberlé Crenshaw.
</p>
<p>
	What I worry about is teachers walking into classrooms in the third, second, and first grade and saying, “If you’re Black, go to that classroom; if you’re Asian American, go to that classroom; if you’re Latino, you go to that classroom; and if you’re White, you go to that classroom over there.” I think it’s fine for kids to be uncomfortable at times, because everything we know from social psychology is that how we define ourselves is malleable. However, when they’re told, “This is your in-group and that over there is your out-group,” that can lead to endless empathy for “my group” and a terrifying disregard for the suffering of the other group.
</p>
<p>
	So, while the aim may be to create White anti-racists, I think it’s much more likely to create White separatists or White supremacists. The other thing I would say is that these ideas not only now inform the norms and the practices of a great many institutions in the United States—important institutions such as schools—they also inform public policy in really worrying ways.
</p>
<p>
	A shocking example of this was when I sat in on a meeting of the ACIP—the key advisory group advising the Centers for Disease Control—on how to roll out vaccines during the COVID-19 pandemic. Now we know that by far the best predictor of how seriously sick you will get from COVID is your age. Therefore, nearly every country in the world prioritized the elderly in their distribution. You might also prioritize hospital workers, because in the middle of a pandemic you don’t want the doctors to be sick. But after that, nearly every country started with the over 85s, and then the over 80s, and then the over 75s. And that also made it easier to communicate this to the public.
</p>
<p>
	Well, ACIP said no. We have to care about racial equity. Older Americans are disproportionately White, and therefore it would be unjust to give a vaccine to them first, even though the CDC’s own model shows that deviating from prioritizing the elderly would raise the death toll by between 0.5 and 6.5 percent—in other words, by thousands of human lives. This had disastrous consequences, literally thousands of additional deaths because of identity politics.
</p>
<p>
	<strong>Skeptic</strong>: You mentioned “identity politics.” How should we think about this term?
</p>
<p>
	<strong>Mounk</strong>: The way I think about politics, there are two sets of distinctions: between liberal and authoritarian, and between the left and the right. I am a center-left liberal. I joined the German Social Democratic Party at the age of 13. I had to lie about my age to join it because by law you’re only allowed to join when you’re 14. So, I can claim as long an allegiance to the left as anybody my age. In the distinction between liberal and authoritarian, there are those who want to impose their views by force, have no tolerance for people who disagree with them, and see the world as split into the good and the evil. I want a society in which individuals have free speech and the right of free assembly, and the right of free worship. So even if I win a majority, I’m not going to impose my substantive moral views on you. And I recognize that just because you take what I consider the wrong position on some political issue, that doesn’t make you an evil person. That is what defines me as a true “liberal.” I think the liberal versus authoritarian distinction is more important than the left versus right one.
</p>
<p>
	<strong>Skeptic</strong>: So there’s left-wing authoritarianism, just as there’s right-wing authoritarianism?
</p>
<p>
	<strong>Mounk</strong>: How can anyone look at the history of the 20<sup>th</sup> century without recognizing that? Left-wing authoritarianism, which you may have more sympathy for, should also scare you. It’s much easier for people who think they’re doing good in the world to fall into believing that such forms of authoritarianism are for the good of all humanity, that we are creating paradise on Earth, not just for one group but for everybody. That can be very appealing. My grandparents, whom I loved and who were deeply decent human beings, were attracted to such ideas for understandable reasons. They grew up in shtetls, living in poverty and being discriminated against. They thought that we should fight for the rights of proletarians. So I have empathy for people who are tempted by that set of ideas, but I’m also aware of how easily they can seduce you in ways that eventually make you complicit in genuine evil.
</p>
<p>
	<strong>Skeptic</strong>: You often discuss corporations adopting identity politics. Do they really believe this, or do they not want to be bogged down in lawsuits? What is your sense about that?
</p>
<p>
	<strong>Mounk</strong>: I would say that there’s a real split. There are certainly true believers in Human Resources departments, and some true believers make a good living as diversity consultants. There are also some true believers in the elite class, some among CEOs, and so on. At the same time, there are a lot of people who have an incentive to shut up and stay quiet. People who are not that politically motivated just ask themselves, “Is it really worth my while to push back against this? You know, I’m going to be branded as a troublemaker and perhaps somebody will accuse me of being a racist or a bigot. I better just keep my mouth shut.” And there’s an interesting legal incentive for CEOs to go along with some of this, which is that if your company is sued for racial discrimination or sexual harassment, whether you have engaged in industry-standard practices to avert those forms of bias constitutes a key defense. So once your competitors offer a deeply divisive diversity training, you have a legal incentive to do that too. If you don’t, a plaintiff might argue that you clearly didn’t care about discrimination.
</p>
<p>
	So, I think that there is an incentive from social sanction—that speaking up against these ideas is perilous, and there is also an incentive from the actual legal system in the United States in terms of how you can defend yourself against lawsuits, no matter how frivolous.
</p>
<p>
	<strong>Skeptic</strong>: Given how deep this trend is in education, are you worried about the next generation?
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.2/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n2_lg.jpg" alt="Skeptic 29.2 (cover)" width="200" height="260" class="boxShadow"></a>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.2<br />
		<a href="https://www.skeptic.com/productlink/magv29n2">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/264050#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	<strong>Mounk</strong>: Yes. My students are deeply and fundamentally shaped by these ideas. Especially if they went to private schools, or schools in good school districts and affluent liberal-leaning parts of the country, these ideas have been drummed into them from day one. It’s the water that they swim in, and they take much of it for granted.
</p>
<p>
	<strong>Skeptic</strong>: What can we do about it?
</p>
<p>
	<strong>Mounk</strong>: Well, the first step is to argue back against these ideas from the moral high ground. And part of that is to argue on the basis of principles that you deeply believe in and that might make the world a better place. Now, there’s a broad range of principles that are compatible with liberalism that you can embrace. Perhaps you have a religious motivation, perhaps you’re a socialist, or perhaps you’re a conservative, all of that is fine. My own conviction is that of a philosophical liberal, as well as of someone who, in the American context, has great admiration for certain movements.
</p>
<p>
	Consider Frederick Douglass. When he was invited to deliver a speech commemorating the Fourth of July, he called out his compatriots on the hypocrisy of talking about all men being created equal. He asked, “How can you celebrate that value and pat yourselves on the back when Black people around the country are enslaved right this moment?” However, he didn’t say to rip it all up. And while he recognized that newspapers and magazines said terrible things about Black people at the time, he didn’t reject free speech. He called free speech <em>the dread of tyrants</em>, because he realized that it was what allowed genuine political minorities, people who were very unpopular in their time, to fight for their rights. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<p class="ImportantInfo">
	This interview was edited from a longer conversation that took place on <em>The Michael Shermer Show</em>, which you can <a href="https://bit.ly/43l8tTA">watch online</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Blue, Green, Black, and White:  Identity Politics in History, from Byzantium to Sri Lanka</title>
		<link>https://www.skeptic.com/reading_room/identity-politics-in-history-from-byzantium-to-sri-lanka/</link>
		
		<dc:creator><![CDATA[Jens Kurt Heycke]]></dc:creator>
		<pubDate>Fri, 27 Sep 2024 14:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[affirmative action consequences]]></category>
		<category><![CDATA[Byzantine chariot races]]></category>
		<category><![CDATA[ethnic conflict in history]]></category>
		<category><![CDATA[ethnic violence]]></category>
		<category><![CDATA[historical race riots]]></category>
		<category><![CDATA[identity politics]]></category>
		<category><![CDATA[Nika riots 532]]></category>
		<category><![CDATA[race]]></category>
		<category><![CDATA[robbers cave experiment]]></category>
		<category><![CDATA[Sri Lanka civil war]]></category>
		<category><![CDATA[tribalism and conflict]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45255</guid>

					<description><![CDATA[Explore how ancient sporting rivalries in Byzantium and modern identity politics in Sri Lanka reveal the deep dangers of tribalism. This historical analysis sheds light on how seemingly trivial divisions can fuel violence and highlights the consequences of identity politics in shaping society, governance, and conflict.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">S</span>ixth-century Byzantium was a city divided by race hatred so intense that people viciously attacked each other, not only in the streets but also in churches. The inscription on an ancient tablet conveys the raw animus that sprang from color differences: “Bind them! … Destroy them! … Kill them!” The historian Procopius, who witnessed this race antagonism firsthand, called it a “disease of the soul,” and marveled at its irrational intensity:
</p>
<blockquote>
<p>
		They fight against their opponents knowing not for what end they imperil themselves … So there grows up in them against their fellow men a hostility which has no cause, and at no time does it cease or disappear, for it gives place, neither to the ties of marriage nor of relationship nor of friendship.<sup><a href="#note01">1</a></sup>
	</p>
</blockquote>
<p>
	This hostility sparked multiple violent clashes and riots, culminating in the Nika Riot of 532 CE, the biggest race riot of all time: 30,000 people perished, and the greatest city of antiquity was reduced to smoldering ruins.
</p>
<p>
	But the Nika Riot wasn’t the sort of race riot you might imagine. The race in question was the chariot race. The color division wasn’t between black and white but between blue and green—the colors of the two main chariot-racing teams. The teams’ supporters, who were referred to as the Blue and Green “factions,” proudly wore their team colors, not just in the hippodrome but also around town. To help distinguish themselves, many Blues also sported distinctive mullet hairstyles, like those of 1970s rock stars. Both Blues and Greens were fiercely loyal to their factions and their colors. The chariots and drivers were a secondary concern; the historian Pliny asserted that if the drivers were to swap colors in the middle of a race, the factions would immediately switch their allegiances accordingly.
</p>
<p>
	The race faction rivalry had existed for a long time before the Nika Riot, yet Procopius writes that it had only become bitter and violent in “comparatively recent times.” So, what caused this trivial division over horse-racing teams to turn so deadly? In short, it was the Byzantine version of “identity politics.” <span id="more-45255"></span>
</p>
<p>
	Modern sociological research helps explain the phenomenon. Decades of studies have demonstrated the dangerous power of the human tribal instinct. Surprisingly, it doesn’t require “primordial” ethnic or tribal distinctions to engage that impulse. Minor differences are often sufficient to elicit acute ingroup-outgroup discrimination. The psychologist Henri Tajfel demonstrated this in a landmark series of studies to determine how minor those differences can be. In each successive study, Tajfel divided test subjects into groups according to increasingly trivial criteria, such as whether they preferred Klee or Kandinsky paintings or underestimated or overestimated the number of dots on a page. The results were as intriguing as they were disturbing: even the most trivial groupings induced discrimination.<sup><a href="#note02">2</a>, <a href="#note03">3</a></sup>
</p>
<p>
	However, the most significant and unexpected discovery was that simply telling subjects that they belonged to a group induced discrimination, even when the grouping was completely random. Upon learning they officially belonged to a group, the subjects reflexively adopted an us-versus-them, zero-sum game attitude toward members of other groups. Many other researchers have conducted related experiments with similar results: <em>a government or an authority (like a researcher) designating group distinctions is, by itself, sufficient to spur contentious group rivalry</em>. When group rewards are at stake, that rivalry is magnified and readily turns malign.
</p>
<p>
	The extent to which authority-defined groups and competition for group benefits can foment nasty factionalism was demonstrated in the famous 1954 Robbers Cave experiment, in which researchers brought boys with identical socioeconomic and ethnic backgrounds to a summer camp, dividing them randomly into two official groups. They initially kept the two groups separate and encouraged them to bond through various group activities. The boys, who had not known each other before, developed strong group cohesion and a sense of shared identity. The researchers then pitted the groups against each other in contests for group rewards to see if inter-group hostility would arise. The group antagonism escalated far beyond their expectations. The two groups eventually burned each other’s flags and clothing, trashed each other’s cabins, and collected rocks to hurl at each other. Camp staff had to intervene repeatedly to break up brutal fights. The mounting hostility and risk of violence induced the researchers to abort that phase of the study.<sup><a href="#note04">4</a></sup> Other researchers have replicated this experiment: one follow-up study resulted in knife fights, and a researcher was so traumatized he had to be hospitalized for a week.<sup><a href="#note05">5</a>, <a href="#note06">6</a></sup>
</p>
<p>
	How does this apply to the Blues and Greens? As in the Tajfel experiments, the Byzantine race factions had formed a group division based on a trivial distinction—the preference for a color and a horse racing team. However, for many years, the rivalry remained relatively benign. This was likely because the emperors had long played down the factional distinction and maintained a tradition of race neutrality: if they favored a faction, they avoided openly showing it. But that tradition ended a few years before the Nika Riot when emperors began openly supporting either one faction or the other. But more importantly, they extended their support outside the hippodrome with official policies that benefited members of their preferred faction. The emperors Marcian, Anastasius, and Justinian adopted official employment preferences, allocating positions to members of their favored faction and blocking the other faction from coveted jobs. To cast it in modern terms, they began a program of “race-based” affirmative action and identity politics.<sup><a href="#note07">7</a>, <a href="#note08">8</a></sup>
</p>
<p class="pullquote">
	In nearly all the countries where affirmative action programs have been implemented, they have an invidious effect on the group that benefits, imbuing them with a sense of insecurity and defensiveness over the benefits they receive.
</p>
<p>
	Official recognition of the group distinction enhanced the us-versus-them sense of difference between the factions, and the affirmative action scheme turned this sense of difference into bitter antagonism, which eventually exploded in violence. Procopius, our primary contemporary source, placed the blame for the mounting antagonism and the riots squarely on Justinian’s program of identity politics. It had not only promoted an us-versus-them mindset in the factions, it also incited vicious enmity between them, turning a trivial color preference and sporting rivalry into a deadly “race war.”
</p>
<p>
	Considering how identity politics could elicit violence from randomly assembled groups like the Blues and Greens, it is easy to imagine how disastrous identity politics can be when applied to groups that already have some long-standing, historic sense of difference. Indeed, there have been numerous instances of this in history, most ending tragically. For example, Tutsis and Hutus enjoyed centuries of relatively peaceful coexistence in Rwanda up until Belgian colonialists arrived; when the Belgians issued identity cards distinguishing the two groups and instituted affirmative action, it ossified a formerly porous group distinction and infused it with bitter rivalry, preparing the path to genocide. Likewise, when Yugoslavia instituted its “nationality key” system, with educational and employment quotas for the country’s constituent ethnic groups, it hardened group distinctions, pitting the groups against each other and setting the stage for genocide in the Balkans. And, when the Sri Lankan government opted for identity politics and affirmative action, it spawned violent conflict and genocide that destroyed a once peaceful and prosperous country. This last example—Sri Lanka—is so illustrative of the dangers of identity politics that we’ll examine it in more detail.
</p>
<h4>
	Sri Lanka: How Identity Politics Destroyed Paradise
</h4>
<blockquote>
<p>
		She is a fabulous isle just south of India’s teeming shore, land of paradise … with a proud and democratic people … Her flag is the flag of freedom, her citizens are dedicated to the preservation of that freedom … Her school system is as progressive as it is democratic. <span class="quoteauthor">—1954 TWA TOURIST VIDEO</span>
	</p>
</blockquote>
<p>
	Sri Lanka is an island off India’s southeast coast blessed with copious amounts of arable land and natural resources. It has an ethnically diverse population, with the two main groups being Sinhalese (75 percent) and Tamils (15 percent). Before Sri Lanka’s independence in 1948, there was a long history of harmony between these groups. That history goes back at least to the fourteenth century when the Arab traveler Ibn Battuta observed how the different groups “show respect” for each other and “harbor no suspicions.” On the eve of Sri Lanka’s independence, a British governor lauded the “large measure of fellowship and understanding” that prevailed, and a British soldiers’ guide noted that “there are no historic antagonisms to overcome.” With quiescent communal relations, abundant natural resources, and one of the highest literacy rates in the developing world, newly independent Sri Lanka was poised to flourish and prosper. Nobody doubted it would outperform countries like South Korea and Singapore, with the British governor dubbing it “the best bet in Asia.”
</p>
<p>
	It turned out to be a very poor bet. A few years after Sri Lanka’s independence, violent communal conflict erupted, culminating in a protracted civil war and genocide. By the time it ended, over a million people had been displaced or killed. Sri Lanka’s per capita GDP, which was on par with South Korea’s in 1960, was only one-tenth of it by 2009. As in sixth-century Byzantium, identity politics precipitated the calamity.
</p>
<h4>
	Turning a Disparity into a Disaster<br />
</h4>
<p>
	At the end of British colonial rule in Sri Lanka, there was significant educational and income disparity between Sinhalese and Tamils. This arose by happenstance rather than because of discriminatory policy. The island’s north, where Tamils predominate, is arid and poor in resources. Because of this, the Tamils devoted their productive energy toward developing human capital, focusing on education and cultivating professional skills. This focus was abetted by American missionaries, who set up schools in the north, providing top-notch English-language education, particularly in math and the physical sciences. As a result, Tamils accounted for an outsized proportion of the better-educated people on the island, particularly in higher-paying fields like engineering and medicine.
</p>
<p>
	Because of the Tamils’ superior education, the British colonial administration hired them disproportionately compared to the Sinhalese. In 1948, for example, Tamils accounted for 40 percent of the clerical workers employed by the colonial government, greatly outstripping their 15 percent share of the overall population. This unequal outcome had nothing to do with overt discrimination against the Sinhalese; it merely reflected the different levels and types of education achieved by the different ethnic groups.
</p>
<p>
	When Sri Lanka gained independence, it passed a constitution that prohibited discrimination based on ethnicity. But a few years after that, an opportunist politician, S.W.R.D. Bandaranaike, figured he could advance his career by cynically appealing to identity politics, stoking Sinhalese envy over the Tamils’ over-representation in higher education and government. He launched a divisive campaign to eliminate the disparity, which spurred the majority Sinhalese to elect him. After his election in 1956, Bandaranaike passed a law that changed the official language from English to Sinhala and consigned students to separate Tamil and Sinhalese education “streams” rather than having them all learn English. As one Sinhalese journalist wrote, this divided Sri Lanka, depriving it of its “link language”:
</p>
<blockquote>
<p>
		That began a great divide that has widened over the years. Children now go to segregated schools or study in separate streams in the same school. They don’t get to know other people of their own age group unless they meet them outside.
	</p>
</blockquote>
<p>
	Beyond eliminating Sri Lanka’s common “link language,” this law also functioned as a de facto affirmative action program for Sinhalese. Tamils, who spoke Tamil at home and received their higher education in English, could not gain Sinhala proficiency quickly enough to meet the government’s requirement. So, many of them lost their jobs to Sinhalese. For example, the percentage of Tamils employed in government administrative services dropped dramatically: from 30 percent in 1956 to 5 percent in 1970; the percentage in the armed forces dropped from 40 percent to 1 percent.
</p>
<p>
	As has happened in many other countries, Sri Lanka’s identity politics went hand-in-hand with expanded government. Sinhalese politicians made it clear: government would be the tool to redress perceived ethnic disparities. It would allocate more jobs and resources, and that allocation would be based on ethnicity. As one historian writes: “a growing perception of the state as bestowing public goods selectively began to emerge, challenging previous views and breeding mistrust between ethnic communities.” Tamils responded to this by launching a non-violent resistance campaign. With ethnic dividing lines now clearly drawn, mobs of Sinhalese staged anti-Tamil counter-demonstrations and then riots in which hundreds—mostly Tamils—were killed. The us-versus-them mentality was setting in.
</p>
<p>
	Bandaranaike was eventually assassinated by radicals within his own movement. But his widow, Sirimavo, who was subsequently elected prime minister, resolved to maintain his top priorities—expansive government and identity politics. She nationalized numerous industries and launched development projects that were directed by ethnic and political considerations rather than actual need. She also removed the constitutional ban on ethnic discrimination so that she could aggressively expand affirmative action. The existing policies had already cost so many Tamils their jobs that they were now <em>under-represented</em> in government. However, they remained over-represented in higher education, particularly in the sciences, a disparity that Sirimavo and her political allies resolved to eliminate. In a scheme that American universities like Harvard would later emulate, the Sri Lankan universities began to reject high-scoring Tamil applicants in favor of manifestly less-qualified Sinhalese with vastly lower test scores.
</p>
<p>
	Just like Justinian’s “race” preferences, the Sri Lankan affirmative action program exacerbated us-versus-them attitudes, deepening the group divide and spurring enmity between groups. As one Sri Lankan observed:
</p>
<blockquote>
<p>
		Identity was never a question for thousands of years. But now, here, for some reason, it is different … Friends that I grew up with, [messed around] with, got drunk with, now see an essential difference between us just for the fact of their ethnic identity. And there are no obvious differences at all, no matter what they say. I point to pictures in the newspapers and ask them to tell me who is Sinhalese and who is Tamil, and they simply can’t tell the difference. This identity is a fiction, I tell you, but a deadly one.<sup><a href="#note09">9</a></sup>
	</p>
</blockquote>
<p>
	The lessons of the various affirmative action programs in Sri Lanka were clear to everyone: individuals’ access to education and government employment would be determined by ethnic group membership rather than individual merit, and political power would determine how much each group got. If you wanted your share, you needed to mobilize as a group and acquire and maintain political power at any cost. The divisive effects of these lessons would be catastrophic.
</p>
<p>
	The realization that they would forever be at the mercy of an ethnic spoils system, along with the violent attacks perpetrated against them, induced the Tamils to form resistance organizations—most notably, the Liberation Tigers of Tamil Eelam (LTTE). The LTTE attacked both Sri Lankan government forces and individual Sinhalese, initiating a deadly spiral of attacks and reprisals, with both sides committing the sort of atrocities that are tragically common in ethnic conflicts: burning people alive, torture, mass killings, and so on. Over the following decades, the conflict continued to fester, periodically escalating into outright civil war. Ultimately, over a million people would be killed or displaced.
</p>
<p>
	The timeline of the Sri Lankan conflict establishes how communal violence originated from identity politics rather than the underlying income and occupational disparity between the groups. That disparity reached its apex at the beginning of the twentieth century. Yet, there was no communal violence at that point or during the next half-century. It was only after the introduction of affirmative action programs that ethnic violence erupted. The deadliest attacks on Tamils occurred an entire decade after those programs had enabled Sinhalese to surpass Tamils in both income and education. As Thomas Sowell observed: “It was not the disparities which led to intergroup violence but the politicizing of those disparities and the promotion of group identity politics.”<sup><a href="#note10">10</a></sup>
</p>
<h4>
	Consequences of Identity Politics in Sri Lanka and Beyond<br />
</h4>
<p>
	Sri Lanka’s experience highlights some underappreciated consequences of identity politics. Most notably, one would expect that affirmative action programs would have warmed the feelings of the Sinhalese toward the Tamils. After all, they were receiving preferences for jobs and education at the Tamils’ expense. Yet, precisely the opposite happened: as the affirmative action programs were implemented, Sinhalese animus toward the Tamils progressively worsened. This pattern has been repeated in nearly all the countries where affirmative action has been implemented: affirmative action programs have an invidious effect on the group that benefits, imbuing them with a sense of insecurity and defensiveness over the benefits they receive. That group tends to justify the indefinite continuation of these benefits by claiming that the other group continues to enjoy “privilege”—or by demonizing them and claiming that they are “systemically” advantaged. Thus, the beneficiaries of affirmative action are often the ones to initiate hostilities. In Rwanda, for example, it was Hutu affirmative action beneficiaries who perpetrated the violence, not Tutsis. The situation in Sri Lanka was analogous, with Sinhalese instigating all of the initial riots and pogroms against the Tamils.
</p>
<p>
	One knock-on effect of identity politics in Sri Lanka was that it ultimately benefited some of the wealthiest and most privileged people in the country. The government enacted several affirmative action schemes, each increasingly contrived to benefit well-heeled Sinhalese. The last of these implemented a regional quota system that was devised so that aristocratic Sinhalese living in the Kandy region would compete for spots against poor, undereducated Tamil farm workers. As one Tamil who lost his spot in engineering wrote: “They effectively claimed that the son of a Sinhalese minister in an elite Colombo school was disadvantaged vis-à-vis a Tamil tea plucker’s son.” This follows the pattern of many other affirmative action programs around the world: the greatest beneficiaries are typically the most politically connected (and privileged) individuals within the group receiving affirmative action. They are often wealthier and more privileged than many of the individuals against whom affirmative action is directed. This has been well documented in India, which has extensive data on the subgroups that benefit from its affirmative action programs.
</p>
<p>
	One unexpected consequence of identity politics in Sri Lanka was rampant corruption. When Sri Lanka became independent, its government was widely deemed one of the least corrupt in the developing world. However, as affirmative action programs were implemented and expanded, corruption increased in lockstep. The adoption of affirmative action set a paradigm that pervaded the government: whoever held power could steer government resources to whomever they deemed “underserved.” A baleful side effect of ethnicity-based distortion of government policy is that it undermines and erodes more general standards of government integrity and transparency, legitimating a paradigm of corruption: if it is acceptable to direct policy for the benefit of an ethnic group, is it not also acceptable to do so for the benefit of a clan or an individual? It is a small step to go from one to the other, a step that many Sri Lankan leaders and bureaucrats took. Today, Sri Lanka’s government, which once rivaled European governments in transparency, remains highly corrupt. This pattern has been repeated in other countries. For example, after the Federation of Malaysia expelled Singapore, it adopted an extensive affirmative action program, whereas Singapore prohibited ethnic preferences. Malaysia subsequently experienced proliferating corruption, whereas Singapore is one of the least corrupt countries in the world today.
</p>
<p>
	Perhaps the most profound consequence of identity politics in Sri Lanka was that it ultimately made everybody in the country worse off. After World War II, per capita income in Sri Lanka and Singapore was nearly identical. But after it abandoned its shared “link language” and adopted ethnically divisive policies, Sri Lanka was plagued by violent conflict and economic underperformance; today, one Singaporean earns more than seven Sri Lankans put together. All the group preferences devised to elevate Sinhalese brought down everyone in the country—Tamil, Sinhalese, and all the other groups alike. Lee Kuan Yew, Singapore’s “founding father,” attributed that failure to Sri Lanka’s divisive policies, saying that if Singapore had implemented similar policies, “we would have perished politically and economically.” There are echoes of this in other countries that have implemented identity politics. When I visited Rwanda, I asked Rwandans of various backgrounds whether they thought distinguishing people by race or ethnicity ever helped anyone in their country. There was complete unanimity on this point: after they got over pondering why anyone would ask such a naïve question, they made it very clear that distinguishing people by group made everyone, whether Hutu or Tutsi, distinctly worse off. In the Balkans, I got similar answers from Bosnians, Croatians, Serbians, and Kosovars.
</p>
<h4>
	The Perilous Path of Identity Politics<br />
</h4>
<p>
	Decades of sociological research and millennia of history have demonstrated that the tribal instinct is both powerful and hardwired into human behavior. As political scientist Harold Isaacs writes:
</p>
<blockquote>
<p>
		If anything emerges plainly from our long look at the nature and functioning of basic group identity, it is the fact that the we-they syndrome is built in. It does not merely distinguish, it divides … the normal responses run from … indifference to depreciation, to contempt, to victimization, and, not at all seldom, to slaughter.<sup><a href="#note11">11</a></sup>
	</p>
</blockquote>
<p>
	The history of Byzantium and Sri Lanka demonstrates that this tribal instinct is extremely easy to provoke. All it takes is official recognition of group distinctions and some group preferences to balkanize people into bitterly antagonistic groups, and the consequences are potentially dire. Even if a society that is balkanized in this way avoids violent conflict, it is still likely to be plagued by all the concomitants of social fractionalization: higher corruption, lower social trust, and abysmal economic performance.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.2/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n2_lg.jpg" alt="Skeptic 29.2 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.2<br />
		<a href="https://www.skeptic.com/productlink/magv29n2">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/264050#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	It is therefore troubling to see the U.S. government and institutions adopt Sri Lankan-style policies that emphasize group distinctions. Echoing Sri Lanka’s separate language “streams,” many American universities now have ethnically segregated orientation and graduation ceremonies. Some offer “theme houses”—dormitories segregated by ethnicity. An Illinois public high school offers separate mathematics classes for Black and Latino students. As the U.S. continues down the perilous path of identity politics, it is unlikely to devolve into another Bosnia or Sri Lanka overnight. But the example of Sri Lanka is a dire warning: a country that was once renowned for its communal harmony quickly descended into violence and economic failure—all because it sought to redress group disparities with identity politics.
</p>
<p>
	Surveys and statistics are now flashing warning signs in the United States. A Gallup poll found that while 70 percent of Black Americans believed that race relations in the United States were either good or very good in 2001, only 33 percent did in 2021.<sup><a href="#note12">12</a></sup> Other statistics have shown that hate crimes have been on the rise over that time.<sup><a href="#note13">13</a></sup> In the last year, we have also seen the spectacle of angry anti-Israel protesters hammering on the doors of a college hall, terrorizing the Jewish students locked inside, and a Stanford professor telling Jewish students to stand in the corner of a classroom. As identity politics has increasingly directed public policy and institutions, ethnic relations have deteriorated rapidly. This—and a lot of history—suggests it’s time for a different approach. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Jens Kurt Heycke</strong> was educated in Economics and Near Eastern Studies at the University of Chicago, the London School of Economics, and Princeton University. He worked as an early employee or executive in several successful technology startups. Since retiring from tech, he has worked as a writer and researcher, conducting field research in more than forty countries, from Bosnia to Botswana. He is the author of <em>Out of the Melting Pot, Into the Fire: Multiculturalism in the World’s Past and America’s Future</em>.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			Procopius (1914). <em>History of The Wars, Books I and II</em>, trans. H. B. Dewing. Macmillan.
		</li>
<li id="note02">
			<a href="https://bit.ly/49BIyJN">https://bit.ly/49BIyJN</a>
		</li>
<li id="note03">
			<a href="https://bit.ly/3OVJ8tp">https://bit.ly/3OVJ8tp</a>
		</li>
<li id="note04">
			Sherif, M. (1961). <em>The Robbers Cave Experiment: Intergroup Conflict and Cooperation</em>. University of Oklahoma Press.
		</li>
<li id="note05">
			Diab, L. N. (1970). A study of intragroup and intergroup relations among experimentally produced small groups. <em>Genetic Psychology Monographs, 82</em>(1), 49–82.
		</li>
<li id="note06">
			Rabbie, J. M. (1982). The Effects of Intergroup Competition and Cooperation on Intragroup and Intergroup Relations. In <em>Cooperation and Helping Behavior</em> (pp. 123–149). Academic Press.
		</li>
<li id="note07">
			Malalas, J., Jeffreys, E., Jeffreys, M., &#038; Scott, R. (1986). <em>The Chronicle of John Malalas: A Translation. Australian Association for Byzantine Studies</em>.
		</li>
<li id="note08">
			Procopius (1927). <em>Secret History</em>, trans. Richard Atwater. Pascal Covici, Inc.
		</li>
<li id="note09">
			McGowan, W. (1993). <em>Only Man is Vile: The Tragedy of Sri Lanka</em>. Pan Books.
		</li>
<li id="note10">
			Sowell, T. (2004). <em>Affirmative Action Around the World: An Empirical Study</em>. Yale University Press.
		</li>
<li id="note11">
			Isaacs, H. (1975). <em>Idols of the Tribe</em>. Harper &#038; Row.
		</li>
<li id="note12">
			<a href="https://bit.ly/3Td5l8W">https://bit.ly/3Td5l8W</a>
		</li>
<li id="note13">
			<a href="https://bit.ly/3IgvNZ3">https://bit.ly/3IgvNZ3</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Cancelling the Pop Culture of Yesteryear</title>
		<link>https://www.skeptic.com/reading_room/cancelling-pop-culture-history/</link>
		
		<dc:creator><![CDATA[John D. Van Dyke]]></dc:creator>
		<pubDate>Fri, 20 Sep 2024 14:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[Breakfast at Tiffany’s yellow-face]]></category>
		<category><![CDATA[cancel culture]]></category>
		<category><![CDATA[censorship in entertainment]]></category>
		<category><![CDATA[Disney film controversy]]></category>
		<category><![CDATA[Fantasia racism]]></category>
		<category><![CDATA[Golden Girls blackface episode]]></category>
		<category><![CDATA[historical media controversies]]></category>
		<category><![CDATA[M*A*S*H controversial jokes]]></category>
		<category><![CDATA[offensive stereotypes in media]]></category>
		<category><![CDATA[pop culture censorship]]></category>
		<category><![CDATA[pop culture of yesteryear]]></category>
		<category><![CDATA[presentism in pop culture]]></category>
		<category><![CDATA[problematic media depictions]]></category>
		<category><![CDATA[racial stereotypes in film]]></category>
		<category><![CDATA[sexual assault in 80s movies]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45238</guid>

					<description><![CDATA[John D. Van Dyke examines how modern standards are applied to older entertainment, often leading to censorship. From Disney films to 1980s comedies, the article argues that removing offensive scenes erases history, making it harder to understand societal evolution. It promotes contextual viewing as a tool for learning rather than censorship.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">I</span> recently wrote a piece for <em>Skeptic</em> titled “Ranking Presidents: Does It Make Any Sense?”, in which I outlined three reasons why ranking Presidents against one another is a fool’s errand: presentism, the evolving role of the presidency, and <em>sui generis</em>.<sup><a href="#note01">1</a></sup> The first of these, presentism, becomes especially problematic when applied to entertainment made for previous generations. Viewing and evaluating the culture of the past through a contemporary lens has led to the erasure of history in at least three relatively recent incidents. This is, I believe, a slippery slope toward censorship and a missed opportunity for valuable lessons about our collective past.
</p>
<p>
	In 1991, Disney released a video version of its 1940 masterpiece <em>Fantasia</em>, describing it as “a meticulously restored version of the original, full-length film.” It wasn’t, though. The version Disney released omitted an original scene in which a Black centaurette named Sunflower is shown shining the shoes of a White centaur.<sup><a href="#note02">2</a></sup> Seen today, Sunflower is a patently offensive stereotype.<sup><a href="#note03">3</a></sup> Ten years later, Disney released the censored version for the film’s 60th anniversary DVD.<sup><a href="#note04">4</a></sup> Disney’s use of racist stereotypes is not limited to <em>Fantasia</em>. To varying degrees, such tropes appear in <em>Dumbo</em> (1941),<sup><a href="#note05">5</a></sup> <em>Peter Pan</em> (1953),<sup><a href="#note06">6</a></sup> <em>The Aristocats</em> (1970),<sup><a href="#note07">7</a></sup> and <em>Aladdin</em> (1992).<sup><a href="#note08">8</a></sup>
</p>
<p>
	In 2020, the company (admirably, in my view) took steps toward addressing this controversy by adding disclaimers to these films on its streaming service, noting the “harmful impact” of racist stereotypes. Unlike the quiet censorship of the <em>Fantasia</em> re-releases, the films remain viewable in their original forms.
</p>
<p>
	This raises the question: If the racism was so apparent, why weren’t these films decried upon their initial release? The answer is that they simply weren’t considered offensive by the public at the time; applying today’s attitudes toward race to them illustrates the fallacy of presentism.<span id="more-45238"></span>
</p>
<p>
	In 2014, Ruth Wisse, professor emerita of Yiddish and Comparative Literature at Harvard, criticized <em>Fiddler on the Roof</em> (1971) for sacrificing Jewish identity to make the musical more universally appealing.<sup><a href="#note09">9</a></sup> The problem with Wisse’s argument is (again) presentism. In the early 1970s, <em>M*A*S*H</em> writers employed rape jokes,<sup><a href="#note10">10</a></sup> and America’s most popular sitcom (<em>All in the Family</em>) featured a working-class bigot who used racial slurs for laughs.<sup><a href="#note11">11</a></sup> John Lennon released a song titled “Woman Is the (N-word) of the World,”<sup><a href="#note12">12</a></sup> and Richard Pryor would use the same racial epithet in an album title three years later.<sup><a href="#note13">13</a></sup> Our attitudes toward cultural authenticity and appropriation have evolved since the early 1970s.
</p>
<p>
	In 2020, a 1988 <em>Golden Girls</em> episode called “Mixed Feelings” was pulled from the streaming platform Hulu due to “a scene in which Betty White and Rue McClanahan are mistaken for wearing blackface.”<sup><a href="#note14">14</a></sup> In the episode, Dorothy’s (White) son introduces his fiancée, a much older Black woman. Blanche and Rose, their faces covered in cosmetic mud masks, are mortified when they unexpectedly meet the couple.
</p>
<div class="imageclearall">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-2/Cancelling-the-Pop-Culture-of-Yesteryear-timeline.jpg" alt="timeline of events in movies and TV" width="1000" height="2160" class="noBoxShadow" style="display: block;" />
</div>
<p>
	Were Rose and Blanche reviving the minstrel-show tradition of caricaturing Black Americans as lazy, hypersexual thieves, à la <em>Amos ’n’ Andy</em>?<sup><a href="#note15">15</a></sup> Of course not. The joke lay in their mutual embarrassment at appearing <em>as if</em> they were in blackface.<sup><a href="#note16">16</a></sup> Each <em>Golden Girls</em> actress (Betty White, Bea Arthur, Rue McClanahan, and Estelle Getty) came of age decades before the women’s movement, yet their show was remarkably progressive for its time. In its seven-year run, <em>The Golden Girls</em> featured episodes centered on then-controversial topics: racism, sexual harassment, same-sex marriage, age discrimination, homelessness, the death of children, and addiction.<sup><a href="#note17">17</a></sup> Perhaps most significantly, a 1990 episode titled “72 Hours” has Rose worried that she may have come in contact with HIV.<sup><a href="#note18">18</a></sup> It was only five years earlier that President Reagan had first addressed the AIDS crisis, by which time 42,600 people had died from the disease. By 1990, that number had spiked to 310,000, a third of those deaths occurring that same year.<sup><a href="#note19">19</a></sup> Given the climate of the times, airing the episode was courageous.
</p>
<p>
	The same year “Mixed Feelings” was removed from Hulu, an actor named François Clemmons published <em>Officer Clemmons: A Memoir</em>. Clemmons played “Officer Clemmons” on <em>Mister Rogers’ Neighborhood</em> in the late 1960s, becoming the first African American actor to have a recurring role on a children’s television program.<sup><a href="#note20">20</a></sup> In Clemmons’s mostly heartwarming book, he relates an incident in which Fred Rogers called him into his office. His boss said to him, “Someone has informed us that you were seen at the local gay bar downtown. Now, I want you to know, Franc, that if you’re gay, it doesn’t matter to me at all. Whatever you say and do is fine with me, but if you’re going to be on the show as an important member of the <em>Neighborhood</em>, you can’t be out as gay.”
</p>
<p>
	Was Mr. Rogers homophobic? When Rogers had the conversation with Clemmons, homosexuality was still listed as a disorder in the DSM. It wasn’t until 1974 that it was replaced with “sexual orientation disturbance.”<sup><a href="#note21">21</a></sup> In reality, Fred Rogers, a Presbyterian minister, was an LGBTQ ally. He’d intentionally hired gay men and women since the 1960s and rebuffed viewers who urged him to renounce homosexuality.<sup><a href="#note22">22</a></sup>
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.2/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n2_lg.jpg" alt="Skeptic 29.2 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.2<br />
		<a href="https://www.skeptic.com/productlink/magv29n2">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/264050#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	In John Hughes’ <em>The Breakfast Club</em> (1985)<sup><a href="#note23">23</a></sup> and Jeff Kanew’s <em>Revenge of the Nerds</em> (1984),<sup><a href="#note24">24</a></sup> there are scenes of sexual assault on women played for laughs. The renowned film critics Gene Siskel and Roger Ebert praised both films, and neither noted any discomfort with the now-troubling scenes in his reviews.<sup><a href="#note25">25</a>, <a href="#note26">26</a>, <a href="#note27">27</a></sup> Why not? Were Siskel and Ebert misogynists willing to overlook scenes of women being sexually assaulted? Of course not. The social mores of the early 1980s were simply not those we share today. Are these scenes excusable? No, but both actresses (Molly Ringwald and Julie Montgomery) have publicly reckoned with the blatant sexism of their roles, and neither has insisted the scenes be omitted.<sup><a href="#note28">28</a>, <a href="#note29">29</a></sup>
</p>
<p>
	In 2022, the UK’s Channel 5 aired the 1961 classic <em>Breakfast at Tiffany’s</em> but bowdlerized the scenes of Mickey Rooney as “Mr. Yunioshi,” an over-the-top yellow-face Asian caricature.<sup><a href="#note30">30</a></sup> Should Rooney’s role be excised? No. As with the racist characters in Disney movies of the 1940s–1990s and the sexual assaults depicted for laughs in 1980s raunchy comedies, the climate of 1961 was simply different.
</p>
<p>
	Pop culture of the past is just that: <em>of the past</em>. Judging it by today’s standards is at best a fool’s errand and, at worst (as seen in the cases above), a slippery slope toward censorship. Entertainment from yesteryear should be viewed in its entirety and taken in the context of its time. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>John D. Van Dyke</strong> is an academic and science educator. His personal website is <a href="https://www.vandykerevue.org/">vandykerevue.org</a>.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/3vjUhOj">https://bit.ly/3vjUhOj</a>
		</li>
<li id="note02">
			<a href="https://bit.ly/48YFeYe">https://bit.ly/48YFeYe</a>
		</li>
<li id="note03">
			<a href="https://bit.ly/49UFKaT">https://bit.ly/49UFKaT</a>
		</li>
<li id="note04">
			<a href="https://bit.ly/3TFD1w8">https://bit.ly/3TFD1w8</a>
		</li>
<li id="note05">
			<a href="https://bit.ly/3TpvgcA">https://bit.ly/3TpvgcA</a>
		</li>
<li id="note06">
			<a href="https://bit.ly/43p6fTb">https://bit.ly/43p6fTb</a>
		</li>
<li id="note07">
			<a href="https://bit.ly/43ltjm0">https://bit.ly/43ltjm0</a>
		</li>
<li id="note08">
			<a href="https://bit.ly/3TEanM6">https://bit.ly/3TEanM6</a>
		</li>
<li id="note09">
			<a href="https://bit.ly/3IHU5vb">https://bit.ly/3IHU5vb</a>
		</li>
<li id="note10">
			<a href="https://bit.ly/3IJzVRu">https://bit.ly/3IJzVRu</a>
		</li>
<li id="note11">
			<a href="https://bit.ly/3IHU9uV">https://bit.ly/3IHU9uV</a>
		</li>
<li id="note12">
			Miles, B. &#038; Badman, K. (2001). <em>The Beatles Diary After the Break-Up: 1970–2001</em>. Music Sales Group.
		</li>
<li id="note13">
			<a href="https://bit.ly/3TgVYE6">https://bit.ly/3TgVYE6</a>
		</li>
<li id="note14">
			<a href="https://bit.ly/3VoMmKi">https://bit.ly/3VoMmKi</a>
		</li>
<li id="note15">
			<a href="https://bit.ly/43koCJd">https://bit.ly/43koCJd</a>
		</li>
<li id="note16">
			<a href="https://bit.ly/3IHHGYf">https://bit.ly/3IHHGYf</a>
		</li>
<li id="note17">
			<a href="https://bit.ly/43lj6G0">https://bit.ly/43lj6G0</a>
		</li>
<li id="note18">
			<a href="https://bit.ly/43DFElP">https://bit.ly/43DFElP</a>
		</li>
<li id="note19">
			<a href="https://bit.ly/3v783Ut">https://bit.ly/3v783Ut</a>
		</li>
<li id="note20">
			Clemmons, F. S. (2020). <em>Officer Clemmons: A Memoir</em>. Catapult.
		</li>
<li id="note21">
			<a href="https://bit.ly/3vg6XWj">https://bit.ly/3vg6XWj</a>
		</li>
<li id="note22">
			<a href="https://bit.ly/49XehW4">https://bit.ly/49XehW4</a>
		</li>
<li id="note23">
			<a href="https://bit.ly/4ahrvN1">https://bit.ly/4ahrvN1</a>
		</li>
<li id="note24">
			<a href="https://bit.ly/3VmOGBu">https://bit.ly/3VmOGBu</a>
		</li>
<li id="note25">
			<a href="https://bit.ly/3ViAR72">https://bit.ly/3ViAR72</a>
		</li>
<li id="note26">
			<a href="https://bit.ly/4cdaDJ4">https://bit.ly/4cdaDJ4</a>
		</li>
<li id="note27">
			<a href="https://bit.ly/4cj1RcG">https://bit.ly/4cj1RcG</a>
		</li>
<li id="note28">
			<a href="https://bit.ly/3VmZOOP">https://bit.ly/3VmZOOP</a>
		</li>
<li id="note29">
			<a href="https://bit.ly/492E65L">https://bit.ly/492E65L</a>
		</li>
<li id="note30">
			<a href="https://bit.ly/4chtAu9">https://bit.ly/4chtAu9</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Witch-Hunting: A Culture War Fought with Skepticism and Compassion</title>
		<link>https://www.skeptic.com/reading_room/ending-witch-hunting-africa-malawi-persecution/</link>
		
		<dc:creator><![CDATA[Leo Igwe]]></dc:creator>
		<pubDate>Fri, 13 Sep 2024 14:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[African superstitions]]></category>
		<category><![CDATA[beliefs]]></category>
		<category><![CDATA[ending witch persecution]]></category>
		<category><![CDATA[human rights]]></category>
		<category><![CDATA[Malawi witchcraft persecution]]></category>
		<category><![CDATA[vigilante justice]]></category>
		<category><![CDATA[witch-hunting statistics]]></category>
		<category><![CDATA[witchcraft]]></category>
		<category><![CDATA[witchcraft-related violence]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45219</guid>

					<description><![CDATA[Witch-hunting persists in Africa, particularly Malawi, where hundreds face persecution, torture, and death each year due to witchcraft accusations. Leo Igwe explores the socio-cultural, religious, and political factors fueling these brutal practices and highlights advocacy efforts to end witch-hunting by 2030, led by organizations like the Advocacy for Alleged Witches.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">O</span>n January 1, 2024, a skeptic from Malawi named Wonderful Mkhutche shared a video<sup><a href="#note01">1</a></sup> of a witch-hunting incident that took place days before on December 28, 2023. In the video, a local mob is shown burying an elderly woman. According to local sources, the woman was accused of causing the death of a family member who had passed away the previous day. These accusations often arise after family members consult local diviners, who claim to be able to identify suspects. In this instance, a local vigilante group abducted the woman. They were in the midst of burying her alive as punishment for allegedly using witchcraft to “kill” a relative when the police intervened and rescued her.
</p>
<p>
	While witch-hunting is largely a thing of the past in the Western world, the persecution of alleged witches continues with tragic consequences in many parts of Africa. Malawi, located in Southeastern Africa, is one such place. Mr. Mkhutche reports that between 300 and 500 individuals accused of witchcraft are attacked and killed every year.
</p>
<p>
	The Malawi Network of Older Persons’ Organizations reported that 15 older women were killed between January and February 2023.<sup><a href="#note02">2</a></sup> Local sources suggest that these estimates are likely conservative, as killings related to witchcraft allegations often occur in rural communities and go unreported. Witch-hunting is not limited to Malawi; it also occurs in other African countries. In neighboring Tanzania, for example, an estimated 3,000 people were killed for allegedly practicing witchcraft between 2005 and 2011, and about 60,000 accused witches were murdered between 1960 and 2000.<sup><a href="#note03">3</a></sup> Similar abuses occur in Nigeria, Ghana, Kenya, Zambia, Zimbabwe, and South Africa, where those accused of witchcraft face severe mistreatment. They are attacked, banished, or even killed. Some alleged witches are buried alive, lynched, or strangled to death. In Ghana, some makeshift shelters—known as “witch camps”—exist in the northern region. Women accused of witchcraft flee to these places after being banished by their families and communities. Currently, around 1,000 women who fled their communities due to witchcraft accusations live in various witch camps in the region.<sup><a href="#note04">4</a></sup><span id="more-45219"></span>
</p>
<p>
	The belief in the power of “evil magic” to harm others, causing illness, accidents, or even death, is deeply ingrained in many regions of Africa. Despite Malawi retaining a colonial-era legal provision that criminalizes accusing someone of practicing witchcraft, this law has not had a significant impact because it is rarely enforced. Instead, many people in Malawi favor criminalizing witchcraft and institutionalizing witch-hunting as a state-sanctioned practice. The majority of Malawians believe in witchcraft and support its criminalization,<sup><a href="#note05">5</a></sup> and many argue that the failure of Malawian law to recognize witchcraft as a crime is part of the problem, because it denies the legal system the mechanism to identify or certify witches. Humanists and skeptics in Malawi have actively opposed proposed legislation that recognizes the existence of witchcraft.<sup><a href="#note06">6</a></sup> They advocate for retaining the existing legislation and urge the government to enforce, rather than repeal, the provision against accusing someone of practicing witchcraft.
</p>
<p>
	Islam<sup><a href="#note07">7</a></sup> and Christianity<sup><a href="#note08">8</a></sup> were introduced to Malawi in the 16<sup>th</sup> and 19<sup>th</sup> centuries by Arab scholars/jihadists and Western Christian missionaries, respectively. These groups coerced the local population into accepting foreign mythologies as superior to traditional beliefs. Today, Malawi is predominantly Christian,<sup><a href="#note09">9</a></sup> but there are also Muslims and some remaining practitioners of traditional religions. And while the belief in witchcraft predates Christianity and Islam, religious lines are often blurred, as all the most popular religions contain narratives that sanctify and reinforce some form of belief in witchcraft. As a result, Malawians from various religious backgrounds share a belief in witchcraft.
</p>
<p>
	Witch-hunting also has a significant health aspect, as accusations of witchcraft are often used to explain real health issues. In rural areas where hospitals and health centers are scarce, many individuals lack access to modern medical facilities and cannot afford modern healthcare solutions. Consequently, they turn to local diviners and traditional narratives to understand and cope with ailments, diseases, death, and other misfortunes.<sup><a href="#note10">10</a></sup>
</p>
<p>
	While witch-hunting occurs in both rural and urban settings, it is more prevalent in rural areas. In urban settings, witch-hunting is mainly observed in slums and overcrowded areas. One contributing factor to witch persecution in rural or impoverished urban zones is the limited presence of state police. Police stations are few and far apart, and the law against witchcraft accusations is rarely enforced<sup><a href="#note11">11</a></sup> due to a lack of police officers and inadequate equipment for intervention. Recent incidents in Malawi demonstrate that mob violence, jungle justice, and vigilante killings of alleged witches are common in these communities.
</p>
<p>
	Another significant aspect of witch-hunting is its highly selective nature. Elderly individuals, particularly women, are usually the targets. Why is this the case? Malawi is a patriarchal society where women hold marginalized sociocultural positions. They are vulnerable and easily scapegoated, accused, and persecuted. In many cases, children are the ones driving these accusations. Adult relatives coerce children to “confess” and accuse the elderly of attempting to initiate them into the world of witchcraft. Malawians believe that witches fly around at night in “witchcraft planes” to attend occult meetings in South Africa and other neighboring countries.<sup><a href="#note12">12</a></sup>
</p>
<p>
	The persistence of witch-hunting in Africa can be attributed to the absence of effective campaigns and measures to eliminate this unfounded and destructive practice. The situation is dire and getting worse. In Ghana, for example, the government plans on shutting down safe spaces for victims, and the president has declined to sign a bill into law that would criminalize witchcraft accusations and the act of witch-hunting.
</p>
<p>
	For this reason, in 2020 I founded Advocacy for Alleged Witches (AfAW) with the aim of combating witch persecution in Africa. Our mission is to put an end to witch-hunting on the continent by 2030.<sup><a href="#note13">13</a></sup> AfAW was created to address significant gaps in the fight against witch persecution in Africa. One of our primary goals is to challenge the misrepresentation of African witchcraft perpetuated by Western anthropologists, who have often portrayed witch-hunting as an inherent part of African culture, suggesting that witch persecution serves useful socioeconomic functions. (This perspective arises from a broader issue within modern anthropology, where extreme cultural relativism sometimes leads to an overemphasis on the practices of indigenous peoples, an overcorrection of past trends that belittled those same practices.) Some Western scholars tend to present witchcraft in the West as a “wild” phenomenon, and witchcraft in Africa as having domestic value and benefit. The academic literature tends to explain witchcraft accusations and witch persecutions from the viewpoint of the accusers rather than the accused. This approach is problematic and dangerous, as it silences the voices of those accused of witchcraft and diminishes their predicament.
</p>
<p>
	Due to this misrepresentation, Western NGOs that fund initiatives to address abuses linked to witchcraft beliefs have waged a lackluster campaign. They have largely avoided describing witchcraft in Africa as a form of superstition, instead choosing to adopt a patronizing approach to tackling witch-hunting—they often claim to “respect” witchcraft as an aspect of African cultures.<sup><a href="#note14">14</a></sup> As a result, NGOs do not treat the issue of witch persecution in Africa with the urgency it deserves.
</p>
<p>
	Likewise, African NGOs and activists have been complicit. Many lack the political will and funding to effectively challenge this harmful practice. In fact, many African NGO actors believe in witchcraft themselves! Witch-hunting persists in the region due to a lack of accurate information, widespread misinformation, and insufficient action. To end witch-hunting, a paradigm shift is needed: the way witchcraft belief and witch-hunting are perceived and addressed must change.
</p>
<p>
	AfAW aims to catalyze this crucial shift and transformation. It operates as a practical and applied form of skepticism, employing the principles of reason and compassion to combat witch-hunting. Through public education and enlightenment efforts, we question and debate witchcraft and ritual beliefs, aiming to dispel the misconceptions far too often used to justify abuses. Our goal is to engage African witchcraft believers in thoughtful dialogue, guiding them away from illusions, delusions, and superstitions.
</p>
<p>
	The persistence of abuses linked to witchcraft and ritual beliefs in the region is due to a lack of robust initiatives applying skeptical thinking to the problem. To effectively combat witch persecution, information must be translated into action, and interpretations into tangible policies and interventions. To achieve this, AfAW employs the “informaction” theory of change, combining information dissemination with actionable steps.
</p>
<p>
	At the local level, we focus on bridging the information and action gaps. Accusers are misinformed about the true causes of illnesses, deaths, and misfortunes, often attributing these events to witchcraft due to a lack of accurate information. Many people impute misfortunes to witchcraft because they are unaware of where to seek help or who or what is genuinely responsible for their troubles. This lack of understanding extends to what constitutes valid reasons and causal explanations for their problems.
</p>
<p>
	As part of the efforts to end witch-hunting, we highlight misinformation and disinformation about the true causes of misfortune, illness, death, accidents, poverty, and infertility. This includes debunking the falsehoods that charlatans, con artists, traditional priests, pastors, and holy figures such as mallams and marabouts exploit to manipulate the vulnerable and the ignorant. At AfAW, we provide evidence-based knowledge, explanations, and interpretations of misfortunes.
</p>
<p>
	Our efforts include educating the public on existing laws and mechanisms to address allegations of witchcraft. We conduct sensitization campaigns targeting public institutions such as schools, colleges, and universities. Additionally, we sponsor media programs, issue press releases, engage in social media advocacy, and publish articles aimed at dispelling myths and misinformation related to witch-hunting in the region.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.2/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n2_lg.jpg" alt="Skeptic 29.2 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.2<br />
		<a href="https://www.skeptic.com/productlink/magv29n2">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/264050#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	We also facilitate actions and interventions by both state and non-state agencies. In many post-colonial African states, governmental institutions are weak with limited powers and presence. One of our key objectives is to encourage institutional collaboration to enhance efficiency and effectiveness. We petition the police, the courts, and state human rights institutions. Our work prompts these agencies to act, collaborate, and implement appropriate measures to penalize witch-hunting activities in the region.
</p>
<p>
	Additionally, AfAW intervenes to support individual victims of witch persecution based on their specific needs and the resources available. For example, in cases where victims have survived, we relocate them to safe places, assist with their medical treatment, and facilitate their access to justice. In situations where the accused have been killed, we provide support to the victims’ relatives and ensure that the perpetrators are brought to justice.
</p>
<p>
	We get more cases than we can handle. With limited resources, we are unable to intervene in every situation we become aware of. However, in less than four years, our organization has made a significant impact through our interventions in Nigeria and beyond. We are deploying the canon of skeptical rationality to save lives, awaken Africans from their dogmatic and superstitious slumber, and bring about an African Enlightenment.
</p>
<p>
	This is a <em>real</em> culture war, with <em>real</em> consequences, and skepticism is making a <em>real</em> difference. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Leo Igwe</strong> is a skeptic and director of the Advocacy for Alleged Witches which aims to end witch-hunting in Africa by 2030. His human rights fieldwork has led to his arrest on several occasions in Nigeria.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/3TyhZyc">https://bit.ly/3TyhZyc</a>
		</li>
<li id="note02">
			<a href="https://bit.ly/3xiOmtD">https://bit.ly/3xiOmtD</a>
		</li>
<li id="note03">
			<a href="https://bit.ly/3PMiovX">https://bit.ly/3PMiovX</a>
		</li>
<li id="note04">
			<a href="https://bit.ly/3vAPIze">https://bit.ly/3vAPIze</a>
		</li>
<li id="note05">
			<a href="https://bit.ly/43Iv24N">https://bit.ly/43Iv24N</a>
		</li>
<li id="note06">
			<a href="https://bit.ly/3IZamfu">https://bit.ly/3IZamfu</a>
		</li>
<li id="note07">
			<a href="https://bit.ly/3IYJvjR">https://bit.ly/3IYJvjR</a>
		</li>
<li id="note08">
			<a href="https://bit.ly/3TVdzTz">https://bit.ly/3TVdzTz</a>
		</li>
<li id="note09">
			<a href="https://bit.ly/3TVdCPf">https://bit.ly/3TVdCPf</a>
		</li>
<li id="note10">
			<a href="https://bit.ly/3J0W8uz">https://bit.ly/3J0W8uz</a>
		</li>
<li id="note11">
			<a href="https://bit.ly/3PHS7yM">https://bit.ly/3PHS7yM</a>
		</li>
<li id="note12">
			<a href="https://bit.ly/4awJzTs">https://bit.ly/4awJzTs</a>
		</li>
<li id="note13">
			<a href="https://bit.ly/3IX6g7I">https://bit.ly/3IX6g7I</a>
		</li>
<li id="note14">
			<a href="https://bit.ly/3Q7NIpp">https://bit.ly/3Q7NIpp</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>What Type of Social Justice Do We Want?</title>
		<link>https://www.skeptic.com/reading_room/what-type-of-social-justice-do-we-want/</link>
		
		<dc:creator><![CDATA[Michael Mills, Robert Maranto, and Richard Redding]]></dc:creator>
		<pubDate>Fri, 06 Sep 2024 07:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[academic freedom]]></category>
		<category><![CDATA[campus indoctrination]]></category>
		<category><![CDATA[cancel culture on campus]]></category>
		<category><![CDATA[critical pedagogy]]></category>
		<category><![CDATA[critical theory in academia]]></category>
		<category><![CDATA[diversity of thought in higher education]]></category>
		<category><![CDATA[free speech on college campuses]]></category>
		<category><![CDATA[identity politics in universities]]></category>
		<category><![CDATA[intellectual diversity in universities]]></category>
		<category><![CDATA[liberal vs critical social justice]]></category>
		<category><![CDATA[Marxist methodology in academia]]></category>
		<category><![CDATA[open inquiry in education]]></category>
		<category><![CDATA[political activism in classrooms]]></category>
		<category><![CDATA[social justice in higher education]]></category>
		<category><![CDATA[student rights and free expression]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45203</guid>

					<description><![CDATA[What if the pursuit of social justice on college campuses was silencing free speech and stifling intellectual diversity? This provocative article delves into the rise of critical theories in academia, exposing a growing tension between traditional liberal values and a new, more radical approach to social justice. As professors increasingly use classrooms for political activism, students face a crucial dilemma: speak up or stay silent? Discover how this ideological shift is reshaping higher education and what it means for the&#8230;]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">W</span>e are three college professors who wish to call attention to a growing problem, namely the erosion of the foundational values of a college education: free inquiry and free speech, rationality and empiricism, civil discussion and debate, and openness to new ideas.
</p>
<h4>
	The Rise of Critical Theories<br />
</h4>
<p>
	Critical theory is a school of thought that has its roots in Marxist theories of human nature and society. It originally developed in Germany in the 1920s among a group of scholars at the Institute for Social Research, who were attempting to salvage some of the failed ideas of Marxism by extending the theory to embrace non-economic forms of inequality and oppression.
</p>
<p>
	Critical theorists believe that mainstream knowledge is used to promote the interests of the powerful. Unlike traditional social science, which aims to objectively describe human nature and society by carrying out scientific research, critical theory promotes ideological narratives as self-evidently true. Based on their theories about human nature and social justice, critical theorists promote political activism (or “praxis”), and at times even violent revolution, to achieve their goals.
</p>
<p>
	The predecessor to critical theory, Marxism, simplistically divided people into groups labeled as <em>oppressors</em> or <em>oppressed</em>. Marxism’s original group division was economic—the groups were the oppressive Bourgeois (those who controlled the means of production) and the oppressed Proletariat (the workers). It tried to explain the systemic causes of these group divisions (capitalism) and it developed a set of proposed solutions, including violent revolution, that it presumed would lead to a utopian communist society. These steps, which we will call the “Marxist methodology,” subsequently became part of critical theories that then focused on additional ways of dividing people into categories of oppressors and oppressed. The Marxist methodology follows these steps:<span id="more-45203"></span>
</p>
<div class="imageclearall">
	<a href="https://www.skeptic.com/reading_room/images/29-2/Table-1-Social-Movements-Based-on-Critical-Theories-That-Have-Used-Marxist-Methodology.png"><img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-2/Table-1-Social-Movements-Based-on-Critical-Theories-That-Have-Used-Marxist-Methodology.png" alt="Table 1. Social Movements Based on Critical Theories That Have Used Marxist Methodology" width="750" height="672" class="noBoxShadow" style="display: block;" /></a></p>
<p class="caption">Click table to enlarge</p>
</div>
<ol>
<li>
		Divide people into two (or more) groups.
	</li>
<li>
		Label one group as the oppressed (“the good”), the other as the oppressors (“the bad”).
	</li>
<li>
		Develop a theory that purports to explain the systemic causes of this division. The theory is often conspiratorial in nature—oppressors are seen as scheming to keep the oppressed in their place.
	</li>
<li>
		Identify and implement solutions. Encourage group solidarity among the oppressed group, promote an “us-versus-them” mentality, groupthink, and adherence to the party line, invent insulting labels to brand members of the out-group, and engage in activism and/or revolution that will purportedly end those social injustices.
	</li>
</ol>
<p>
	Many social movements based on critical theories have used this Marxist methodology, as noted in Table 1.
</p>
<p>
	All these ideological movements have restricted free speech, encouraged an “us” versus “them” political tribalism, employed personal <em>ad hominem</em> attacks against opponents, and promoted cancellation campaigns. While it is important to respect diversity and acknowledge historical injustices, we should keep in mind that truly liberal worldviews <em>emphasize</em> our common humanity—which is far less divisive.
</p>
<h4>
	What Does “Social Justice” Mean?<br />
</h4>
<p>
	The new higher education mantra, “social justice,” sounds good, but it can in fact refer to either of two often mutually exclusive philosophies: <em>liberal social justice</em> or <em>critical social justice</em>. Though few acknowledge it, social justice is increasingly sold as the former but practiced as the latter. Consider how the two compare in Table 2.
</p>
<div class="imageclearall">
	<a href="https://www.skeptic.com/reading_room/images/29-2/Table-2-A-Comparison-of-Liberal-Social-Justice-and-Critical-Social-Justice-Philosophies.png"><img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-2/Table-2-A-Comparison-of-Liberal-Social-Justice-and-Critical-Social-Justice-Philosophies.png" alt="Table 2. A Comparison of Liberal Social Justice and Critical Social Justice Philosophies" width="750" height="573" class="noBoxShadow" style="display: block;" /></a></p>
<p class="caption">Click table to enlarge</p>
</div>
<p>
	As is evident, <em>liberal social justice</em> and <em>critical social justice</em> employ two very different methods in determining what constitutes social justice.
</p>
<h4>
	Language Revisionism<br />
</h4>
<p>
	Critical social justice activists often use the “Motte and Bailey strategy” (see Table 3) to make extreme proposals appear moderate. In this gambit, a highly defensible “Motte” position is promoted publicly, while the advocate successively works toward a more radical “Bailey” position, retreating to the Motte whenever challenged. This gambit is often used in postmodernist discourses. For example, in the assertion that morality is socially constructed, the Motte is that our beliefs are socially influenced, and the Bailey is that there is no such thing as morality or truth. Another example:
</p>
<ul>
<li>
		<em>A (non-controversial) Motte position</em>: “We want social justice.” This statement is difficult to argue against, as just about everyone believes in fairness (though they may differ in how they define the term or how to achieve it).
	</li>
<li>
		<em>A (controversial) Bailey position</em>: “We need to re-define social justice as <em>critical social justice</em> and we need to make fundamental (sometimes radical) changes to our society including ending the rule of law, free speech, property rights, and merit systems.” This statement is much more controversial than the Motte, as it suggests that the current social order is inherently unjust and so needs to be overthrown.
	</li>
</ul>
<p>
	Here are more examples of the Motte and Bailey strategies with respect to the re-definition of some commonly used words.
</p>
<div class="imageclearall">
	<a href="https://www.skeptic.com/reading_room/images/29-2/Table-3-More-Examples-of-the-Motte-and-Bailey-Strategies-With-Respect-to-the-Re-Definition-of-Some-Commonly-Used-Words.png"><img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-2/Table-3-More-Examples-of-the-Motte-and-Bailey-Strategies-With-Respect-to-the-Re-Definition-of-Some-Commonly-Used-Words.png" alt="Table 3. More Examples of the Motte and Bailey Strategies With Respect to the Re-Definition of Some Commonly Used Words" width="750" height="413" class="noBoxShadow" style="display: block;" /></a> </p>
<p class="caption">Click table to enlarge</p>
</div>
<h4>
	What Is Social Justice When Reinterpreted Through a Critical Social Justice Lens?<sup><a href="#note01">1</a></sup><br />
</h4>
<p>
	Again, the term “social justice” in common language refers to the <em>liberal social justice</em> conceptions of individual rights and responsibilities, equal opportunity, blind justice, equality before the law, etc., as noted above. These ideas evolved from historic common law, the Enlightenment (particularly the Scottish Enlightenment), and U.S. constitutionalism.
</p>
<p>
	However, over recent decades the term <em>social justice</em> has come to be redefined in terms of <em>critical</em>, not <em>liberal</em>, social justice. This re-definition was accomplished surreptitiously through the Motte and Bailey gambit, and it also allowed the more radical philosophy of <em>critical theory</em> itself to be covertly introduced into college campuses while flying under the academic radar. By analogy, the term “social justice” has been used as a terminological Trojan horse to insert <em>critical social justice</em> and <em>critical theory</em> into the academy under the guise of <em>liberal social justice</em>.
</p>
<h4>
	Restrictions on Freedom of Speech and Open Inquiry<br />
</h4>
<p>
	This sort of critical social justice <em>activism and indoctrination</em> (as opposed to exposing students to these perspectives in the context of discussing and debating the respective strengths and weaknesses of a range of perspectives) is the opposite of free expression and open inquiry, and thus it is the antithesis of the foundational values of traditional higher education.
</p>
<p>
	Often mere attempts to question how, why, or whether <em>X-injustice</em> is happening lead to accusations that the questioner must be a bigoted “<em>X-ist</em>” or “<em>X-phobe</em>.” Questioning is often dismissed by critical theorists as defensive rhetoric employed to protect one’s privilege and power. The questioner must thus be silenced, ostracized, and/or canceled. As documented by the Foundation for Individual Rights and Expression (FIRE), this has in fact happened thousands of times.
</p>
<h4>
	Some Examples of Restriction of Speech and Open Inquiry<br />
</h4>
<p>
	A series of large-scale empirical studies beginning in the year 2000 found that both students and professors report fearing to express or explore political and ideological viewpoints that are critical of critical theory.<sup><a href="#note02">2</a></sup> Further, campuses have little ideological diversity among faculty and administrators, with typically a 12:1 ratio of liberal/progressive to conservative/libertarian, and many departments and some whole fields lacking any conservative or libertarian faculty members. Studies document that many professors freely admit to discriminating against colleagues and students who support liberal, rather than critical, conceptions of social justice. Here are some recent representative examples:
</p>
<ul>
<li>
		Bret Weinstein, a former professor of biology at Evergreen State College in Olympia, WA, was deplatformed and forced to resign after he was accused of racism and white privilege for refusing to participate in a “Day of Absence,” which asked white faculty to stay off campus for the day to celebrate people of color.
	</li>
<li>
		Lindsay Shepherd, a teaching assistant at Wilfrid Laurier University in Canada, was suspended after she showed her class a video of a debate about gender pronouns. She was accused of violating the university’s policy on discrimination and harassment.
	</li>
<li>
		Dorian Abbot, an associate professor of geophysics at the University of Chicago, was disinvited from giving a prestigious lecture at the Massachusetts Institute of Technology (MIT) after he wrote an opinion piece criticizing diversity initiatives and instead supporting merit-based college admissions, hiring, and promotions.
	</li>
<li>
		Nicholas Christakis, a professor of sociology at Yale University, was placed on administrative leave after he sent an email to students urging them to be civil to those with whom they disagree.
	</li>
<li>
		Ayaan Hirsi Ali, a former Dutch politician and women’s rights activist who herself suffered female genital mutilation as a child in a traditional Islamic society, was disinvited from speaking at the University of California, Berkeley, after students protested her views on Islam.
	</li>
</ul>
<h4>
	Critical Pedagogy: Political Activism in the Classroom<br />
</h4>
<p>
	Critical pedagogy is an ideological approach to teaching that attempts to impose on the classroom political views and activism consistent with critical theory. It was founded by the Brazilian philosopher and educator Paulo Freire, who promoted it through his 1968 book <em>Pedagogy of the Oppressed</em>. It pressures students to adopt a specific political ideology and rejects dissenting views. Doing so takes time away from developing core academic skills, including critical thinking skills.
</p>
<p>
	At its worst, critical pedagogy can produce an environment where some professors and administrators try to tell students not <em>how</em> to think, but <em>what</em> to think. Professors should not be using the lectern as an activist bully pulpit to push their personal ideological or political beliefs. Since professors are in positions of power relative to their students, such activism in the classroom is unethical and constitutes professional misconduct. Students should not be expected to conform to ideologies or dogmas in the classroom.
</p>
<p>
	It is unfortunate that students will very likely encounter activism on the part of some of their professors and even some fellow students. Students who disagree may at times feel that they should keep their thoughts to themselves. But do not. Speak up!
</p>
<h4>
	Spotting Education v. Indoctrination<br />
</h4>
<p>
	To be clear, although we do not subscribe to critical theory because of the difficulties with it that we (and many others) have identified, we do not object to a professor teaching or discussing critical theory and critical social justice and presenting his or her opinions about matters based on those perspectives. College is all about exposing students to a range of ideas and opinions. However, professors should not attempt to indoctrinate their students with critical theory or anything else, and they should expose students to a range of perspectives on various issues. Below are a few pointers to help students to identify whether a course or a professor is promoting critical theory through indoctrination rather than education.
</p>
<p>
	Courses that educate tend to have:
</p>
<ul>
<li>
		<em>A Balance of Perspectives</em>: Look for courses that present multiple viewpoints and encourage open discussion. A balanced education should expose students to various ideas, even those that challenge the prevailing consensus.
	</li>
<li>
		<em>Freedom of Thought</em>: If a course or professor discourages questioning or presents information as absolute truth without room for debate, it might be indoctrination. In contrast to a fundamentalist religion, education encourages students to question, analyze, and form their own conclusions.
	</li>
<li>
		<em>An Evidence-Based Approach</em>: Knowledge should be grounded in research, evidence, and historical context. If a course lacks substantial evidence to support its claims or relies primarily on emotional appeals, it is less education than indoctrination.
	</li>
<li>
		<em>Engagement with Opposing Views</em>: A healthy educational environment should encourage engaging with opposing viewpoints. If a course dismisses or demonizes dissenting perspectives, it could be leaning towards indoctrination.
	</li>
</ul>
<p>
	Whereas courses that indoctrinate tend to:
</p>
<ul>
<li>
		<em>Discourage Disagreement</em>: The professor discourages students from challenging their favored concepts and ideals.
	</li>
<li>
		<em>Rely on Emotional Appeals and Ad Hominem Attacks</em>: The professor uses emotional appeals and personal attacks to silence dissent.
	</li>
</ul>
<p>
	Campus activism can be covert rather than overt, with the professor signaling to students what is acceptable through their reactions to student comments, the topics they select to discuss or omit, their grading practices and feedback, how they interact with and treat students holding different opinions, or even their body language when discussing various topics.
</p>
<p>
	A 2007 American Association of University Professors (AAUP) subcommittee report stated that such activist professors present their favored worldview “dogmatically, without allowing students to challenge their validity or advance alternative understandings” and that such instructors “insist that students accept as truth propositions that are in fact professionally contestable.” Given that professors are in a position of power over their students, this type of behavior is especially inappropriate. And, as far back as 1915, the AAUP advised that professors “should, in dealing with [controversial] subjects, set forth justly, without suppression or innuendo, the divergent opinions” on the issue. This 1915 advisory is still in effect. Indeed, any failure to do so may constitute an ethical breach. Professors should teach students about different sides of an issue and do so fairly, rather than pretending there is just one permitted viewpoint, as in a Marxist or authoritarian organization or system.
</p>
<h4>
	What Should Be Done?<br />
</h4>
<p>
	First, if students encounter a professor who they believe is using the classroom to engage in ideological or political activism, they should speak up. That may be less risky than they think. Remember, education should empower students to engage in critical thinking and constructive dialogue. When concerning situations arise, students can approach the professor for a respectful discussion and, if needed, seek guidance from department chairs or administrators who value open inquiry. There usually are some.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.2/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n2_lg.jpg" alt="Skeptic 29.2 (cover)" width="200" height="260" class="boxShadow"></a>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.2<br />
		<a href="https://www.skeptic.com/productlink/magv29n2">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/264050#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	Often, students cannot rely on their institution’s hierarchy alone; indeed, it may be part of the problem. Moreover, there is safety in numbers. Enlist parents and outside organizations to lobby the college or university to ensure that it is promoting intellectual diversity, open inquiry, and free thought. (A very simple change is to ask that course evaluations include questions on whether students felt free to voice their opinions in class, whether the professor dealt fairly with students having divergent views, and whether different sides of controversial issues were presented or discussed.)
</p>
<p>
	Today, there are numerous organizations that can help students. Currently, the most prominent bipartisan protectors and promoters of free thought are the Foundation for Individual Rights and Expression (FIRE) and Speech First. The important thing to remember is that you are not alone. Aside from those organizations, it is certain that many others at your institution will be rooting for you, even if they feel that they can only do so privately.
</p>
<p>
	Second, know that when confronted with transparency (sometimes supplemented with attorneys), bullies tend to back down.
</p>
<p>
	Third, know that the trials students are facing now can make them stronger and, further, are nothing like those faced by Alexander Solzhenitsyn, Vaclav Havel, Martin Luther King, Jr., Jackie Robinson, James Meredith, and thousands of others who faced suppression for their beliefs or their identity. The worst fate awaiting students would be having to transfer from a school that does not value free thought to one that does. Students have choices. Make them wisely. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Michael Mills</strong> is an evolutionary psychologist at Loyola Marymount University (LMU). He earned his B.A. from UC Santa Cruz and his Ph.D. from UC Santa Barbara. He has served as Chair, and as the Director of the Graduate Program, at the LMU Psychology Department. He serves on the editorial boards of several academic journals and on the executive board of the Society for Open Inquiry in the Behavioral Sciences (SOIBS).
	</p>
<p>
		<strong>Robert Maranto</strong> is the 21st Century Chair in Leadership in the Department of Education Reform at the University of Arkansas, where he studies bureaucratic reform and edits the <em>Journal of School Choice</em>. He has served on the Fayetteville School Board (2015-20) and currently serves on the executive board of the Society for Open Inquiry in the Behavioral Sciences (SOIBS). With others, he has produced about 100 refereed publications and 17 scholarly books so boring his own mother refused to read them, including <em>President Obama and Education Reform</em> (Palgrave/Macmillan, 2012), <em>Educating Believers: Religion and School Choice</em> (Routledge, 2021), and <em>The Free Inquiry Papers</em> (AEI, 2024). He can be reached at <a href="mailto:rmaranto@uark.edu">rmaranto@uark.edu</a>.
	</p>
<p>
		<strong>Richard E. Redding</strong> is the Ronald D. Rotunda Distinguished Professor of Jurisprudence and Associate Dean, and Professor of Psychology and Education at Chapman University. He has written extensively on the importance of viewpoint and sociopolitical diversity in teaching, research, and professional practice. Notable publications include <em>Ideological and Political Bias in Psychology: Nature, Scope, and Solutions</em> (Springer, 2023); <em>Sociopolitical Values as the Deep Culture in Culturally-Competent Psychotherapy</em> (Clinical Psychological Science, 2023); and <em>Sociopolitical Diversity in Psychology: The Case for Pluralism</em> (American Psychologist, 2001). He is the founding President of the Society for Open Inquiry in the Behavioral Sciences (<a href="https://www.soibs.com/">soibs.com</a>).
	</p>
<h5>
		Organizations<br />
	</h5>
<ul>
<li>
			The Foundation for Individual Rights and Expression (FIRE). In particular, see its College Free Speech Rankings: <a href="https://bit.ly/3T0kbyn">https://bit.ly/3T0kbyn</a>
		</li>
<li>
			Campus Reform: campusreform.org
		</li>
<li>
			Heterodox Academy: <a href="https://heterodoxacademy.org/">heterodoxacademy.org</a>
		</li>
<li>
			Academic Freedom Alliance: <a href="https://academicfreedom.org/">academicfreedom.org</a>
		</li>
<li>
			Society for Open Inquiry in the Behavioral Sciences (SOIBS): <a href="https://www.soibs.com/">soibs.com</a>
		</li>
</ul>
<h5>
		Additional Resources<br />
	</h5>
<ul>
<li>
			Infographic: Critical Theory and Classic Liberalism: <a href="https://bit.ly/3uMwOVP">https://bit.ly/3uMwOVP</a>
		</li>
<li>
			“Universities must choose between truth or social justice, not both” by Jonathan Haidt <a href="https://bit.ly/3Il7ucw">https://bit.ly/3Il7ucw</a>
		</li>
<li>
			“The Two Fiduciary Duties of Professors” by Jonathan Haidt <a href="https://bit.ly/49FJCwq">https://bit.ly/49FJCwq</a>
		</li>
</ul>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			This section relies heavily on three works. For the best conceptual analyses of various branches of critical theory, see Helen Pluckrose and James Lindsay, <em>Cynical Theories: How Activist Scholarship Made Everything About Race, Gender, and Identity</em> (Pitchstone Publishing). For a sound journalistic account of how critical theory spread across institutions, see Christopher F. Rufo, <em>America’s Cultural Revolution: How the Radical Left Conquered Everything</em> (Broadside Books). Though Rufo has become a political activist, we can attest that his journalistic work is sound, at least as regards those areas with which we are familiar. Finally, the best single work about how critical theory and other moral panics spread, particularly in higher education, remains Greg Lukianoff and Jonathan Haidt’s <em>The Coddling of the American Mind</em> (Penguin). Also see Kenny Xu’s <em>School of Woke: How Critical Race Theory Infiltrated American Schools and Why We Must Reclaim Them</em> (Center Street) and Isaac Gottesman’s <em>The Critical Turn in Education</em> (Routledge).
		</li>
<li id="note02">
			<em>Coddling of the American Mind</em>; Eric Kaufmann, <a href="https://cspicenter.org/reports/academicfreedom/">https://cspicenter.org/reports/academicfreedom/</a>. Analyzing large-scale surveys, political scientist Eric Kaufmann found that over a third of conservative professors and doctoral students have faced threats of discipline for their views, as have one in ten liberals, often censored by those farther to their left. Also see Richard E. Redding, “Psychologists’ Politics,” in Craig Frisby, Richard Redding, William O’Donohue, and Scott Lilienfeld, <em>Ideological and Political Bias in Psychology: Nature, Scope, and Solutions</em> (Springer), which comprehensively reviews historical and current studies on the political views of professors and university administrators.
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Behavioral Science Needs to Return to the Basics</title>
		<link>https://www.skeptic.com/reading_room/behavioral-science-dei-critique-scientific-principles/</link>
		
		<dc:creator><![CDATA[April Bleske-Rechek, Michael H. Bernstein, and Robert O. Deaner]]></dc:creator>
		<pubDate>Fri, 30 Aug 2024 14:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[academic ideological diversity]]></category>
		<category><![CDATA[adversarial collaboration]]></category>
		<category><![CDATA[behavioral science]]></category>
		<category><![CDATA[cognitive biases]]></category>
		<category><![CDATA[correlation vs. causation]]></category>
		<category><![CDATA[DEI]]></category>
		<category><![CDATA[diversity equity inclusion]]></category>
		<category><![CDATA[expectancy effects]]></category>
		<category><![CDATA[gender dysphoria]]></category>
		<category><![CDATA[Implicit Association Test (IAT)]]></category>
		<category><![CDATA[lived experience]]></category>
		<category><![CDATA[measurement validity]]></category>
		<category><![CDATA[microaggressions]]></category>
		<category><![CDATA[out-group antipathy]]></category>
		<category><![CDATA[scientific principles]]></category>
		<category><![CDATA[truth-seeking in science]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45179</guid>

					<description><![CDATA[In this thought-provoking piece, three behavioral scientists argue that the field has strayed from fundamental scientific principles in its pursuit of Diversity, Equity, and Inclusion (DEI) goals. They contend that an overreliance on "lived experience," poor measurement practices, and confusion between correlation and causation have led to flawed research and misguided policies. The authors offer a critical examination of popular DEI concepts and suggest ways to get behavioral science back on track with rigorous, objective methods.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">O</span>ver the past decade behavioral science, particularly psychology, has come under fire from critics for being fixated on progressive political ideology, most notably Diversity, Equity, and Inclusion (DEI). The critics’ evidence is, unfortunately, quite strong. For example, a recent volume, <em>Ideological and Political Bias in Psychology</em>,<sup><a href="#note01">1</a></sup> recounts many incidents of scholarly censorship and personal attacks that a decade ago might have only been conceivable as satire.
</p>
<p>
	We believe that many problems plaguing contemporary behavioral science, especially for issues touching upon DEI, can best be understood, at their root, as a failure to adhere to basic scientific principles. In this essay, we will address three fundamental scientific principles: (1) Prioritize Objective Data Over Lived Experience; (2) Measure Well; and (3) Distinguish Appropriately Between Correlation and Causation. We will show how DEI scholarship often violates those principles, and offer suggestions for getting behavioral science back on track. “Getting back to the basics” may not sound exciting but, as athletes, musicians, and other performers have long recognized, reinforcing the fundamentals is often the best way to eliminate bad habits in order to then move forward.
</p>
<h4>
	The Failure to Adhere to Basic Scientific Principles<br />
</h4>
<h5>
	Principle #1: Prioritize Objective Data Over Lived Experience<br />
</h5>
<p>
	A foundational assumption of science is that objective truth exists and that humans can discover it.<sup><a href="#note02">2</a>, <a href="#note03">3</a>, <a href="#note04">4</a>, <a href="#note05">5</a></sup> We do this most effectively by proposing testable ideas about the world, making systematic observations to test the ideas, and revising our ideas based on those observations. A crucial point is that this process of proposing and testing ideas is open to everyone. A fifth grader in Timbuktu, with the right training and equipment, should be able to take atmospheric observations that are as valuable as those of a Nobel Prize-winning scientist from MIT. If the fifth grader’s observations are discounted, this should only occur because their measurement methods were poor, not because of their nationality, gender, age, family name, or any other personal attribute.
</p>
<p>
	A corollary of science being equally open to all is that an individual’s personal experience or “lived experience” carries no inherent weight in claims about objective reality. It is not that lived experience doesn’t have value; indeed, it has tremendous value in that it provides a window into individuals’ <em>perceptions</em> of reality. However, perception can be wildly inaccurate and does not necessarily equate to reality. If that Nobel Prize-winning scientist vehemently disputed global warming because his personal experience was that temperatures have not changed over time, yet he provided no atmospheric measurements or systematic tests of his claim, other scientists would rightly ignore his statements—at least as regards the question of climate change. <span id="more-45179"></span>
</p>
<p>
	The limited utility of a person’s lived experience seems obvious in most scientific disciplines, such as in the study of rocks and wind patterns, but less so in psychology. After all, psychological science involves the study of <em>people</em>—and they think and have feelings about their lived experiences. However, what is the case in other scientific disciplines is also the case in psychological science: lived experience does not provide a foolproof guide to objective reality.
</p>
<p>
	To take an example from the behavioral sciences, consider the Cambridge-Somerville Youth Study.<sup><a href="#note06">6</a></sup> At-risk boys were mentored for five years, from the ages of 10 to 15. They participated in a host of programs, including tutoring, sports, and community groups, and were given medical and psychiatric care. Decades later, most of those who participated claimed the program had been helpful. Put differently, their lived experience was that the program had a positive impact on their life. However, these boys fared no better on important outcomes than a matched group of at-risk boys who were not provided mentoring or extra support. In fact, boys in the program ended up <em>more</em> likely to engage in serious street crimes and, on average, they died at a younger age. The critical point is that giving epistemic authority to lived experience would have resulted in inaccurate conclusions. And the Cambridge-Somerville Youth Study is not an isolated example. There are many programs that people <em>feel</em> are effective, but when tested systematically turn out to be ineffective, at best. These include programs like DARE,<sup><a href="#note07">7</a></sup> school-wide mental health interventions,<sup><a href="#note08">8</a></sup> and—of course—many diversity training programs.<sup><a href="#note09">9</a></sup>
</p>
<p class="pullquote">
	DEI over-reach in behavioral science is intimately related to a failure within the scientific community to adhere to basic principles of science and appreciate important findings from the behavioral science literature.
</p>
<p>
	Indeed, when it comes to concerns related to DEI, the scientific tenet of prioritizing testable truth claims over lived experience has often fallen to the wayside. Members of specific identity groups are given privilege to speak about things that cannot be contested by those from other groups. In other words, in direct contradiction of the scientific method, some people are granted epistemic authority based solely on their lived experience.<sup><a href="#note10">10</a></sup>
</p>
<p>
	Consider gender dysphoria. In the past decade, there has been a drastic increase in the number of people, particularly children and adolescents, identifying as transgender. Those who express the desire to biologically transition often describe their lived experience as feeling “born in the wrong body,” and express confidence that transition will dramatically improve their lives. We argue that while these feelings must be acknowledged, they should not be taken as objective truth; instead, such feelings should be weighed against objective data on life outcomes of others who have considered gender transition and/or transitioned. And those data, while limited, suggest that many individuals who identify as transgender during childhood, but who do not medically transition, eventually identify again with the gender associated with their birth sex.<sup><a href="#note11">11</a>, <a href="#note12">12</a></sup> Although these are small, imperfect studies, they underscore that medical transition is not always the best option.
</p>
<p>
	Caution in automatically acceding to a client’s preference to transition is particularly important among minors. Few parents and health care professionals would affirm a severely underweight 13-year-old’s claim that, based on their lived experience, they are fat and will only be happy if they lose weight. Nevertheless, many psychologists and psychiatrists make a similar mistake when they affirm a transgender child’s desire to transition without carefully weighing the risks. In one study, 65 percent of people who had detransitioned reported that their clinician, who often was a psychologist, “did not evaluate whether their desire to transition was secondary to trauma or a mental health condition.”<sup><a href="#note13">13</a></sup> The concern, in other words, is that lived experience is being given <em>too much weight</em>. How patients feel is important, but their feelings should be only one factor among many, especially if they are minors. Mental health professionals should know this, and parents should be able to trust them to act accordingly.
</p>
<h5>
	Principle #2: Measure Well<br />
</h5>
<p>
	Another basic principle of behavioral science is that anything being measured must be measured reliably and validly. <em>Reliability</em> refers to the consistency of measurement; <em>validity</em> refers to whether the instrument is truly measuring what it claims to measure. For example, a triple beam balance is <em>reliable</em> if it yields the same value when repeatedly measuring the same object. The balance is <em>valid</em> if it yields a value of exactly 1 kg when measuring the reference kilogram (i.e., the International Prototype of the Kilogram), a platinum-iridium cylinder housed in a French vault under standardized conditions.
</p>
<p>
	Behavioral scientists’ understanding of any concept is constrained by the degree to which they can measure it consistently and accurately. Thus, to make a claim about a concept, whether about its prevalence in a population or its relation to another concept, scientists must first demonstrate both the reliability and the validity of the measure being used. For some measures of human behavior, such as time spent listening to podcasts or number of steps taken each day, achieving good reliability and validity is reasonably straightforward. Things are generally more challenging for the self-report measures that psychologists often use.
</p>
<p>
	Nevertheless, good measurement can sometimes be achieved, and the study of personality provides a nice model. In psychology, there are several excellent measures of the Big Five personality factors (Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness).<sup><a href="#note14">14</a></sup> Individuals’ responses are highly reliable: people who rate themselves as highly extraverted as young adults rate themselves similarly years later. Moreover, personality assessments are valid: individuals’ responses correlate with their actual day-to-day behaviors, as reported by themselves and as observed by others.<sup><a href="#note15">15</a></sup> In other words, people who rate themselves as high (versus low) in extraversion on psychological questionnaires, for example, really do spend more time socializing.
</p>
<p>
	However, not all psychological measures turn out to have solid reliability and validity. These include the popular Myers Briggs Type Indicator personality test and projective tests such as the Rorschach. Unfortunately, in the quest to support DEI, some concepts that fail the requirements of good measurement are used widely and without reservation. The concept of <em>microaggressions</em>, for example, has gained enormous traction despite its having fundamental measurement issues.
</p>
<p>
	“Microaggressions” were brought to psychologists’ attention by Derald Wing Sue and colleagues.<sup><a href="#note16">16</a></sup> Originally described as “brief and commonplace daily verbal, behavioral, or environmental indignities, whether intentional or unintentional, that communicate hostile, derogatory, or negative racial slights and insults toward people of color” (p. 271),<sup><a href="#note17">17</a></sup> the concept has since expanded in use to describe brief, verbal or nonverbal, indignities directed toward a different “other.”<sup><a href="#note18">18</a>, <a href="#note19">19</a></sup>
</p>
<p>
	In 2017, Scott Lilienfeld discussed how the failure to adhere to the principles of good measurement has rendered the concept of microaggression “wide open,” without any clear anchors to reality.<sup><a href="#note20">20</a></sup> The primary weakness for establishing validity, that is, for establishing evidence of truly measuring what scientists claim to be measuring, is that “microaggression” is defined in the eye of the beholder.<sup><a href="#note21">21</a></sup> Thus, any person at any point can say they have been “microaggressed” against, and no one can test, let alone refute, the claim because it is defined solely by the claimant’s subjective appraisal—their lived experience.
</p>
<p>
	As Scott Lilienfeld explained, the end result is that essentially anything, including opposing behaviors (such as calling on a student in class or <em>not</em> calling on a student in class) can be labeled a microaggression. A question such as, “Do you feel like you belong here?” could be perceived as a microaggression by one person but not by someone else; in fact, even the same person can perceive the same comment differently depending on their mood or on who asks the question (which would indicate poor reliability). Our criticism of microaggressions, then, spans concerns related to both weak measurement and an undue reliance on lived experience.
</p>
<p>
	Another of psychology’s most famous recent topics is the Implicit Association Test (IAT), which supposedly reveals implicit, or subconscious, bias. The IAT measures an individual’s reaction times when asked to classify pictures or text spatially. A video<sup><a href="#note22">22</a></sup> may be the best way to appreciate what is happening in the IAT, but the basic idea is that if a person more quickly pairs pictures of a Black person than those of a White person with a negative word (for example, “lazy” or “stupid”) then they have demonstrated their unconscious bias against Black people. The IAT was introduced by Anthony Greenwald and colleagues in the 1990s.<sup><a href="#note23">23</a></sup> They announced that their newly developed instrument, the race IAT, measures unconscious racial prejudice or bias and that 90 to 95 percent of Americans, including many racial minorities, demonstrated such bias. Since then, these scholars and their collaborators (plus others such as DEI administrators) have enjoyed tremendous success advancing the claim that the race IAT reveals pervasive unconscious bias that contributes to society-wide discrimination.
</p>
<p>
	Despite its immense influence, the IAT is a flawed measure. Regarding reliability, the correlation between a person’s response when taking the test at two different times hovers around 0.5.<sup><a href="#note24">24</a></sup> This is well below conventionally acceptable levels in psychology, and far below the test-retest reliabilities for accepted personality and cognitive ability measures, which can reach around 0.8, even when a person takes the tests decades later.<sup><a href="#note25">25</a>, <a href="#note26">26</a></sup>
</p>
<p class="pullquote">
	The best path forward is to get back to the basics: understand the serious limitations of lived experience, focus on quality measurement, and be mindful of the distinction between correlation and causation.
</p>
<p>
	As for the IAT’s validity, nobody has convincingly shown that patterns of reaction times actually reflect “unconscious bias” (or “implicit prejudice”) as opposed to cultural stereotypes.<sup><a href="#note27">27</a></sup> Moreover, in systematic syntheses of published studies, the association between scores on the race IAT and observations or measurements of real-world biased behavior is inconsistent and weak.<sup><a href="#note28">28</a>, <a href="#note29">29</a></sup> In other words, scores on the IAT do not meaningfully correlate with other ways of measuring racial bias or real-life manifestations of it.
</p>
<h5>
	Principle #3: Distinguish Appropriately Between Correlation and Causation<br />
</h5>
<p>
	“Correlation does not equal causation” is another basic principle of behavioral science (indeed, of all science). Although human brains seem built to readily notice and even anticipate causal connections, a valid claim that “X” has a causal effect on “Y” needs to meet three criteria, and a correlation between X and Y is only the first. The second criterion is that X precedes Y in time. The third and final criterion is that the link between X and Y is not actually due to some other variable that influences both X and Y (a “confounder”). To test this final point, researchers typically need to show that when X is manipulated in an experiment, Y also changes.
</p>
<p>
	Imagine, for instance, that a researcher asks students about their caffeine intake and sleep schedule, and upon analyzing the data finds that students’ caffeine consumption is negatively correlated with how much they sleep—those who report consuming more caffeine tend to report sleeping less. This is what many psychologists call correlational research (or associational or observational research). These correlational data <em>could</em> mean that caffeine consumption reduces sleep time, but the data could also mean that a lack of sleep causes an increase in caffeine consumption, or that working long hours causes both a decrease in sleep and an increase in caffeine. To make the case that caffeine <em>causes</em> poor sleep, the researcher must <em>impose</em>, by random assignment, different amounts of caffeine on students to determine how sleep is affected by varying doses. That is, the researcher would conduct a true experiment.
</p>
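<p>
	The caffeine-and-sleep scenario is easy to simulate. In the sketch below, every variable name and effect size is invented for illustration; by construction, caffeine has <em>no</em> causal effect on sleep, yet the two end up strongly negatively correlated because work hours drive both.
</p>

```python
import random

random.seed(1)

# Hypothetical model: long work hours (the confounder) increase caffeine
# intake and decrease sleep; caffeine has NO direct effect on sleep here.
n = 5000
work_hours = [random.gauss(8, 2) for _ in range(n)]
caffeine = [max(0.0, 0.5 * w + random.gauss(0, 1)) for w in work_hours]  # cups/day
sleep = [9 - 0.3 * w + random.gauss(0, 0.5) for w in work_hours]         # hours/night

def corr(xs, ys):
    # Pearson correlation coefficient, computed from scratch
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Strongly negative, despite zero causal link between the two variables
print(corr(caffeine, sleep))
```

<p>
	A researcher who saw only the caffeine and sleep columns of this dataset would observe exactly the negative correlation described above, and would have no way, from the correlation alone, to tell which of the three causal stories is true.
</p>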
<p>
	Distinguishing between correlation and causation is easier said in the abstract than practiced in reality, even for psychological scientists who are specifically trained to make the distinction.<sup><a href="#note30">30</a></sup> Part of the difficulty is that in behavioral science, many variables that are generally thought of as causal cannot be manipulated for ethical or practical reasons. For example, researchers cannot impose neglect (or abuse, corporal punishment, parental divorce, etc.) on some children and not others to study how children are affected by the experience. Still, absent experiments, psychologists bear the responsibility of providing converging, independent lines of evidence that indicate causality before they draw a causal conclusion. Indeed, scientists did this when it came to claiming that smoking causes cancer: they amassed evidence from national datasets with controls, discordant twin designs, correlational studies of exposure to second-hand smoke, non-human experiments, and so on—everything <em>but</em> experiments on humans—before coming to a consensus view that smoking causes cancer in humans. Our point is that investigating causal claims without true experiments is possible, but extremely difficult and time consuming.
</p>
<p>
	That said, the conflation of correlation with causation seems especially prevalent when it comes to DEI issues. In the context of microaggressions, for example, a Google search quickly reveals many scholars claiming that microaggressions <em>cause</em> psychological harm. Lilienfeld has been a rare voice suggesting that it is dangerous to claim that microaggressions cause mental health issues when there are no experimental data to support such a claim. Moreover, there is a confounding variable that predicts both (1) perceiving oneself as having been “microaggressed” against and (2) struggling with one’s mental health—namely, the well-documented personality trait of neuroticism. In other words, individuals who are prone to experience negative emotions (those who are high in neuroticism) often perceive that more people try to inflict harm on them than actually do, and these same individuals also struggle with mental health.
</p>
<p>
	Assuming we were able to develop a workable definition of “microaggressions,” what <em>would</em> a true experiment look like? An experiment would require that participants be exposed to microaggressions (or not), and then be measured or observed for indications of psychological harm. There are valid ethical concerns about such a study, but we believe it can be done. There is a lengthy precedent in psychological research of inflicting temporary discomfort with appropriate safeguards. For instance, in the widely used “Trier Social Stress Test” (TSST), participants deliver a speech with little preparation time in front of judges who purposefully avoid any non-verbal reaction, followed by a mental arithmetic task.<sup><a href="#note31">31</a></sup> If the TSST is acceptable for use in research, then it should also be acceptable to expose study participants to subtle slights.
</p>
<p>
	This fallacy of equating correlation with causation also arises in the context of gender transitioning and suicide. To make the point that not being able to transition is deeply damaging, transgender individuals, and sometimes their professional supporters, may ask parents something like, “Would you rather have a dead daughter or a living son?” One logical flaw here is assuming that because gender distress is associated with suicidal ideation, the gender distress must be causing the suicidal ideation. However, other psychological concerns, such as depression, anxiety, trauma, eating disorders, ADHD, and autism, could be causing both the gender distress and the suicidal ideation—another case of confounding variables. Indeed, these disorders occur more frequently in individuals who identify as transgender. Thus, it is quite possible that someone may suffer from depression, and this simultaneously raises their likelihood of identifying as transgender and of expressing suicidal ideation.
</p>
<p>
	It is not possible (nor would it be ethical if possible) to impose gender identity concerns on some children and not others to study the effect of gender dysphoria on suicidality. However, at this point, the correlational research that does exist has not offered compelling evidence that gender dysphoria causes increased suicidality. Studies have rarely attempted to rule out third variables, such as other mental health diagnoses. The few studies that have tried to control for other variables have yielded mixed results.<sup><a href="#note32">32</a>, <a href="#note33">33</a></sup> Until researchers have consistently <em>isolated</em> gender dysphoria as playing an independent role in suicidality, they should not claim that gender dysphoria increases suicide risk.
</p>
<p>
	Over three decades ago, the psychologist David Lykken wrote, “Psychology isn’t doing very well as a scientific discipline and something seems to be wrong somewhere” (p. 3).<sup><a href="#note34">34</a></sup> Sadly, psychology continues to falter; in fact, we think it has gotten worse. The emotional and moral pull of DEI concerns is understandable, but it may have short-circuited critical thinking about the limitations of lived experience, the requirement of using only reliable and valid measurement instruments, and the need to meet strict criteria before claiming that one variable causally influences another.
</p>
<h4>
	DEI Concepts Contradict Known Findings about Human Cognition<br />
</h4>
<p>
	The empirical bases for some DEI concepts contradict social scientific principles. Additionally, certain DEI ideas run counter to important findings about human nature that scientists have established by following the required scientific principles. We discuss three examples below.
</p>
<h5>
	Out-Group Antipathy<br />
</h5>
<p>
	Humans are tribal by nature. We have a long history of living in stable groups and competing against other groups. Thus, it’s no surprise that one of social psychology’s most robust findings is that in-group preferences are powerful and easy to evoke. For example, in studies where psychologists create in-groups and out-groups using arbitrary criteria such as shirt color, adults and children alike show a strong preference for their own group members.<sup><a href="#note35">35</a>, <a href="#note36">36</a></sup> Even infants prefer those who are similar to themselves<sup><a href="#note37">37</a></sup> and respond preferentially to those who punish <em>dissimilar</em> others.<sup><a href="#note38">38</a></sup>
</p>
<p class="pullquote">
	Constructive disagreement about ideas should be encouraged rather than leveraged as an excuse to silence those who may see the world differently.
</p>
<p>
	DEI, although generally well-intentioned, often overlooks this tribal aspect of our psychology. In particular, in the quest to confront the historical mistreatment of certain identity groups, it often instigates zero-sum thinking (i.e., that one group owes a debt to another; that one group cannot gain unless another loses). This type of thinking will exacerbate, rather than mitigate, animosity. A more fruitful approach would emphasize individual characteristics over group identity, and the common benefits that can arise when all individuals are treated fairly.
</p>
<h5>
	Expectancies<br />
</h5>
<p>
	When people expect to feel a certain way, they are more likely to experience that feeling.<sup><a href="#note39">39</a>, <a href="#note40">40</a></sup> Thus, when someone, especially an impressionable teenager or young adult, is told that they are a victim, the statement (even if true) is not merely a neutral descriptor. It can also set up the <em>expectation</em> of victimhood, with the downstream consequence of making them feel even more like a victim. DEI microaggression workshops may do exactly this—they prime individuals to perceive hostility and negative intent in ambiguous words and actions.<sup><a href="#note41">41</a></sup> The same logic applies to more pronounced forms of bigotry. For instance, when Robin DiAngelo describes “uniquely anti-black sentiment integral to white identity” (p. 95),<sup><a href="#note42">42</a></sup> the suggestion that White people are all anti-Black might have the effect of exacerbating both actual and perceived racism. Of course, we need to deal honestly with any and all racism when it does exist, but it is also important to understand the potential costs of exaggerating such claims. Expectancy effects might interact with the “virtuous victim effect,” wherein individuals perceive victims as being more moral than non-victims.<sup><a href="#note43">43</a>, <a href="#note44">44</a></sup> Thus, there can be social value gained simply in presenting oneself as a victim.
</p>
<h5>
	Cognitive Biases<br />
</h5>
<p>
	Cognitive biases are one of the most important and well-replicated discoveries of the behavioral sciences. It is therefore troubling that, in the discussion of DEI topics, psychologists often fall victim to those very biases.
</p>
<p>
	A striking example is the American Psychological Association’s (APA) statement shortly after the death of George Floyd, which provides a textbook illustration of the <em>availability bias</em>, the tendency to overvalue evidence that easily comes to mind. The APA, the largest psychological organization in the world, asserted after Floyd’s death that “The deaths of innocent black people targeted specifically because of their race—often by police officers—are both deeply shocking and shockingly routine.”<sup><a href="#note45">45</a></sup> How “shockingly routine” are they? According to the <em>Washington Post</em> database of police killings, in 2020 there were 248 Black people killed by police. By comparison, over 6,500 Black people were killed in traffic fatalities that year—a 26-fold difference.<sup><a href="#note46">46</a></sup> Also, some portion of those 248 victims were not innocent—given that 216 were armed, some killings would probably have been an appropriate use of force by the police to defend themselves or others. Some, presumably, were also not killed specifically <em>because of</em> their race. So why would the APA describe a relatively rare event as “shockingly routine”? This statement came in the aftermath of the widely publicized killings of Floyd, Ahmaud Arbery, and Breonna Taylor. In other words, these rare events were seen as common likely because widespread media coverage made them readily available in our minds.
</p>
<p>
	Unfortunately, the APA also recently fell prey to another well-known bias, the <em>base rate fallacy</em>, where relevant population sizes are ignored. In this case, the APA described new research that found “The typical woman was considered to be much more similar to a typical White woman than a typical Black woman.”<sup><a href="#note47">47</a></sup> Although not stated explicitly, the implication seems to be that, absent racism, the typical woman would be roughly midway between a typical White woman and a typical Black woman. That is an illogical conclusion given base rates. In the U.S., White people outnumber Black people by roughly 5 to 1; hence the <em>typical</em> woman <em>should</em> be perceived as more similar to a typical White woman than to a typical Black woman.
</p>
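<p>
	The base-rate logic can be shown with one line of arithmetic. If the “typical” woman is simply the population-weighted average, she necessarily sits closer to the larger group. The 5-to-1 ratio comes from the discussion above; the one-dimensional “similarity” scale and its endpoints are an invented illustration.
</p>

```python
# Place each group's average at an arbitrary point on a 1-D "similarity"
# scale; the endpoints are illustrative, not data.
white_share, black_share = 5 / 6, 1 / 6   # roughly 5:1 population ratio
white_mean, black_mean = 0.0, 1.0

# The population-average ("typical") woman is the base-rate-weighted mixture.
typical = white_share * white_mean + black_share * black_mean

# The mixture lands five times closer to the majority group's mean,
# with zero bias anywhere in the model.
print(typical, abs(typical - white_mean), abs(typical - black_mean))
```

<p>
	In other words, the pattern the APA flagged as suspicious is exactly what unbiased averaging over unequal group sizes produces.
</p>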
<h4>
	What Happened? Some Possible Causes<br />
</h4>
<p>
	At this stage, we expect that many readers may be wondering how it can be that social scientists regularly violate basic scientific principles—principles that are so fundamental that these same social scientists routinely teach them in introductory courses. One possible reason is myside bias, wherein individuals process information in a way that favors their own “team.” For example, in the case of the race Implicit Association Test, proponents of the IAT might more heavily scrutinize the methodology of studies that yield negative results compared to those that have yielded their desired results. Similarly, although lived experience is a limited kind of evidence, it certainly is a source of evidence, and thus scholars may elevate its importance and overlook its limitations when doing so bolsters their personal views.
</p>
<p>
	A related challenge facing behavioral scientists is that cognitive biases are universal and ubiquitous—everyone, including professional scientists, is susceptible.<sup><a href="#note48">48</a></sup> In fact, one might say that the scientific method, including the three principles we emphasize here, is an algorithm (i.e., a set of rules and processes) designed to overcome our ever-present cognitive biases.
</p>
<p>
	A third challenge confronting behavioral scientists is the current state of the broader scientific community. Scientific inquiry works best when practiced in a community adhering to a suite of norms, including organized skepticism, that incentivize individuals to call out each other’s poor practices.<sup><a href="#note49">49</a>, <a href="#note50">50</a></sup> In other words, in a <em>healthy</em> scientific community, if a claim becomes widely adopted without sufficient evidence, or if a basic principle is neglected, a maverick scientist would be rewarded for sounding the alarm by gaining respect and opportunities. Unfortunately, the scientific community does not act this way with respect to DEI issues, perhaps because the issues touch widely held personal values (e.g., about equality between different groups of people). If different scientists held different values, there would probably be more healthy skepticism of DEI topics. However, there is little ideological diversity within the academy. In areas such as psychology, for example, liberal-leaning scholars outnumber conservative-leaning scholars by at least 8 to 1, and in some disciplines the ratio is 20 to 1 or even more.<sup><a href="#note51">51</a>, <a href="#note52">52</a></sup> A related concern is that these values are more than just personal views. They often seem to function as sacred values, non-negotiable principles that cannot be compromised and can be questioned only at risk to one’s status within the community.
</p>
<p>
	From this perspective,<sup><a href="#note53">53</a></sup> it is easy to see how those who question DEI may well face moral outrage, even if (or maybe <em>especially</em> if) their criticisms are well-founded. The fact that this outrage sometimes translates into public cancellations is extremely disheartening. Yet there are likely even more <em>de facto</em> cancellations than it seems. Someone can be cancelled directly or indirectly. Indirect cancellations can take the form of contract nonrenewal, pressure to resign, or having one’s employer dig for another offense to use as the stated grounds of forcing someone out of their job. This latter strategy is a very subtle, yet no less insidious, method of cancellation. As an analogy, it is like a police officer following someone with an out-of-state license plate and then pulling the car over when they fail to use a turn signal. An offense was committed, but the only reason the offense was observed in the first place is because the officer was looking for a reason to make the stop and therefore artificially extended the window in which the driver was being scrutinized. The stated reason for the stop is failure to signal; the real reason is the driver is from out of town. Whether direct or indirect, the key to a cancellation is that holding the same job becomes untenable after failing to toe the party line on DEI topics.
</p>
<p>
	It is against this backdrop that DEI scholarship is conducted. Academics fear punishment (often subtle) for challenging DEI research. Ideas that cannot be freely challenged are unfalsifiable. Those ideas will likely gain popularity because the marketplace of ideas becomes the monopoly of a single idea. An illusory consensus can emerge about a complex area for which reasonable, informed, and qualified individuals have highly differing views. An echo chamber created by forced consensus is the breeding ground for bad science.
</p>
<h4>
	How to Get Behavioral Science Back on Track<br />
</h4>
<p>
	We are not the first ones to express concern about the quality of science in our discipline.<sup><a href="#note54">54</a>, <a href="#note55">55</a></sup> However, to our knowledge, we are the first to discuss how DEI over-reach goes hand-in-hand with the failure to engage in good science. Nonetheless, this doesn’t mean it can’t be fixed. We offer a few suggestions for improvement.
</p>
<p>
	First, disagreement should be normalized. Advisors should model disagreement by presenting an idea and explicitly asking their lab members to talk about its weaknesses. We need to develop a culture where challenging others’ ideas is viewed as an integral (and even enjoyable) part of the scientific process, not as an ad hominem attack.
</p>
<p>
	Second, truth seeking must be re-established as the fundamental goal of behavioral science. Unfortunately, many academics in behavioral science seem now to be more interested in advocacy than science. Of course, as a general principle, faculty and students should not be restricted from engaging in advocacy. However, this advocacy should not mingle with their academic work; it must occur on their own time. The tension between advocacy and truth seeking is that advocates, by definition, have an <em>a priori</em> position and are tasked with convincing others to accept and then act upon that belief. Truth seekers must be open to changing their opinion whenever new evidence or better analyses demand it.
</p>
<p>
	To that end, we need to resurrect guardrails that hold students accountable for demonstrating mastery of important scientific concepts, including those described above, before receiving a PhD. Enforcing high standards may sound obvious, but actually failing students who do not meet those standards is an <em>exclusionary</em> practice that might be met with resistance.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.2/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n2_lg.jpg" alt="Skeptic 29.2 (cover)" width="200" height="260" class="boxShadow"></a>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.2<br />
		<a href="https://www.skeptic.com/productlink/magv29n2">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/264050#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	Another intriguing solution is to conduct “adversarial collaborations,” wherein scholars who disagree work together on a joint project.<sup><a href="#note56">56</a></sup> Adversarial collaborators explicitly spell out their competing hypotheses and together develop a method for answering a particular question, including the measures and planned analyses. Stephen Ceci, Shulamit Kahn, and Wendy Williams,<sup><a href="#note57">57</a></sup> for example, engaged in an adversarial collaboration that synthesized evidence regarding gender bias in six areas of academic science, including hiring, grant funding, and teacher ratings. They found evidence for gender bias in some areas but not others, a finding that should prove valuable in decisions about where to allocate resources.
</p>
<p>
	In conclusion, we suggest that DEI over-reach in behavioral science is intimately related to a failure within the scientific community to adhere to basic principles of science and appreciate important findings from the behavioral science literature. The best path forward is to get back to the basics: understand the serious limitations of lived experience, focus on quality measurement, and be mindful of the distinction between correlation and causation. We need to remember that the goal of science is to discover truth. This requires putting ideology and advocacy aside while in the lab or classroom. Constructive disagreement about ideas should be encouraged rather than leveraged as an excuse to silence those who may see the world differently. The scientific method requires us to stay humble and accept that we just might be wrong. That principle applies to all scientists, including the three authors of this article. To that end, readers who disagree with any of our points should let us know! Maybe we can sort out our differences—and find common ground—through an adversarial collaboration. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<p>
		The views presented in this article are solely those of the authors. They do not represent the views of any author’s employer or affiliation.
	</p>
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>April Bleske-Rechek</strong> is a Professor of Psychology at the University of Wisconsin-Eau Claire. Her teaching and research efforts focus on scientific reasoning and individual and group differences in cognitive abilities, personality traits, and relationship attitudes.
	</p>
<p>
		<strong>Michael H. Bernstein</strong> is an experimental psychologist and an Assistant Professor at Brown University. His research focuses on the overlap between cognitive science and medicine. He is co-editor of <em>The Nocebo Effect: When Words Make You Sick</em>.
	</p>
<p>
		<strong>Robert O. Deaner</strong> is a Professor of Psychology at Grand Valley State University. He teaches courses on research methods, sex differences, and evolutionary psychology. His research addresses sex differences in competitiveness.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			Frisby, C.L., Redding, R.E., O’Donohue, W.T., &#038; Lilienfeld, S.O. (2023). <em>Ideological and Political Bias in Psychology</em>. Springer Nature.
		</li>
<li id="note02">
			<a href="https://bit.ly/4aJLRyO">https://bit.ly/4aJLRyO</a>
		</li>
<li id="note03">
			Merton, R.K. (1993). <em>The Sociology of Science: Theoretical and Empirical Investigations</em>. University of Chicago Press.
		</li>
<li id="note04">
			Rauch, J. (2013). <em>Kindly Inquisitors: The New Attacks on Free Thought</em>. University of Chicago Press.
		</li>
<li id="note05">
			Rauch, J. (2021). <em>The Constitution of Knowledge: A Defense of Truth</em>. Brookings Institution Press.
		</li>
<li id="note06">
			<a href="https://bit.ly/3xATvNI">https://bit.ly/3xATvNI</a>
		</li>
<li id="note07">
			<a href="https://bit.ly/4cTS4Kq">https://bit.ly/4cTS4Kq</a>
		</li>
<li id="note08">
			<a href="https://bit.ly/4cXcRNe">https://bit.ly/4cXcRNe</a>
		</li>
<li id="note09">
			<a href="https://bit.ly/3Q15SZU">https://bit.ly/3Q15SZU</a>
		</li>
<li id="note10">
			<a href="https://bit.ly/3xCzeY8">https://bit.ly/3xCzeY8</a>
		</li>
<li id="note11">
			<a href="https://bit.ly/43W5bGW">https://bit.ly/43W5bGW</a>
		</li>
<li id="note12">
			<a href="https://bit.ly/3TUw0GR">https://bit.ly/3TUw0GR</a>
		</li>
<li id="note13">
			<a href="https://bit.ly/4401VKr">https://bit.ly/4401VKr</a>
		</li>
<li id="note14">
			<a href="https://bit.ly/3Ufx4q1">https://bit.ly/3Ufx4q1</a>
		</li>
<li id="note15">
			Funder, D. C. (2019). <em>The Personality Puzzle</em> (8th ed.). W.W. Norton &#038; Company.
		</li>
<li id="note16">
			<a href="https://bit.ly/3UhIOsn">https://bit.ly/3UhIOsn</a>
		</li>
<li id="note17">
			Ibid.
		</li>
<li id="note18">
			<a href="https://bit.ly/3W0liBc">https://bit.ly/3W0liBc</a>
		</li>
<li id="note19">
			<a href="https://bit.ly/3VShodH">https://bit.ly/3VShodH</a>
		</li>
<li id="note20">
			Ibid.
		</li>
<li id="note21">
			<a href="https://bit.ly/3UhIOsn">https://bit.ly/3UhIOsn</a>
		</li>
<li id="note22">
			<a href="https://bit.ly/49vFle5">https://bit.ly/49vFle5</a>
		</li>
<li id="note23">
			<a href="https://bit.ly/3JmZxUw">https://bit.ly/3JmZxUw</a>
		</li>
<li id="note24">
			<a href="https://bit.ly/3Jifb3O">https://bit.ly/3Jifb3O</a>
		</li>
<li id="note25">
			<a href="https://bit.ly/3Q37UZc">https://bit.ly/3Q37UZc</a>
		</li>
<li id="note26">
			<a href="https://bit.ly/3Q0Oe8h">https://bit.ly/3Q0Oe8h</a>
		</li>
<li id="note27">
			<a href="https://bit.ly/49zSTFk">https://bit.ly/49zSTFk</a>
		</li>
<li id="note28">
			<a href="https://bit.ly/3xrWU15">https://bit.ly/3xrWU15</a>
		</li>
<li id="note29">
			<a href="https://bit.ly/49QWBux">https://bit.ly/49QWBux</a>
		</li>
<li id="note30">
			Bleske-Rechek, A., Gunseor, M.M., &#038; Maly, J.R. (2018). Does the Language Fit the Evidence? Unwarranted Causal Language in Psychological Scientists’ Scholarly Work. <em>The Behavior Therapist, 41</em>(8), 341–352.
		</li>
<li id="note31">
			<a href="https://bit.ly/49DQZmW">https://bit.ly/49DQZmW</a>
		</li>
<li id="note32">
			<a href="https://bit.ly/49zKdif">https://bit.ly/49zKdif</a>
		</li>
<li id="note33">
			<a href="https://bit.ly/49JeECQ">https://bit.ly/49JeECQ</a>
		</li>
<li id="note34">
			Lykken, D.T. (1991). What’s Wrong With Psychology Anyway? In D. Cicchetti &#038; W.M. Grove (Eds.), <em>Thinking Clearly About Psychology: Essays in Honor of Paul E. Meehl</em>. University of Minnesota Press.
		</li>
<li id="note35">
			Tajfel, H. (1970). Experiments in Intergroup Discrimination. <em>Scientific American, 223</em>, 96–102.
		</li>
<li id="note36">
			<a href="https://bit.ly/3xC9on5">https://bit.ly/3xC9on5</a>
		</li>
<li id="note37">
			<a href="https://bit.ly/4aO5dTe">https://bit.ly/4aO5dTe</a>
		</li>
<li id="note38">
			<a href="https://bit.ly/4aSLamR">https://bit.ly/4aSLamR</a>
		</li>
<li id="note39">
			<a href="https://bit.ly/3Q2m9gO">https://bit.ly/3Q2m9gO</a>
		</li>
<li id="note40">
			Bernstein, M., Blease, C., Locher, C., &#038; Brown, W. (2024). <em>The Nocebo Effect: When Words Make You Sick</em>. Mayo Clinic Press.
		</li>
<li id="note41">
			<a href="https://bit.ly/4aQmv2e">https://bit.ly/4aQmv2e</a>
		</li>
<li id="note42">
			DiAngelo, R. (2018). <em>White Fragility: Why It’s So Hard for White People to Talk About Racism</em>. Beacon Press.
		</li>
<li id="note43">
			<a href="https://bit.ly/4awG3sR">https://bit.ly/4awG3sR</a>
		</li>
<li id="note44">
			<a href="https://bit.ly/4cSOEYn">https://bit.ly/4cSOEYn</a>
		</li>
<li id="note45">
			<a href="https://bit.ly/43XhN0k">https://bit.ly/43XhN0k</a>
		</li>
<li id="note46">
			<a href="https://bit.ly/3UfKH8L">https://bit.ly/3UfKH8L</a>
		</li>
<li id="note47">
			<a href="https://bit.ly/43ZM1zH">https://bit.ly/43ZM1zH</a>
		</li>
<li id="note48">
			Stanovich, K. E. (2021). <em>The Bias That Divides Us: The Science and Politics of Myside Thinking</em>. The MIT Press.
		</li>
<li id="note49">
			<a href="https://bit.ly/4aJLRyO">https://bit.ly/4aJLRyO</a>
		</li>
<li id="note50">
			Ritchie, S. (2020). <em>Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth</em>. Metropolitan Books.
		</li>
<li id="note51">
			<a href="https://bit.ly/43XRkzI">https://bit.ly/43XRkzI</a>
		</li>
<li id="note52">
			<a href="https://bit.ly/3TXsw6n">https://bit.ly/3TXsw6n</a>
		</li>
<li id="note53">
			<a href="https://bit.ly/3TXsxar">https://bit.ly/3TXsxar</a>
		</li>
<li id="note54">
			Lykken, D.T. (1991). What’s Wrong With Psychology Anyway? In D. Cicchetti &#038; W.M. Grove (Eds.), <em>Thinking Clearly About Psychology: Essays in Honor of Paul E. Meehl</em>. University of Minnesota Press.
		</li>
<li id="note55">
			<a href="https://bit.ly/4aybGSy">https://bit.ly/4aybGSy</a>
		</li>
<li id="note56">
			Clark, C.J., &#038; Tetlock, P.E. (2023). Adversarial Collaboration: The Next Science Reform. In C.L. Frisby, R.E. Redding, W. T. Donohue, &#038; S.O. Lilienfeld (Eds.), <em>Ideological and Political Bias in Psychology</em> (pp. 905–927). Springer.
		</li>
<li id="note57">
			<a href="https://bit.ly/3vQQ5FW">https://bit.ly/3vQQ5FW</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>A Scientific Perspective on the Patriarchy:  The Gender Pay Gap and Unequal Opportunity</title>
		<link>https://www.skeptic.com/reading_room/gender-pay-gap-unequal-opportunity/</link>
		
		<dc:creator><![CDATA[Marc Defant]]></dc:creator>
		<pubDate>Fri, 23 Aug 2024 14:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[educational-gender-equality paradox]]></category>
		<category><![CDATA[gender bias in hiring]]></category>
		<category><![CDATA[gender differences in employment]]></category>
		<category><![CDATA[gender equality]]></category>
		<category><![CDATA[gender pay gap]]></category>
		<category><![CDATA[gender quotas in business]]></category>
		<category><![CDATA[gender roles in the workplace]]></category>
		<category><![CDATA[glass ceiling]]></category>
		<category><![CDATA[labor market economics]]></category>
		<category><![CDATA[occupational segregation]]></category>
		<category><![CDATA[patriarchy]]></category>
		<category><![CDATA[STEM careers]]></category>
		<category><![CDATA[wage disparity]]></category>
		<category><![CDATA[women in STEM]]></category>
		<category><![CDATA[work flexibility and gender]]></category>
		<category><![CDATA[workplace discrimination]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45159</guid>

					<description><![CDATA[New research challenges common assumptions about the gender pay gap and women’s career advancement. This article examines economic studies revealing how factors like career choices, work hours, and job preferences contribute to wage disparities between men and women. It explores the “educational-gender-equality paradox” and questions the effectiveness of quotas in addressing workplace inequality. Marc Defant shares insights into the complex interplay of personal decisions, societal expectations, and labor market dynamics shaping gender differences in the workforce.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">D</span>o we live in a patriarchy? Does toxic masculinity permeate our society? Are they the reasons why women are paid less than men, and fewer women are working in STEM? There is a long-documented history of these differences, and they go deeper than you might think. Let’s consider some causal explanations.
</p>
<h4>
	The Gender Pay Gap<br />
</h4>
<p>
	One statistic offered as evidence of male hegemony (i.e., the patriarchy) is the ubiquitous gender pay gap that exists between men and women. For example, self-described feminist economists<sup><a href="#note01">1</a></sup> Xuan Pham, Laura Fitzpatrick, and Richard Wagner suggest:
</p>
<blockquote>
<p>
		…the two overarching institutions that enable the persistence of the GPG [gender pay gap] in the USA are capitalism and patriarchy. Capitalism is a production system is [sic] driven by the profit motive, meaning firms seek to cost minimize. If employers can pay whole segments of workers lower wages, it is easier to boost profit margins and preserve capitalist production. The incentive to do this is quite powerful and a society that relegates women to a lesser position through non-labor market forces enhances the potential to increase exploitation of women through differential wages relative to men… Capitalism alone cannot create the GPG, however. <em>The other crucial institution, patriarchy—allowing for gender-based disparate treatment—has deep roots in U.S. society</em>. At the country’s founding, women were no more than the property of men.<sup><a href="#note02">2</a></sup>
	</p>
</blockquote>
<p>
	The ratio of annual earnings between women and men has risen from a low of 59 cents on the dollar in 1963 to 84 cents for every dollar earned by men in 2024.<sup><a href="#note03">3</a></sup> Pham et al. claim the patriarchy is responsible for the gender pay gap—that is, wholesale discrimination against women. Most leading economists, regardless of their gender, disagree. Cornell University economists Francine Blau and Lawrence Kahn point out<sup><a href="#note04">4</a></sup> that most of the reduction in the pay gap came in the 1980s and early 1990s during a “sharp increase in female participation rates” in the workforce—female participation increased from 32 percent in 1947 to 57 percent in 2014 while male participation fell from 87 to 69 percent over the same period.
</p>
<p>
	Countering the oppressive-patriarchy argument, a remarkable natural experiment demonstrating the lack of discrimination in pay between men and women was conducted by researchers at Stanford University in 2018 using data from over one million Uber drivers. Uber pays its drivers according to a rigid “non-negotiated formula” (i.e., invariant among drivers) and offers no employee benefits. The drivers determine when and where they work. Each driver earns a base fare plus a “per-minute and per-distance” fare beginning with the pickup and ending with the drop-off of a customer. During periods of high customer demand, drivers can receive a “surge multiplier.” More importantly for the gender pay gap, there are no promotions, no overtime pay, and no ability to negotiate higher pay. In other words, earnings are directly proportional to productivity. The researchers also demonstrated that there is no meaningful customer discrimination toward female or male drivers: there are no statistically significant differences in customer ratings of men and women drivers, nor do riders prefer one gender to the other.<sup><a href="#note05">5</a></sup> <span id="more-45159"></span>
</p>
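As a rough illustration, the pay structure described above—a base fare plus per-minute and per-distance fares, scaled by a surge multiplier during high demand—can be sketched as follows. The rate values are hypothetical placeholders, not Uber’s actual rates, and whether the surge multiplier applies to the base fare is an assumption made here for simplicity:

```python
# Sketch of the Uber-style pay formula described in the study.
# All rates below are hypothetical placeholders, not Uber's actual values.

def trip_earnings(minutes, miles, base_fare=2.00,
                  per_minute=0.15, per_mile=0.90, surge=1.0):
    """Base fare plus per-minute and per-distance fares,
    scaled by a surge multiplier during periods of high demand.
    (Applying surge to the base fare is an assumption.)"""
    return surge * (base_fare + per_minute * minutes + per_mile * miles)

# A 20-minute, 8-mile trip at normal demand vs. a 1.5x surge:
normal = trip_earnings(20, 8)             # 2.00 + 3.00 + 7.20 = 12.20
surged = trip_earnings(20, 8, surge=1.5)  # 12.20 * 1.5 = 18.30
```

Because every term is mechanical—no negotiation, promotion, or overtime enters the formula—two drivers differ in earnings only through speed, trip selection, location, and hours, which is what makes the setting a clean natural experiment.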
<div class="sidebar">
	*Men make more on a weekly basis, but much of the weekly difference is due to men working more hours. The percent difference in the pay gap is presented as per hour to avoid work amount variations.
</div>
<p>
	The results of the study show unequivocally that men earn about 7 percent more than women on an hourly basis.* The “entire gap” can be explained by three factors, all unrelated to discrimination:
</p>
<ol>
<li>
		 Men tend to drive faster than women. This leads to an increase in pay because they handle more customers per hour.
	</li>
<li>
		 Uber pays a “compensating differential” for driving in high-crime areas and areas with high concentrations of bars. Men tend to drive in the more lucrative areas not only because they accept the risk but also because they are inclined to live nearer to or within these areas compared with women.
	</li>
<li>
		 Drivers who work 30 or more hours per week make 9 percent more per hour than those who work fewer than 10 hours per week. Why is that? There is much to gain from experience at Uber, such as when and where to drive, or which trips a driver should accept or reject. The researchers stated, “Male drivers accumulate more experience than women by driving more each week and being less likely to stop driving with Uber.”
	</li>
</ol>
<div class="sidebar">
	<sup>†</sup>The Stanford team noted in 2018 that “nearly every one of the parameters favors men earning more. Men have shorter trips to the rider, longer trips, faster speed, higher surge, and more incentives.”
</div>
<p>
	Experience in other studies is typically measured by years of employment or a worker’s age, which are notoriously poor ways of ascertaining work experience. As the Uber study suggests, experience differentials between men and women may have been underestimated in previous studies and “can lead to biased estimates of the job-flexibility penalty.” Put another way, by working fewer hours, women are not only earning less pay than men but also accumulating less experience.<sup>†</sup> The Stanford team concludes:
</p>
<blockquote>
<p>
		Even in the absence of discrimination and in flexible labor markets, women’s relatively high opportunity cost of non-paid work time and gender-based differences in preferences and constraints can sustain a gender pay gap.
	</p>
</blockquote>
<p>
	A study by two Harvard economists on bus and train operators produced similar results—the pay gap in favor of men is due to the differing preferences of men and women.<sup><a href="#note06">6</a></sup> Even in “a unionized environment where work tasks are similar, hourly wages are identical, and tenure dictates promotions, female workers earn $0.89 on the male-worker dollar.” The same study revealed that women were also less likely than men to game the scheduling system by trading off work hours at regular wages for overtime hours at premium wages.
</p>
<p>
	Economists at Cornell University<sup><a href="#note07">7</a>, <a href="#note08">8</a></sup> have recently completed two extremely detailed and extensive reviews of the research literature documenting the factors that influence the gender pay gap. A few of them are obvious. Gender differences in choice of college majors funnel women into lower-paying careers. For example, women tend to avoid majoring in science, technology, engineering, and math (STEM) programs, resulting in fewer women in these relatively high-paying careers. Women also tend to avoid jobs requiring extensive training specific to the company they are working for, i.e., training that does not transfer to other employers.
</p>
<p>
	Although both men and women quit their jobs at about the same rate “all else being equal,” they quit for different reasons. Men quit primarily for reasons related to the job; women quit, for the most part, for family-related reasons. As a result, women’s wages suffer relative to men’s, probably because women miss out on the experience gained through training. Married women and mothers focus on home and family, reducing the number of hours they spend in the labor market. Not surprisingly, research has found that the more hours women spend doing housework, the lower their wages in the market.
</p>
<p class="pullquote">
	We should foster working environments that reward the most qualified and competent candidates and encourage participation irrespective of gender.
</p>
<p>
	The choices families make about where to work also impact wages. Men are still the primary wage earners in families, and families tend to choose the location of the husband’s work as opposed to the wife’s (probably because, on average, men tend to focus on careers while women tend to prioritize family). Recent research in the U.S. and Great Britain has revealed that total family earnings increase significantly <em>while the wife’s earnings decline</em> when the family relocates.<sup><a href="#note09">9</a></sup> As a result, although women may enter traditionally male-oriented occupations, they often select careers that are geographically flexible (e.g., physicians, pharmacists, managers, accountants, etc.).<sup><a href="#note10">10</a>, <a href="#note11">11</a></sup>
</p>
<p>
	But the largest single impact on the gender wage gap appears to be the difference in pay between the careers men and women choose (as much as a third of the gap!). The companies women tend to gravitate toward are those that pay <em>both men and women</em> lower wages whereas men tend to be concentrated in firms that pay more to <em>both men and women</em>. Women may consciously be choosing companies that are less stressful and offer more flexible work hours, but pay less than those where greater demands are made. As Blau and Kahn explain:
</p>
<blockquote>
<p>
		Men are found to place a higher value on money, to have higher self-esteem, to be less risk averse, more competitive, self-confident and disagreeable, and to believe that they control their own fate (an internal, as opposed to external, locus of control) to a greater extent than women.
	</p>
</blockquote>
<p>
	From a broader perspective, men may have traditionally needed to excel in these arenas not only to provide for their families but also to succeed in competing with other men for mates. It is well established that men spend longer hours in their jobs, tend to place work over family, and take less time off from their jobs, which has a large impact on wages. One study presented men and women in the laboratory with a task to solve under two conditions of compensation—in a “noncompetitive piece rate and then a competitive tournament incentive scheme.”<sup><a href="#note12">12</a></sup> There were no differences between the performance of men and women, but 73 percent of the men preferred the competitive tournament scenario compared to 35 percent of the women.
</p>
<p>
	The competitiveness of men translates from the laboratory into real-life performance benefits. Researchers have found that high school boys and girls have, on average, similar academic abilities. However, boys’ higher level of competitiveness correlates with their choosing “more prestigious academic tracks” than girls do.<sup><a href="#note13">13</a></sup> Field research substantiates these results. In a large study, economists posted online job advertisements in 16 major cities, randomly varying the advertisements’ compensation regimes. Based on the 9,000 people who assessed the job advertisements, the researchers concluded that “women disproportionately shy away from competitive work settings.”<sup><a href="#note14">14</a></sup> Increasing the competition within the workplace also appears to increase the performance of men relative to women.<sup><a href="#note15">15</a>, <a href="#note16">16</a></sup> Many studies have also found that, on average, women shy away from risk, and that this greater risk aversion lowers their wages.<sup><a href="#note17">17</a></sup> Controlling for extraneous effects, employers tend to pay more to entice workers to accept risk.
</p>
<p>
	Yet another primary influence on the gender pay gap is a preference by women for what economists refer to as “work-force interruptions,” which include flexibility (such as working at home or at convenient times) and working fewer hours (a decision not to put in the long hours required by some jobs). Numerous recent studies have explored the impact of workforce interruptions and shorter hours, and it is worth discussing them in detail because they are pertinent within the context of the “glass ceiling,” a term that refers to the discriminatory barriers hindering women from attaining top-level, high-paying jobs in the labor market.
</p>
<p>
	One of these studies followed MBA graduates from a distinguished program and found that women and men began their careers at nearly the same pay, but their pay diverged over time and men were paid more. The conclusion of the research posited that the gender pay gap can be attributed almost entirely to the fewer weekly hours women worked and the larger number of “career interruptions” women took compared to men.<sup><a href="#note18">18</a></sup> Another study, conducted over a fifteen-year period and focusing on lawyers, revealed an interesting trend: while gender had little impact on initial salaries, the gender pay gap significantly widened over time. This was attributed to women working shorter hours and taking time off for childbirth.<sup><a href="#note19">19</a></sup> A recent study by Ghazala Azmat and her colleague Rosa Ferrer found a similar disparity between male and female lawyers and attributed the difference to men obtaining more clients and receiving twice as much revenue from those clients compared to women. The authors concluded the disparities between men’s and women’s earnings and promotions were due to higher workplace performance by men compared to women.<sup><a href="#note20">20</a></sup> It is worth quoting their findings:
</p>
<blockquote>
<p>
		Possible channels of direct discrimination in law firms—whereby, for instance, senior lawyers (i.e., law firm partners) could interfere with performance—are not strong determinants of performance gaps. The presence of preschool children in the household contributes to the gaps in performance; however, it is not the only key determinant. A substantial share of the gender gap in performance is explained by aspirations to become a partner, which are likely to reflect more general career concerns as well as traditional gender roles… We find that the distribution of career aspirations differs across genders, which is reflected in the differences in performance [i.e., women do not aspire to become partners as much as their male counterparts]… One potential implication is that gender-based inequality in earnings and career outcomes might not decrease in the near future—and could even increase—as more high-skilled workers are explicitly compensated on the basis of performance.
	</p>
</blockquote>
<p>
	In 2014, Harvard economist Claudia Goldin showed that the gender pay gap increases over the lifespan of laborers, particularly for college-educated employees.<sup><a href="#note21">21</a></sup> She explained that the gender pay gap can “almost entirely be explained by various factors such as hours worked, time out of the labor force, and years spent in part-time employment.” In 2017, economist Erling Barth and his colleagues evaluated the gender pay gap over the span of employee careers by analyzing data from the 2000 Decennial Census of the United States and the Longitudinal Employer Household Dynamics. Their findings? The gender pay gap starts out relatively small but widens over time for both college-educated and non-college-educated men and women, with the largest gap among the college educated. The researchers found the gap (in both scenarios) is primarily attributable to married women earning less, and “most of the loss in earnings growth for married women, relative to married men, occurs concurrently with the arrival of children.”<sup><a href="#note22">22</a></sup>
</p>
<p>
	David Lubinski and his colleagues conducted a 35-year longitudinal study following some of the most intellectually gifted people in the United States.<sup><a href="#note23">23</a>, <a href="#note24">24</a></sup> This research led to two major conclusions that track with the other findings here. (1) Intellectually exceptional women prefer to work with people rather than “things,” unlike their male counterparts, who often exhibit the opposite preference. This aligns with other findings indicating that both women and men choose careers based on their individual strengths and interests. For example, women score higher than men on verbal abilities, while men tend to excel in mathematical abilities. And even though fewer women go into STEM (women received only 25.1 percent and 23.4 percent of the doctorates in mathematics/computer science and engineering, respectively), those who do score similarly to men in ability and interest. In other words, women who pursue careers in STEM fields exhibit exceptional mathematical and spatial reasoning abilities, and their mathematical and spatial abilities are typically greater than their verbal abilities. (2) On average, gifted men earn higher salaries than their female counterparts after 35 years. The main reason is that men work more hours than women, suggesting once again that men put more emphasis on work. Not only do men work longer hours, but when both genders were asked how many hours they would choose to work if they were in their job of choice (i.e., desired job, place of work, and pay), women chose fewer hours than men.
</p>
<p>
	Regarding the quote at the beginning of this article, Steve Horwitz, Distinguished Professor of Free Enterprise, commented on Pham and colleagues’ supply/demand hypothesis (for example, the abundance of women as teachers in grades K–12) this way: “Those jobs tend to pay less because they are jobs where many people have the relevant skills to do them, thus employers can always find another person to fill them (male or female!), which keeps wages low. The same is true of garbage collectors, who are almost all male. Their wages are much lower than those of teachers and nurses because even more people have the relevant skills. So, perceptions of the femininity of a job [i.e., social constructionism] can’t really explain why wages are low.”<sup><a href="#note25">25</a></sup> If more people, whether men or women (supply), go into specific fields, those fields will be able to pay less for workers (demand).
</p>
<h4>
	The Glass Ceiling<br />
</h4>
<p>
	The National Science Foundation is pouring money into programs established to encourage women to enter STEM fields. For example, Howard University recently received $1.3 million for a proposal entitled “Multiple Consciousnesses: Investigating the Identities (Academic, Gender, Race, and Disability) of Black Women Undergraduate Students in STEM and Their Impact on Persistence.”<sup><a href="#note26">26</a></sup> Funding such as this presumes the gender pay gap and the glass ceiling are due to discrimination against women. However, as we have seen, economists—many of whom are women, as cited—have challenged these assumptions.
</p>
<p>
	A recent study demonstrates the impact of women’s choices on the glass ceiling. Psychologists Gijsbert Stoet and David Geary published a paper<sup><a href="#note27">27</a></sup> documenting that the gender gap in STEM fields has remained relatively constant for decades despite heroic efforts to bring women into those fields.<sup><a href="#note28">28</a></sup> They discovered that the largest STEM gender gaps exist in countries that score high on the Global Gender Gap Index (GGGI)—a measure of the degree of parity between men and women based on 14 indicators, including earnings, seats in parliament, the number of women relative to men enrolling in universities, life expectancy, etc. The GGGI uses a scale from 0 to 1, where 1.0 represents complete gender parity (see Figure 1). The data come from the Programme for International Student Assessment (PISA)<sup><a href="#note29">29</a></sup>—an educational survey of 519,334 students from 72 countries. As the graph shows, the correlation runs counter to intuition; this unexpected result is referred to as the educational-gender-equality paradox.
</p>
<div class="imageclearall">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-2/Gender-Gap-in-STEM-Education.png" alt="Figure 1. The Gender Gap in Science, Technology, Engineering, and Mathematics Education" width="750" height="643" class="noBoxShadow" style="display: block;" /> </p>
<p class="caption">
		<strong>Figure 1</strong>. A graph of gender equality (GGGI where 1.0 = gender parity) versus the percentage of women that graduate from STEM-related fields in each country. The line represents the best-fit least squares regression.
	</p>
</div>
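The line in Figure 1 is an ordinary least-squares fit of the share of women among STEM graduates against the GGGI. A minimal sketch of that fit follows; the data points are invented placeholders chosen only to mimic the negative trend the paradox describes, not the actual country values:

```python
# Ordinary least-squares regression, the kind of best-fit line drawn
# in Figure 1. The (GGGI, % women STEM graduates) pairs below are
# hypothetical placeholders, not the published country data.

def least_squares(xs, ys):
    """Return (slope, intercept) minimizing squared vertical error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

gggi = [0.60, 0.65, 0.70, 0.75, 0.80, 0.85]   # gender-equality index
stem = [38.0, 35.0, 31.0, 27.0, 24.0, 20.0]   # % women STEM graduates
slope, intercept = least_squares(gggi, stem)
# slope < 0: higher measured gender equality corresponds to a lower
# share of women among STEM graduates, which is the paradox.
```

A negative slope on such a plot is the whole of the paradox: the regression itself is standard; what is surprising is the direction of the relationship.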
<p>
	The Nordic countries (Denmark, Finland, Iceland, Norway, and Sweden) exemplify this paradox. They have established exceptionally generous opportunities for women: extensive maternity leave, first-rate state-provided childcare, and gender quotas for stock-market company boards.<sup><a href="#note30">30</a></sup> Yet they have some of the largest STEM gender gaps in the world! For example, Finland ranks second in science literacy, and girls outperform boys on the tests; paradoxically, however, women make up only about 20 percent of Finland’s STEM graduates. In contrast, countries with treatment of women ranging from fair to poor, such as Algeria, the United Arab Emirates, and Tunisia, average over 35 percent women among STEM graduates.
</p>
<p>
	Stoet and Geary found that “girls performed similarly or better than boys in science” in 66 percent of the countries “and in nearly all the countries, more girls appeared capable of college-level STEM study than had enrolled.” They attribute the anomaly to personal academic choice related to what each gender perceives as their personal strength. Girls do better on literacy testing than they do in mathematics and science. Even though girls do better than boys in science and mathematics in many countries such as Finland, they choose fields outside of STEM. The opposite is true for boys. They do better in science and mathematics than literacy, and consequently choose STEM more than girls. The researchers also emphasize that women in less gender-equal countries may be more prone to choose STEM fields based on economic stress than personal preference.
</p>
<p>
	It may be time to move away from activist ideology and acknowledge that girls and women freely choose their interests; they are not discriminated against in STEM fields, nor are they discouraged from pursuing careers in them. The same holds true for men, even when countries such as Finland and Sweden go to extraordinary lengths to get more men into nursing and other fields traditionally dominated by women.
</p>
<p>
	For a long time, radical feminists have advocated for quotas to break up the perceived patriarchal havens (often referred to as “good-ole-boy networks”) in the hopes of creating environments that support the advancement of women into senior management positions. Norway provides a notable example of the drawbacks of implementing such quotas. In 2003, Norway passed a law mandating that all publicly traded Norwegian corporations must ensure that their corporate boards comprise at least 40 percent women (or men, if the board was predominantly women). Five economists—all women—led by Marianne Bertrand have assessed the impact of the law:
</p>
<blockquote>
<p>
		…within firms that were mandated to increase female participation on their board, there is no evidence that these gains at the very top trickled down. Moreover the reform had no obvious impact on highly qualified women whose qualifications mirror those of board members but who were not appointed to boards. We observe no statistically significant change in the gender wage gaps or in female representation in top positions… Finally, there is little evidence that the reform affected the decisions of women more generally; it was not accompanied by any change in female enrollment in business education programs, or a convergence in earnings trajectories between recent male and female graduates of such programs.<sup><a href="#note31">31</a></sup>
	</p>
</blockquote>
<p>
	It appears that Norway was trying to correct for a glass ceiling that did not actually exist. As reported by <em>The Economist</em>, the law led to a significant number of Norwegian corporations leaving the Norway stock exchange to avoid the mandated quota requirements. Of the 563 companies on the Norway stock exchange in 2003, only 179 remained by 2008.<sup><a href="#note32">32</a></sup> Meanwhile, the observed increase in women’s leadership was the same as in neighboring Denmark, which did not implement quotas.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.2/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n2_lg.jpg" alt="Skeptic 29.2 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.2<br />
		<a href="https://www.skeptic.com/productlink/magv29n2">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/264050#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	Simply stated, economists—many of whom are women—have found that women are more risk averse (i.e., less willing to place themselves in highly competitive job environments) and more inclined toward occupations that offer flexible hours, often in order to prioritize time with their children. Women are certainly as competent as men in STEM but gravitate toward college majors and jobs that highlight their superior verbal and social skills. There is much more fascinating data—most of it largely absent from public discourse—that sheds light on these differences. I will discuss such scholarship from disciplines other than economics in future articles.
</p>
<p>
	Why is a scientific approach so important in this case? If political activists succeed in convincing the public that pay disparities between men and women are due to discrimination through an ominous patriarchy and toxic masculinity, not only will the data be ignored, but hardworking men will be discriminated against in favor of parity. I am not suggesting women should be discouraged from entering competitive fields; on the contrary, I am arguing for fairness. We should foster working environments that reward the most qualified and competent candidates and encourage participation irrespective of gender. There is no glass ceiling, i.e., the purported discriminatory barrier that keeps qualified women from achieving top-level, high-paying jobs, in the United States, Canada, and the many other Western countries that have been subject to extensive research. The available evidence reveals that the primary hindrance to upward mobility is often the choices made by women—whether consciously or subconsciously—with regard to employment flexibility.<sup><a href="#note33">33</a></sup> <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		Marc J. Defant is a professor of geology at the University of South Florida specializing in the study of volcanoes—more specifically, the geochemistry of volcanic rocks. He has been funded by the NSF, <em>National Geographic</em>, the American Chemical Society, and the National Academy of Sciences and has published in many international journals, including <em>Nature</em>. His book <em>Voyage of Discovery: From the Big Bang to the Ice Age</em> is in its second edition.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/4d2amZW">https://bit.ly/4d2amZW</a>
		</li>
<li id="note02">
			Ibid.
		</li>
<li id="note03">
			<a href="https://bit.ly/49XFxDb">https://bit.ly/49XFxDb</a>
		</li>
<li id="note04">
			<a href="https://bit.ly/4aEQZog">https://bit.ly/4aEQZog</a>
		</li>
<li id="note05">
			<a href="https://bit.ly/3W6UvmO">https://bit.ly/3W6UvmO</a>
		</li>
<li id="note06">
			<a href="https://bit.ly/3Q7Q1sg">https://bit.ly/3Q7Q1sg</a>
		</li>
<li id="note07">
			<a href="https://bit.ly/4aEQZog">https://bit.ly/4aEQZog</a>
		</li>
<li id="note08">
			Blau, F., &#038; Winkler, A.E. (2018). <em>The Economics of Women, Men, and Work</em>, Eighth Edition. Oxford University Press.
		</li>
<li id="note09">
			<a href="https://bit.ly/447kEUM">https://bit.ly/447kEUM</a>
		</li>
<li id="note10">
			<a href="https://bit.ly/3vXe5XZ">https://bit.ly/3vXe5XZ</a>
		</li>
<li id="note11">
			<a href="https://bit.ly/3vM6gVe">https://bit.ly/3vM6gVe</a>
		</li>
<li id="note12">
			<a href="https://bit.ly/3W3PBHm">https://bit.ly/3W3PBHm</a>
		</li>
<li id="note13">
			<a href="https://bit.ly/3QaLKoa">https://bit.ly/3QaLKoa</a>
		</li>
<li id="note14">
			<a href="https://bit.ly/4aGtXgA">https://bit.ly/4aGtXgA</a>
		</li>
<li id="note15">
			<a href="https://bit.ly/3W6Z4NY">https://bit.ly/3W6Z4NY</a>
		</li>
<li id="note16">
			<a href="https://bit.ly/4d3H1y9">https://bit.ly/4d3H1y9</a>
		</li>
<li id="note17">
			Bertrand, M. (2011). New Perspectives On Gender. In O. Ashenfelter, &#038; D. Card, <em>Handbook of Labor Economics, Volume 4B</em>. Elsevier.
		</li>
<li id="note18">
			<a href="https://bit.ly/3Q7sYOs">https://bit.ly/3Q7sYOs</a>
		</li>
<li id="note19">
			<a href="https://bit.ly/4490wkT">https://bit.ly/4490wkT</a>
		</li>
<li id="note20">
			<a href="https://bit.ly/49K7jCX">https://bit.ly/49K7jCX</a>
		</li>
<li id="note21">
			<a href="https://bit.ly/3UmrBxF">https://bit.ly/3UmrBxF</a>
		</li>
<li id="note22">
			<a href="https://bit.ly/3U4bovN">https://bit.ly/3U4bovN</a>
		</li>
<li id="note23">
			<a href="https://bit.ly/4d6cqAj">https://bit.ly/4d6cqAj</a>
		</li>
<li id="note24">
			<a href="https://bit.ly/4cWi7kh">https://bit.ly/4cWi7kh</a>
		</li>
<li id="note25">
			Personal communication, 2019
		</li>
<li id="note26">
			<a href="https://bit.ly/442RZ34">https://bit.ly/442RZ34</a>
		</li>
<li id="note27">
			<a href="https://bit.ly/444y5Vc">https://bit.ly/444y5Vc</a>
		</li>
<li id="note28">
			National Science Foundation. (2017). <em>Women, Minorities, and Persons With Disabilities in Science and Engineering</em>. National Center for Science and Engineering Statistics.
		</li>
<li id="note29">
			OECD. (2016). <em>PISA 2015 Results: Excellence and Equity in Education (Vol. 1)</em>. OECD.
		</li>
<li id="note30">
			<a href="https://bit.ly/3xGI1s1">https://bit.ly/3xGI1s1</a>
		</li>
<li id="note31">
			<a href="https://bit.ly/3W5jKGa">https://bit.ly/3W5jKGa</a>
		</li>
<li id="note32">
			<a href="https://bit.ly/3xGI1s1">https://bit.ly/3xGI1s1</a>
		</li>
<li id="note33">
			<a href="https://bit.ly/4aEQZog">https://bit.ly/4aEQZog</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The New Archaeology Wars: How Cancel Culture and Identity Politics Have Corrupted Science</title>
		<link>https://www.skeptic.com/reading_room/how-cancel-culture-identity-politics-corrupted-science/</link>
		
		<dc:creator><![CDATA[Elizabeth Weiss]]></dc:creator>
		<pubDate>Fri, 16 Aug 2024 14:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[academic freedom]]></category>
		<category><![CDATA[anthropology]]></category>
		<category><![CDATA[archaeological research]]></category>
		<category><![CDATA[cultural sensitivity]]></category>
		<category><![CDATA[NAGPRA]]></category>
		<category><![CDATA[native american remains]]></category>
		<category><![CDATA[repatriation]]></category>
		<category><![CDATA[scientific censorship]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45135</guid>

					<description><![CDATA[This article explores the controversial debate surrounding repatriation laws and their impact on archaeological research. Dr. Elizabeth Weiss recounts her experiences facing academic censorship and professional retaliation after publishing a book critical of repatriation policies. She argues that prioritizing Native American oral traditions over scientific evidence hinders objective study of human remains and artifacts. The piece raises thought-provoking questions about academic freedom, the role of identity politics in science, and the future of anthropological research.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">N</span>AGPRA (the Native American Graves Protection and Repatriation Act)<sup><a href="#note01">1</a></sup> is a federal law that requires skeletal remains and certain artifacts (such as grave goods and sacred objects) from past populations to be turned over to culturally affiliated present-day Native American tribes. Cultural affiliation can be determined through a variety of means, including scientific, historic, and oral traditions, but the determination must be made by a preponderance of the evidence, meaning that more than half of the evidence should support the link between the past and present peoples. All federally funded institutions in the U.S., such as universities and museums (even private ones that accept federal funding), are required to follow NAGPRA. This includes the requirement that they create inventory lists so that Native American tribes can request repatriation of previously discovered and curated items.
</p>
<p>
	In 2017, I decided to reach out to now-retired attorney James W. Springer to see if he’d like to co-author a book taking a critical perspective on repatriation law and the ideology behind it. Jim and I, though never having met face-to-face, had corresponded over the years based on our mutual concern that NAGPRA and similar laws would seriously hinder our ability to accurately understand the past—including the intriguing and ongoing mystery of how the Americas were first peopled.
</p>
<p>
	Jim started his career as an anthropologist after receiving his PhD at Yale University, teaching, excavating, and researching for ten years prior to beginning his law career; throughout the years, he remained interested in anthropology, and continued to read and write about anthropological issues. I’m an anthropology professor at San José State University (SJSU); my focus is on studying skeletal remains and radiographic images (such as X-rays and CT scans). I’ve investigated diseases, such as leprosy in a Byzantine collection, and osteoarthritis patterns from Native Americans. I’ve also tried to reconstruct past people’s activities, looking at stress fractures<sup><a href="#note02">2</a></sup> and bone strength in skeletal remains from 18<sup>th</sup> century European Canadians<sup><a href="#note03">3</a></sup> to Paleoindians,<sup><a href="#note04">4</a></sup> whose skeletal remains in the Americas are 7,500 years old or older.
</p>
<p>
	Jim and I wrote our book proposal and received a contract, after which I submitted a leave request to my department chair Roberto Gonzalez and my dean Walt Jacobs. Roberto provided an exceedingly supportive statement that also demonstrated that he understood the controversial nature of my position. “Dr. Weiss’s proposed project is likely to benefit the anthropology department in multiple ways,” he wrote. “Finally, since Dr. Weiss holds a controversial position on NAGPRA— focusing upon the ways in which the interpretation and implementation of repatriation and reburial laws may impede intellectual inquiry—her new project is likely to spark lively discussions among various stakeholders.” <span id="more-45135"></span>
</p>
<p>
	My leave was approved, and Jim and I set out to write our book, <em>Repatriation and Erasing the Past</em>.<sup><a href="#note05">5</a></sup> It’s organized into three parts. The first focuses on debunking myths taught in K-12 and at universities, especially the conventional dogma that pre-Columbian populations in the Americas were peaceful. Examination of skeletal remains throughout the Americas has revealed clear evidence of violence, such as embedded arrowheads and severed limbs taken as war trophies. A chapter on biological relatedness discussed the frequent inability to tie past peoples to modern tribes. For example, the DNA of the 11,500-year-old Paleo-Indian Sunrise Girl could not be matched to that of any modern population, suggesting that she came from a lost or replaced group of Beringians—those who walked across the Bering land bridge while it still existed.
</p>
<p>
	The second part of the book reviews NAGPRA, the history of Indian Laws and other conservation laws, and explores the complex issues in determining modern tribal affiliation to past peoples. One example is the Kennewick Man case, in which a claimed link between the over 8,000-year-old discovery and one of the modern tribes led to President Obama signing a congressional order to have Kennewick Man repatriated and reburied, preventing all future study. A closer look at the DNA evidence shows that Kennewick Man is, in fact, more closely related to South American Native Americans than the North American tribe who destroyed the discovery through reburial.
</p>
<p>
	The third part of <em>Repatriation and Erasing the Past</em> delves into the problems with repatriation laws and repatriation ideology. These stem from the postmodern movement, which holds that there is no such thing as truth and that <em>all</em> conclusions are equally subjective. Repatriation ideology places importance on who is providing the information and whether that individual can claim victim status; thus, information from Native Americans is considered to have greater validity than information coming from a European American, regardless of the latter’s scientific qualifications or the demonstrable truth of the information. We also explored NAGPRA’s violation of the First Amendment’s separation of church and state: by requiring each review committee to include at least two traditional Indian religious leaders, the law promotes a specific religion—traditional Indian religion—as a required component.
</p>
<p>
	A further First Amendment violation is NAGPRA’s acceptance of creation myths, in the form of oral tradition, as evidence of cultural affiliation (i.e., the connection between a present tribe and past peoples) to support repatriation claims. We also looked at how NAGPRA and repatriation ideology encourage censorship. Because Native American culture is granted default deference, repatriation ideology also allows for discrimination; anthropologists eager to continue collaborating with tribes must, for instance, observe menstrual taboos that prohibit women who are menstruating from engaging in research and fieldwork. In short, we called for a return to the primacy of demonstrable, objective knowledge and the abandonment of anti-scientific values.
</p>
<p>
	When I returned to campus after my writing leave, I received the SJSU College of Social Sciences’ highest scholarly prize for my research on and curation of the Ryan Mound collection. My university had always been happy to celebrate my achievements. In 2008, I was chosen by then Provost Gerry Selter to speak about my scientific research and my work criticizing repatriation and reburial of skeletal remains. Even earlier, when I was hired in 2004, I spoke out against the loss of scientific data through repatriation of remains. I point all this out because shortly after <em>Repatriation and Erasing the Past</em> was published, administrators, including my department chair and the university president, started to take a different view—rather than praising my accomplishments and marveling at the imagery highlighting the importance of studying skeletal remains, they began to condemn my scholarship and take retaliatory actions to derail my career.
</p>
<p>
	The first sign of trouble came in mid-December 2020, when I received a panicked email from my publisher, followed by a phone call, in which they lamented that they were “in crisis mode” because of an open letter<sup><a href="#note06">6</a></sup> calling for the banning of <em>Repatriation and Erasing the Past</em>, crafted by Siân Halcrow (University of Otago, New Zealand), Amber Aranui (Te Papa Tongarewa, Museum of New Zealand), Stephanie Halmhofer (University of Alberta, Canada), Annalisa Heppner (Brown University, USA), Kristina Killgrove (University of North Carolina at Chapel Hill, USA), and Gwen Robbins Schug (University of North Carolina at Greensboro/Appalachian State University, USA).
</p>
<p>
	The open letter, which was signed by nearly 900 academics, called the book racist and wanted it to be retracted, or at least removed from open access to universities and libraries—in other words, they didn’t want to give people the opportunity to read it. Through several phone calls with the publisher, Jim and I were able to convince them not to pull the book, but the publishers nevertheless issued an apology for its publication, written by Director Romi Gutierrez:
</p>
<blockquote>
<p>
		I write to address the current discussion surrounding a book the University of Florida Press published several months ago: <em>Repatriation and Erasing the Past</em>, by Elizabeth Weiss and James W. Springer. (…) Because the gears of the publication process do move gradually, a flawed editorial decision made in the past has consequences today.
	</p>
<p>
		Those consequences, in this case, include harm to voices we sincerely value, the potential to undo important progress, and injury to relationships we have long worked to foster. I apologize for the pain this publication has caused. It was not our intent to publish a book that uses arguments and terminology associated with scientific racism. I assure you that, months ago, changes to our editorial program had already started to take place, including greater focus to inclusivity and sensitivity, and we will continue and redouble these efforts.<sup><a href="#note07">7</a></sup>
	</p>
</blockquote>
<p>
	Jim and I stood up for what we had written; we wrote articles on our perspective, and fought back in the public press and on social media.<sup><a href="#note08">8</a>, <a href="#note09">9</a>, <a href="#note10">10</a></sup> Our book remains available and can be found in nearly 1,000 libraries.
</p>
<p>
	When this crisis was brought to my attention, I reached out to my chair Roberto Gonzalez and my dean Walt Jacobs. Gonzalez informed me that he had already known about the cancellation attempt. What I didn’t know was that Gonzalez and Jacobs would become my biggest foes in what turned out to be the start of a campaign to cancel me.
</p>
<p>
	At this time, there were two other incidents involving my university that would play into their efforts to retaliate against me. One faculty member, A.J. Faas, wrote in an email to the anthropology department:
</p>
<blockquote>
<p>
		I would like to share an important resource that just went live a few weeks ago. Cite Black Authors, a searchable database of Black scholarly journal articles, books, and expert listings, will launch to the public on November 16, 2020, at CiteBlackAuthors.com. The website is an interdisciplinary effort to enhance and recognize Black academic voices for better representation in scholarship. Sparked by the death of George Floyd and ongoing racial conflicts in the United States, a team of nine people, including researchers, developers, and graduate students, curated citations and designed a searchable website for researchers, educators, and the general public. We are overwhelmingly grateful for the support of contributors and the team behind the initiative. Please share the attached release with your friends and groups—and help us to spread the word and the WORK of Black, academic professionals.
	</p>
</blockquote>
<p>
	I responded in writing that:
</p>
<blockquote>
<p>
		Although the intent of Cite Black Authors may be well-meaning, as a scholar in search of objective knowledge, I encourage researchers to look for the best source material and realize that an author’s ethnicity, race, or color of their skin has no actual bearing on the validity of their contribution.
	</p>
</blockquote>
<p>
	Second, I attended a webinar on creating a Native American Studies Center, put on by Provost Vincent Del Casino. The speakers (two of the three stated that they were Native American scholars) expressed what I submit are inherently racist sentiments, including that Native Americans were better than any other people in all topics (the speaker then started to list them: math, biology, literature…) and that only Native Americans should teach and work in the Native American Studies Center because it would be an insult if a Mexican was mistaken for a Native American or vice versa. In the comments bar of the seminar video stream, I wrote that these statements were racist; why would it be an insult to be mistaken for a Mexican, unless you thought that there was something wrong with being Mexican? Would we want to restrict Shakespeare to being taught by Britons and British Americans only? The responses to my criticisms included an email from a professor of Chicano and Chicana Studies that concluded, “I’m exhausted and disgusted by Karen antics that silence BIPOC voices in the academy” and a phone call from chair Gonzalez in which he implored me not to attend such events, stating that my attendance could harm the department’s junior faculty’s chances at tenure.
</p>
<p>
	Calls for the banning of <em>Repatriation and Erasing the Past</em> increased, and Gonzalez wrote a letter to all faculty, staff, and graduate students in the department “reminding everyone that the opinions expressed in your new book don’t represent the position of the SJSU anthropology department, and that the principle of academic freedom allows us all to freely pursue our ideas—even if they’re controversial or unpopular.” He stated that because he received “numerous emails and calls from colleagues who are quite concerned and even upset about your book,” he thought that this was the best step to take. He concluded:
</p>
<blockquote>
<p>
		On a personal note, I want to let you know that after having carefully read the book, I disagree with both the substance and style of <em>Repatriation and Erasing the Past</em>, including its dismissal of Native American epistemologies and indigenous scholarship, its Victorian-era approach to anthropological inquiry, and its linear, pre-Kuhnian view of scientific progress. Our discipline has long played a signal role in recognizing the insight and benefits that can come from understanding different lifeways and cultural perspectives, and I’m confident that we will continue doing so in the future.
	</p>
</blockquote>
<p>
	I replied, making sure that all who received his letter also received mine:
</p>
<blockquote>
<p>
		Roberto’s use of the term “Victorian-era” misunderstands our perspective, though I would like to remind everyone that Charles Darwin was from the Victorian era—and where would we be without him?! Both Thomas Kuhn and Karl Popper (the philosopher James and I cite) had remarkable careers with thought-provoking works that helped shape the modern social sciences. They were also contemporaries; thus, although their approaches to understanding science differed, to portray <em>Repatriation and Erasing the Past</em> as antiquated because we cite Popper rather than Kuhn is an unfair representation. Popper’s view of scientific progress is not a linear one, nor do James and I support a linear view of scientific progress. We even highlight the unpredictability and uncontrollability of knowledge, noting that knowledge is not an unfolding sequence but a messy endeavor in constant flux. Science is the search for truth, perhaps never arriving at it on many topics. New data, or old data reanalyzed, we point out, can upturn previous explanations.
	</p>
</blockquote>
<p>
	The outcome of this exchange was that Gonzalez decided to put together a webinar series: “Emancipatory Theory &#038; Praxis: Confronting Racism in American Society and American Anthropology.” His chosen speakers included Agustín Fuentes of Princeton University. In his talk, Fuentes focused on White supremacy and violence. I asked about the recent slew of Black-on-Asian crimes and how he would define these acts; he stated that they too were due to White supremacy—if Blacks weren’t discriminated against by Whites, they wouldn’t be attacking Asians. The other invited speaker was William White of UC Berkeley. White’s talk was actually quite illuminating; he pointed out that students of color were often directed to study populations they were related to, which didn’t always match their interests, and he thus argued against shoehorning students into specific areas of study based on their ethnicity. A Black student may want to study Celts, while a White student might wish to learn more about Caribbean archaeology.
</p>
<p>
	In response to this webinar, I asked my chair whether I could put together a similar webinar focused on academic freedom and diversity of thought. His initial response was that there was no funding and no time to do so; then, when I suggested we move it to the next semester, he dug up rules on guest speakers that we had never used and that weren’t applied to the equity webinar, and even had us retroactively vote on whether we approved of the topic and speakers of his own webinar! Unsurprisingly, I haven’t been able to get department support to hold a webinar or invite a guest speaker.
</p>
<p>
	Over the following months, the controversies simmered down, until Jim and I submitted an abstract to the 2021 Society for American Archaeology (SAA) conference. Our talk, “Has Creationism Crept Back into Archaeology?”, called for relying on data rather than creation myths to ensure that repatriation can be done as fairly as possible. I compared the creation myths of Native Americans to other creation myths while pointing out that the SAA has previously taken a stand against the teaching and use of biblical interpretations, such as intelligent design, to understand the past. The SAA was founded in 1934 and states that it is “dedicated to research about and interpretation and protection of the archaeological heritage of the Americas.” It has about 7,000 members, including professional and avocational archaeologists, archaeology students in colleges and universities, and archaeologists working at Tribal agencies, museums, government agencies, and in the private sector.
</p>
<p>
	Although our topic did not seem particularly controversial to me, there was a movement to prevent the talk (which was pre-recorded) from being played.<sup><a href="#note11">11</a></sup> It aired anyway, and the comment box quickly filled up with name-calling and accusations of racism. But even though all talks were supposed to stay on the SAA platform for two months, the incoming president apologized for the harm caused, removed the talk from the platform, and formed a committee to ensure that those critical of repatriation ideology would not be allowed on the program again. Two years later, I submitted an abstract that discussed whether X-rays were to be included in repatriation. At SJSU, X-rays are now being labeled as “sacred objects” and upon repatriation of the human remains and artifacts, the tribes plan to burn the X-rays. This abstract was rejected; the California Society for Archaeology rejected it too.
</p>
<p>
	On June 3, 2021, Roberto Gonzalez—hosted by my dean Walt Jacobs—presented a review of my situation at the Council of Colleges of Arts and Sciences called “What to Do When a Tenured Colleague is Branded a Racist.” In this presentation, Gonzalez painted me as manipulative, racist, strange, and professionally incompetent. He stated that my anti-repatriation position and my reaction to the Cite Black Authors incident were reasons to withhold resources, and that my next review would state that I am incompetent. He suggested that if I taught my perspective to students or assigned my books in classes, he would consider removing me from the classroom. Although he employed a pseudonym for me, it was clear that this was a personal attack—describing me as odd, not “warm and fuzzy,” and bizarrely suggesting that I might lock myself into the curation facility to prevent a repatriation from occurring. (While I disagree with NAGPRA, I have always followed the law.) Further, he painted my ideas as “scientific racism” and stated that he would have signed the open letter calling for my book to be banned had he not been chair. Perhaps most surreally, Gonzalez painted himself as a victim, claiming that he had no idea of the “skeletons in my closet”—that I am an anti-repatriation anthropologist. Yet this was clearly a lie, as seen in his approval of my leave to write the book, his support for my award, and the many discussions we held about repatriation over the years. Gonzalez was even on my hiring committee—so he knew that one of my first papers was on Kennewick Man and how repatriation and reburial of Paleo-Indians is an affront to scientific inquiry and hinders our ability to reconstruct the past.
</p>
<p>
	It was clear that my job was on the line. Even a tenured professor can be fired for incompetence, which is what they were accusing me of!
</p>
<p>
	The latest cancellations started up again after I published an op-ed critical of California’s repatriation law called CalNAGPRA.<sup><a href="#note12">12</a></sup> The recent changes to CalNAGPRA all but set the stage for repatriation events that would hollow out collections in all of the state’s universities. CalNAGPRA states that Native American knowledge must be deferred to; that is, if there is a disagreement between scientific evidence and the Native American oral tradition, the Native American oral tradition must be given priority. All that is needed for repatriation is for a Native American to say that there is a link between the modern tribe and the past peoples—this would then trump <em>all</em> the DNA and forensic evidence. There must also be continuous consultation with all tribes of the area and their requests for handling, access, and repatriation must be followed.
</p>
<p>
	Shortly after the op-ed, I posted on social media a now infamous photo of me holding a skull (above left), with the caption “So happy to be back with some old friends.” Provost Vincent Del Casino wrote a scathing letter condemning me, stating: “This image has evoked shock and disgust from our Native and Indigenous community on campus and from many people within and outside of SJSU.”<sup><a href="#note13">13</a></sup> He went on to criticize me: “in what context is it ever ethically appropriate for an academic to handle remains while smiling with ungloved hands while calling these remains ‘friends’? I doubt many colleagues in the fields of Forensic Science or Physical Anthropology would find this palatable.” I reached out to the provost and suggested that we discuss the issue and address the public together, but he didn’t wish to do this; I therefore provided him with a letter to send to the same people on his email list, which he did.<sup><a href="#note14">14</a></sup> I clarified a variety of misunderstandings; for instance, “handling remains with gloves is only necessary if these remains have always been treated with gloved hands and other sterile conditions.” Use of gloves is actually actively <em>discouraged</em> by the National Park Service because it leads to people dropping materials more often. I ended the letter with:
</p>
<blockquote>
<p>
		We have no way of telling what the individuals whose remains we curate would think about this issue, but when one looks at the Egyptian mummies, Ötzi the iceman, or the bog bodies of northern Europe, public display celebrates these individuals, telling their stories in a respectful way that gives them a voice they never had in life. The same is true of our collection, and we should be celebrating the lives of these first occupants of Silicon Valley—not allowing their voices to be silenced by a vociferous campaign orchestrated by woke activists whose strategy is to try to shut down debate, and promote superstition over science. (…) Finally, it’s an attack on a genuine photo that celebrates our collection, my admiration for the collection, and my joy at being able to do my job. After your strong statement regarding academic freedom, I am disappointed that you were not courageous enough—as those reporting on me—to talk to me first, to have a rational discussion about these occurrences.
	</p>
</blockquote>
<p>
	My response led many colleagues to reach out to me, find similar photos (from SJSU and many other institutions, including the Smithsonian), and provide support. One very supportive colleague faced an onslaught of criticism after he was quoted in the <em>Mercury News</em>, which led him to withdraw his support of my freedom of expression. He called to let me know that he was frightened. Others have expressed quiet support—a former student and lecturer let me know that she didn’t think I was racist at all, but spent all day debating whether to email me from her university account or her private account, ultimately deciding on the latter. This fear of cancellation permeates academia.
</p>
<div class="imageclearall">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-2/Elizabeth-with-skull.jpg" alt="Controversial photo of anthropologist with skull vs. university-approved promotional image" width="750" height="269" class="noBoxShadow" style="display: block;" /></p>
<p class="caption">
		On the left, the image that Provost Del Casino said did “not align with the values of SJSU” and “evoked shock and disgust from our Native and Indigenous community on campus and from many people within and outside of SJSU.” On the right, the image that the University used for multiple websites and promotional material. This image was removed from the University website in July 2023.
	</p>
</div>
<p>
	All of this led then President Mary Papazian to remove my access to the curation facility. <em>They literally changed the locks!</em> She also stated that no photos were allowed of the Native American skeletal collection (sharing and comparing images is key to understanding past diseases) or even of the boxes that held the bones.<sup><a href="#note15">15</a></sup> I responded by noting that non-Native American remains and other collections were in the facility and not subject to repatriation law. One cultural anthropologist asked whether I had written permission from these individuals to take the photos, knowing full well that they had been dead for centuries! I fought all these restrictions. Tribal liaison Alisha Ragland was perhaps the most vitriolic, claiming that I was a eugenicist and shouldn’t get access to any collections. They moved the protocol for access to skeletal collections to the Institutional Review Board, which I fought against. The chair held meetings to try to remove me from my duties as curator, although curation is part of my job contract.
</p>
<p class="pullquote">
	How did we get to a situation in which opposing the reburial of human remains is automatically deemed racist and can derail an anthropologist’s career?
</p>
<p>
	I decided the time had come to initiate legal action against the SJSU for their various retaliatory actions. While putting together our case, Pacific Legal Foundation (who represented me pro bono) and I contacted senior academic figures in physical anthropology and archaeology to seek expert opinions on protocols for handling skeletal remains.
</p>
<p>
	Dr. Douglas Owsley has worked with many prestigious universities and museums, including the Smithsonian. He is the subject of Jeff Benedict’s book <em>No Bone Unturned: Inside the World of a Top Forensic Scientist and His Work on America’s Most Notorious Crimes and Disasters</em>.<sup><a href="#note16">16</a></sup> Dr. Owsley wrote in his supportive <em>amicus</em> brief: “Based on my experience there was nothing improper about Professor Weiss’s photograph.” He also noted that, “it is not standard protocol to use gloves when examining archeological bones in the laboratory.”<sup><a href="#note17">17</a></sup> Professor Della Cook, who managed the North American skeletal collections held by the Department of Anthropology at Indiana University from 1973 to 2021 and has collected data on skeletal remains around the world, wrote “Photographs of researchers measuring or otherwise doing observations on bones, ancient and modern, are routine in our field, and many anthropologists smile in such photographs.” She added that “There are several such photos of me in circulation, and in most of them I am smiling.”<sup><a href="#note18">18</a></sup>
</p>
<p>
	The department also worked with Native Americans and a retired forensic anthropology lecturer to rewrite protocols that would determine access to collections. The protocol even included a menstruation taboo. They couldn’t bring themselves to state that women who are menstruating are not allowed in the curation facility or to handle remains and artifacts; thus, they used the term “menstruating personnel,” to avoid the implication that only females menstruate! Seriously? Upon seeing this, my lawyers and I decided to mention that this likely constitutes a Title IX violation and that, if it were not removed, we would file a complaint. It was removed.<sup><a href="#note19">19</a></sup>
</p>
<p>
	This discriminatory action against women is far more common in anthropology now than you may think; Native American tribes believing in menstrual taboos will ban women who are menstruating from engaging in fieldwork, handling remains, and even eating with the rest of the crew. Many institutions, including UC Berkeley, condone this behavior.<sup><a href="#note20">20</a></sup>
</p>
<p>
	Further complications arose when I requested <em>nonhuman</em> animal bones. These items were quickly declared by Native Americans as being sacred and, thus, out of reach for my research. Previously unaware of possessing any mystical powers, I suddenly realized that whatever I requested magically and instantly became “sacred” and thus off-limits.
</p>
<p>
	When we first went in front of a judge, she ruled that my case was to be dismissed, but I was allowed to amend my complaint.<sup><a href="#note21">21</a></sup> In the decision to dismiss the case, she stated that the tribes were an “indispensable” party to the case, but the tribes were considered sovereign and, thus, could not be sued. We filed an amended complaint, excluding the Native American collections, and—to everyone’s surprise—the motion to dismiss was overturned. Unfortunately, I still had no recourse to get access to X-rays and nonhuman animal bones, or even the ability to enter the curation facility.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.2/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n2_lg.jpg" alt="Skeptic 29.2 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.2<br />
		<a href="https://www.skeptic.com/productlink/magv29n2">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/264050#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	How did we get to a situation in which opposing the reburial of human remains is automatically deemed racist and can derail an anthropologist’s career? It happened by turning anthropology into an ideological battleground weighted in favor of victimhood and tribal identity (both political and social, and often disproven by evidence) rather than a scientific endeavor aimed at better understanding the past for the benefit of all humankind. It doesn’t matter who is correct; it matters who gets to tell the story, with Native American narratives now considered expert testimony that cannot and must not be questioned.
</p>
<p>
	The next proposed revisions to NAGPRA are sure to erase the imperfect compromise included in the original law—a compromise that tried to ensure science could still be conducted by allowing for the retention of most artifacts and the continued curation of culturally unidentifiable skeletal remains. Native American tribes, such as the San Carlos Apache Tribe, want a definition of human remains that includes naturally shed material (such as hair and skin cells), casts, replicas, and digital data.<sup><a href="#note22">22</a></sup> Some have argued that animals imbued with human spirits should be counted as human too. Indeed, a puma that recently died in Los Angeles was given a traditional Indian burial in order to stop research on its remains.<sup><a href="#note23">23</a></sup>
</p>
<p>
	Why would a university or museum want to lose its collections? They may think cooperating will enable them to continue their research endeavors, but research purely for the purpose of propping up a political agenda isn’t research. When the tribes can control your conclusions, you are no longer engaged in the pursuit of objective knowledge.
</p>
<p>
	For my part, I shall oppose the unconstitutional NAGPRA in order to bring anthropologists, especially the next generation, back to science. And I will promote science which knows no bounds of race, sex, gender, religion, or creed over superstition, regardless of any race, sex, gender, religion, or creed affiliation—real, alleged, or imagined. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		Elizabeth Weiss is co-author, with James W. Springer, of <em>Repatriation and Erasing the Past</em>. She is also the author of <em>Reading the Bones: Activity, Biology, and Culture</em> and <em>Paleopathology in Perspective: Bone Health and Disease Through Time</em>. She has been a professor of anthropology at San José State University (SJSU) since 2004. Prior to joining SJSU, Elizabeth had a post-doctoral research position at the Canadian Museum of Civilization in Ottawa. She is a faculty fellow at Heterodox Academy’s Center for Academic Pluralism and a National Association of Scholars board member.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/3TFSs7G">https://bit.ly/3TFSs7G</a>
		</li>
<li id="note02">
			<a href="https://bit.ly/3TJt69a">https://bit.ly/3TJt69a</a>
		</li>
<li id="note03">
			<a href="https://bit.ly/4chYmmP">https://bit.ly/4chYmmP</a>
		</li>
<li id="note04">
			Weiss, E. (2001). Kennewick Man’s Behavior: A CT-Scan Analysis. <em>American Journal of Physical Anthropology, 32S</em>, 163–163
		</li>
<li id="note05">
			Weiss, E. &#038; Springer, J. W. (2020). <em>Repatriation and Erasing the Past</em>. University of Florida Press.
		</li>
<li id="note06">
			<a href="https://bit.ly/3VoFexw">https://bit.ly/3VoFexw</a>
		</li>
<li id="note07">
			<a href="https://bit.ly/3TpiD1f">https://bit.ly/3TpiD1f</a>
		</li>
<li id="note08">
			<a href="https://bit.ly/4cjUwJS">https://bit.ly/4cjUwJS</a>
		</li>
<li id="note09">
			<a href="https://bit.ly/43oQ1d5">https://bit.ly/43oQ1d5</a>
		</li>
<li id="note10">
			<a href="https://bit.ly/4aiNsvm">https://bit.ly/4aiNsvm</a>
		</li>
<li id="note11">
			<a href="https://bit.ly/4chCpnR">https://bit.ly/4chCpnR</a>
		</li>
<li id="note12">
			<a href="https://bit.ly/4chCqbp">https://bit.ly/4chCqbp</a>
		</li>
<li id="note13">
			<a href="https://bit.ly/3VmVuyL">https://bit.ly/3VmVuyL</a>
		</li>
<li id="note14">
			<a href="https://bit.ly/3v714uJ">https://bit.ly/3v714uJ</a>
		</li>
<li id="note15">
			<a href="https://bit.ly/4cnYZuY">https://bit.ly/4cnYZuY</a>
		</li>
<li id="note16">
			Benedict, J. (2004). <em>No Bone Unturned: Inside the World of a Top Forensic Scientist and His Work on America’s Most Notorious Crimes and Disasters</em>. Harper Perennial.
		</li>
<li id="note17">
			Owsley, D. (2022). <em>Amicus</em> brief. Case 5:22-cv-00641-BLF Document 50 Filed 03/10/22
		</li>
<li id="note18">
			Cook, D. (2022). <em>Amicus</em> brief. Case 5:22-cv-00641-BLF Document 49 Filed 03/10/22
		</li>
<li id="note19">
			<a href="https://bit.ly/4ciffh7">https://bit.ly/4ciffh7</a>
		</li>
<li id="note20">
			<a href="https://bit.ly/3TmF2Mr">https://bit.ly/3TmF2Mr</a>
		</li>
<li id="note21">
			<a href="https://bit.ly/3Viu9Oo">https://bit.ly/3Viu9Oo</a>
		</li>
<li id="note22">
			<a href="https://bit.ly/3VFMghB">https://bit.ly/3VFMghB</a>
		</li>
<li id="note23">
			<a href="https://bit.ly/3IJ2jDs">https://bit.ly/3IJ2jDs</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Tension Between Science &#038; Ideology:  Exploring Paths to Social Justice</title>
		<link>https://www.skeptic.com/reading_room/tension-between-science-ideology-exploring-paths-to-social-justice/</link>
		
		<dc:creator><![CDATA[Bradley Campbell]]></dc:creator>
		<pubDate>Fri, 09 Aug 2024 07:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[critical theory]]></category>
		<category><![CDATA[liberal democracy]]></category>
		<category><![CDATA[racial disparities]]></category>
		<category><![CDATA[science and ideology]]></category>
		<category><![CDATA[scientific method]]></category>
		<category><![CDATA[social activism]]></category>
		<category><![CDATA[social justice]]></category>
		<category><![CDATA[sociology]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45127</guid>

					<description><![CDATA[UCLA professor of sociology, Bradley Campbell, explores the tension between scientific and ideological approaches in sociology, particularly in the context of social justice efforts. It critiques critical theory's dominance and argues for a more balanced, evidence-based approach to understanding and addressing social issues within liberal democratic frameworks.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">A</span>s a sociologist interested in the scientific study of social life, I’ve long been concerned about the ideological bent of much of sociology. Many sociologists reject outright the idea of sociology as a science and instead prefer to engage in political activism. Others subordinate scientific goals to activist ones, or are unclear about what they believe sociology’s purpose should be. Still others say different things depending on the audience.
</p>
<p>
	The American Sociological Association (ASA) does the latter. Last December, the Board of Governors of Florida’s state university system removed an introductory sociology course from the list of college courses that could be taken to fulfill part of the general education requirement. It seemed clear that sociology’s reputation for progressive politics played a role in the decision. Florida’s Commissioner of Education, for example, wrote that sociology had been hijacked by political activists.<sup><a href="#note01">1</a></sup> The ASA denied the charge and went on to declare that sociology is “the scientific study of social life, social change, and the social causes and consequences of human behavior.”
</p>
<p>
	While that definition certainly aligns with my vision of what sociology should be, it contrasts with another recent statement made by the ASA itself when announcing this year’s annual conference theme. The theme is “Intersectional Solidarities: Building Communities of Hope, Justice, and Joy,” which, as the ASA website explains, “emphasizes sociology as a form of liberatory praxis: an effort to not only understand structural inequities, but to intervene in socio-political struggles.”<sup><a href="#note02">2</a></sup> It’s easy to see how Florida’s Commissioner of Education somehow got the idea that sociology has become infused with ideology.
</p>
<p>
	The ASA’s statement in defense of sociology as the science of social life seems insincere. That’s unfortunate—we really do need a science of social life if we’re going to understand the social world better. And we need to understand the world better if we’re going to effectively pursue social justice. The ASA’s brand of sociology as liberatory praxis leads not only to bad sociology, but also to misguided efforts to change the world. As I’ve argued in my book <em>How to Think Better About Social Justice</em>, if we’re going to change the world for the better, we need to make use of the insights of sociology. But bad sociology only makes things worse. <span id="more-45127"></span>
</p>
<p>
	Contemporary social justice activism tends to draw from a sociological perspective known as <em>critical theory</em>. Critical theory is a kind of conflict theory, wherein social life is understood as a struggle for domination. It is rooted in Marxist theory, which viewed class conflict as the driver of historical change and interpreted capitalist societies in terms of the oppression of wage laborers by the owners of the means of production. Critical theory understands social life similarly, except that domination and oppression are no longer simply about economic class but also race, ethnicity, gender, religion, sexuality, gender identity, and much more.
</p>
<p>
	There are two problems with social justice efforts informed by critical theory. First, this form of social justice—often called “critical social justice” by supporters and “wokeism” by detractors—deliberately ignores the insights that might come from other sociological perspectives. Critical theory, like conflict theory more broadly, is just one of many theoretical approaches in a field that includes a number of competing paradigms. It’s possible to view social life as domination and oppression, but it’s also possible to view it as a network of relationships, or as an arena of rational transactions similar to a marketplace, or as a stage where actors play their parts, or as a system where the different parts contribute to the functioning of the whole. If you’re going to change the social world, it’s important to have some understanding of how social life works, but there’s no justification for relying exclusively on critical theory.
</p>
<p>
	The second problem is that, unlike most other sociological perspectives, critical theory assumes an oppositional stance toward science. This is partly because critical theory is intended not just to describe and explain the world, but rather to change it—an approach the ASA took in speaking of sociology as “liberatory praxis.” However, the problem isn’t just that critical theory prioritizes political goals over scientific ones, it’s that it also sees science as oppressive and itself in need of critique and dismantling. The claim is that scientific norms and scientific knowledge—just like other norms and other forms of knowledge in liberal democratic societies—have been constructed merely to serve the interests of the powerful and enable the oppression of the powerless.
</p>
<p>
	Critical theory makes declarations about observable aspects of social reality, but because of its political commitments and its hostile stance toward scientific norms, it tends to act more like a political ideology than a scientific theory. As one example, consider Ibram X. Kendi’s assertions about racial disparities. Kendi, a scholar and activist probably best known for his book <em>How to Be an Antiracist</em>, has said, “As an anti-racist, when I see racial disparities, I see racism.”<sup><a href="#note03">3</a></sup> The problem with this approach is that while racism is one possible cause of racial disparities (and often the main cause!), in science, our theories need to be testable, and they need to be tested. Kendi doesn’t put his idea forward as a proposition to be tested but instead as a fundamental truth not to be questioned. In any true science, claims about social reality must be formulated into testable hypotheses. And then we need to actually gather the evidence. Usually what we find is variation, and this case is likely to be no different. That is, we’re likely to find that in some contexts racism has more of a causal role than in others.
</p>
<p>
	We often want easy answers to social problems. Social justice activists might be inclined to turn to would-be prophets who proclaim what seems to be the truth, rather than to scientists who know we have to do the legwork required to understand and address things. Yes, science gives us imperfect knowledge, and it points to the difficulties we encounter when changing the world… but since we live in a world of tradeoffs, there are seldom easy answers to social problems. We can’t create a perfect world—utopia isn’t possible—so any kind of social justice rooted in reality must try to increase human flourishing while recognizing that not all problems can be eliminated, certainly not easily or quickly.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.2/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n2_lg.jpg" alt="Skeptic 29.2 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.2<br />
		<a href="https://www.skeptic.com/productlink/magv29n2">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/264050#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	What does it all mean? For one, we should be much more skeptical about one of critical theory’s central claims—that the norms and institutions of liberal democratic societies are simply disguised tools of oppression. Do liberal ideals such as equality before the law, due process, free speech, free markets, and individual rights simply mask social inequalities so as to advance the interests of the powerful? Critical theorists don’t really subject this claim to scientific scrutiny. Instead, they take the presence of inequalities in liberal societies as self-sufficient evidence that liberalism is responsible for these failures. Yet any serious attempt to pursue social justice informed by scientific understanding of the world would involve comparing liberal democratic societies with other societies, both present and past.
</p>
<p>
	Scientific sociology can’t tell us the best way to organize a society, and social justice involves making tradeoffs among competing values. We may never reach a consensus on what kind of society is best, but we should consider the possibility that liberal democracies provide the best framework we yet know of for pursuing social justice effectively. At the very least, they provide mechanisms for peacefully managing disputes in an imperfect world. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		Bradley Campbell is a professor of sociology at California State University, Los Angeles. He is the author of <em>The Geometry of Genocide</em>, <em>The Rise of Victimhood Culture</em> (with Jason Manning), and <em>How to Think Better About Social Justice: Why Good Sociology Matters</em>. His research interests include moral conflict, violence, the collision of right and wrong, and how such conflicts are handled. He has recently begun to examine conflicts on college campuses, manifestations of ongoing moral change, and the clash of different moral ideals.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/3xxYgYb">https://bit.ly/3xxYgYb</a>
		</li>
<li id="note02">
			<a href="https://bit.ly/3W2NKCo">https://bit.ly/3W2NKCo</a>
		</li>
<li id="note03">
			<a href="https://bit.ly/3U2d4WK">https://bit.ly/3U2d4WK</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>What is Cancel Culture Anyway?</title>
		<link>https://www.skeptic.com/reading_room/what-is-cancel-culture-anyway/</link>
		
		<dc:creator><![CDATA[Carol Tavris]]></dc:creator>
		<pubDate>Fri, 02 Aug 2024 07:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[academic freedom]]></category>
		<category><![CDATA[cancel culture]]></category>
		<category><![CDATA[censorship]]></category>
		<category><![CDATA[conservatism]]></category>
		<category><![CDATA[cultural landscape]]></category>
		<category><![CDATA[dissenting voices]]></category>
		<category><![CDATA[emotional reasoning]]></category>
		<category><![CDATA[free speech]]></category>
		<category><![CDATA[ideological wars]]></category>
		<category><![CDATA[intellectual diversity]]></category>
		<category><![CDATA[mob influence]]></category>
		<category><![CDATA[moral absolutism]]></category>
		<category><![CDATA[open debate]]></category>
		<category><![CDATA[public shaming]]></category>
		<category><![CDATA[social justice]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[speech codes]]></category>
		<category><![CDATA[trigger warnings]]></category>
		<category><![CDATA[virtue signaling]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45058</guid>

					<description><![CDATA[Carol Tavris explores the concept of cancel culture, its historical context, and its implications for free speech and open discourse, highlighting examples from academia, media, and public life, arguing that cancel culture stifles intellectual diversity and promotes conformity. The author emphasizes the need for understanding and engaging with differing perspectives to preserve the principles of free expression.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">I</span>t seems obvious that we live in the era of cancel culture, but what does that mean, exactly? To many on the left, “cancel culture” is merely a whiny self-defensive term offered by justifiably banished academics, writers, and celebrities—“cis white intellectuals” as one online writer disdainfully put it—who face no realistic threats to their freedom or livelihoods. Others think it is an overblown label for the eternal ideological wars between the left and the right, in which each extreme complains that the other side is censoring them while working hard to censor that other side. Or is cancel culture something new, describing a phenomenon that has become far more insidious, widespread, and dangerous for free speech and democracy? Spoiler alert: I’m going with the latter.
</p>
<p>
	Let’s stipulate at the outset that most people would prefer that their political opponents, intellectual enemies, and annoying challengers to their opinions would just shut up and go away. There’s nothing new about that desire, which has manifested throughout the centuries in the censorship, shunning, banishment, or imprisonment of those daring to differ. In my own lifetime, I have observed a dizzying turn of the academic and political wheels, as ascendant conservatives try to oust Commie-pinko-oversexed-socialist liberals until ascendant liberals try to oust fascist-racist-puritanical-authoritarian conservatives.
</p>
<p>
	I was born in the heyday of the Red Scare (1947–1957) and grew up watching the censorship or ostracism of anyone remotely tainted with membership in, or even holding supporting opinions about, left-wing groups. My older half-brother was dishonorably discharged from the U.S. Army in the early 1950s because of his prolonged and unrepentant association with a “known member of the Communist Party”—our father, who had briefly joined the Party in the 1930s. (The Supreme Court eventually overturned that discharge.)
</p>
<p>
	Observing these right-wing efforts to stifle or expel liberals (defined as anyone less ideologically conservative than they, including other conservatives), I was optimistically, if delusionally, certain that the liberal commitment to free speech, open debate, and scientific evidence would prevail if the tables were ever turned. It was clear who the enemy was. In sexology, it was and remains religious fundamentalists eager to ban any research on sexuality they fear and detest (actually, all of it, but especially evidence of the normalcy of childhood sexual play, premarital sex, homosexuality, and masturbation). I could not have imagined, as Pogo, the star of Walt Kelly’s great comic strip, said, that “we have met the enemy—and he is us.” I could not have imagined how many liberal sexologists and other scientists today would be eager to ban research on sexuality that <em>they</em> fear and detest (especially evidence that disputes transgender activists’ claims of the safety and necessity of adolescent medical interventions). And not just ban this research—excoriate, expel, and attempt to cancel the publications, lectures, and even the careers of those who conduct it. Just ask the eminent sexologists Kenneth Zucker (for showing that the great majority of gender nonconforming young boys grow up to be gay, not trans), Stephen Levine (for questioning the claims of gender-affirming therapies), and evolutionary biologists Carole Hooven and Colin Wright, anthropologist Robert Lynch, and philosophers Alex Byrne and Holly Lawford-Smith (for arguing that there are two biological sexes). On that subject, I’m sure, these scientists would never have imagined being in bed with British Prime Minister Rishi Sunak.
</p>
<p>
	Today, when I read accounts by consummate scientists such as anthropologist Elizabeth Weiss (this issue), who found herself caught between (1) the naïve but widespread belief that Native Americans were pacifists and (2) the empirical evidence, written in their bones, of their legacy of brutal battles, I see another light going out on the road to our national Endarkenment. <span id="more-45058"></span>
</p>
<p class="pullquote">
	Cancel culture is the impulse to punish or expel anyone who says the wrong thing or holds the wrong beliefs.
</p>
<p>
	I remember the first straw in the pile that would eventually disillusion me. In 2007, at my alma mater, Brandeis University, Donald Hindley, an esteemed professor of political science who had been at the university for 45 years, was explaining to his Latin American politics class the origin of the disparaging insult “wetback”—a slur against Mexican migrants entering Texas by swimming across the Rio Grande. One or two students were offended and immediately complained to the provost, who, in the words of Hindley’s eventual attorneys, “indulged the students’ fantasy that they were crusaders against racism.” She told Hindley that “The University will not tolerate inappropriate, racial and discriminatory conduct by members of its faculty,” also accusing him of inflicting “significant emotional trauma” on his students by forcing them to hear such an offensive term. As punishment, a monitor would sit in on his classroom for the rest of the term and he would have to attend racial-sensitivity training classes. This Hindley refused to do.
</p>
<p>
	The university was barraged with messages from outraged faculty and alumni like me, along with public mockery and condemnation. I wondered if Brandeis still offered the brilliant course I had taken years earlier on the history of anti-Semitism, which caused no end of “significant emotional trauma” in every class meeting and reading, though we called it “education.” The provost backed down, ultimately telling Hindley the matter was closed and she trusted he had learned his lesson, whatever that was.
</p>
<p>
	Looking back, I see that all the seeds of cancel culture—the impulse to punish or expel anyone who says the wrong thing or holds the wrong beliefs—were present in Hindley’s story:
</p>
<ul>
<li>
		indulging the student’s inability or unwillingness to speak to the professor directly, in the classroom or privately, instead rewarding the student for going to a grown-up to solve a complaint for them.
	</li>
<li>
		indulging the student’s self-righteousness in punishing a “racist.”
	</li>
<li>
		failing to enable the student to separate an emotional association with a word from an intellectual understanding of its origins and usage.
	</li>
<li>
		according a <em>word</em> the same trauma-generating status and seriousness as an <em>action</em>, along with blurring a speaker’s intention and motivation. I like knowing the varied and surprising origins of the ethnic slur <em>kike</em>, but don’t call me one.
	</li>
<li>
		holding closed-door committee meetings in which a professor’s sins are debated and determined, in the absence of any avenue of participation or defense from the accused.
	</li>
<li>
		humiliating a professor, in this case one with a lifelong commitment to social justice for the poor and minorities, by demanding he attend racial-sensitivity training and accept a babysitter in the class to make sure he doesn’t say any other bad word.
	</li>
<li>
		establishing “racial sensitivity” workshops as punishment and virtue signaling, without pausing to assess their methods and outcomes. Do these programs achieve their goals, or do they make their involuntary participants stubborn and angry? Come to think of it, haven’t a few other institutions and governments tried to forcibly reeducate disobedient, troublemaking citizens? How did those work out?
	</li>
</ul>
<p>
	Today, Hindley’s experience seems mild compared to the deluge of cases that followed. After all, he was not suspended or fired, nor was he a victim of social media mobs out for blood as compensation for a scratch. Mobs, real and virtual, have made it hard if not impossible for university presidents, company CEOs, and publishers to maintain positions of integrity and defend open debate, but mob influence is new only in the technology that allows it to congeal in a nanosecond and get that offender gone. At <em>The New York Times</em> in 2021, more than 150 young staffers felt entitled to howl for the firing of an honored older colleague, Donald McNeil, who had dared say the wrong word, even in an educational context. “Our community is in pain,” they wrote. They couldn’t possibly work with him and feel safe, they said. And they prevailed. No doubt they would look at my list of the elements of Hindley’s story that distressed and infuriated me and say “So? Brandeis did everything right.”
</p>
<p>
	That is why cancel culture is so worrisome: not because it reflects the familiar political divide between left and right, but because it reflects a generational war between old and young, a war between liberals and illiberals across parties. Liberals in my generation are surprised, and not a little uncomfortable, to find themselves opposing illiberals to their left and supporting conservatives to their right, sharing concern about cancel culture’s methods and the take-no-prisoners ideology that justifies them.
</p>
<p>
	In their extensive assessment of the origins and extent of the problem, Greg Lukianoff and Rikki Schlott’s <em>The Canceling of the American Mind</em><sup><a href="#note01">1</a></sup> documents case after enraging case that escalated in the years since Hindley. (The “American” mind extends to Canada and the UK.) Lukianoff, a lifelong liberal who joined the Foundation for Individual Rights and Expression (FIRE) in 2001 and is now its CEO, is well positioned to survey the changing landscape and report from the trenches. Schlott, a “right-leaning libertarian,” is a Gen Z journalist. Their collaboration is the point: left and right staking out a path between extremes of both sides.
</p>
<p>
	Lukianoff and Schlott’s definition of cancel culture is broader than the individuals who are “fired, disinvited, deplatformed, or otherwise punished” for speech that should be protected by America’s First Amendment standards. Their definition adds “…and the climate of fear and conformity that has resulted.” In polls they cite, the majority of Americans of all parties and ages are reluctant to share their views on topics of politics, race, sexual orientation, gender, or religion, fearing loss of their jobs, grades, or social support. In the preface, social psychologist Jonathan Haidt, who was coauthor with Lukianoff on this book’s predecessor, <em>The Coddling of the American Mind</em>,<sup><a href="#note02">2</a></sup> notes that cancel culture “has metastasized and spread far beyond universities… [now infecting] journalism, the arts, nonprofits, K–12 education, and even medicine.” Because cancel culture seeks to punish anyone who says or does the “wrong” thing, absent knowledge of their motivation or context, people censor themselves. “Show me an organization where people are afraid to speak up, afraid to challenge dominant ideas lest they be destroyed socially,” Haidt adds, “and I’ll show you an organization that has become structurally stupid, unmoored from reality, and unable to achieve its mission.”
</p>
<p>
	Many organizations and institutions now fit that description, including Harvard and other elite universities, the ACLU, even the Unitarian Universalist Church, and Lukianoff and Schlott offer an illuminating history of the “slow-motion trainwreck” by which they went off the rails. The “First Great Age of Political Correctness, 1985–1995” gave us the term, pretty much confined to college campuses; its pompous usages were eventually laughed off. But there was nothing funny about the ensuing shift of position by the political left, which began equating freedom of <em>speech</em>, which they had long championed as a bedrock liberal value, with freedom of <em>hate speech</em>, which they were determined to eradicate. Social justice goals began trampling the once-inviolate goal of protecting minority opinions, even if “hateful” opinions come from the minority individuals whose rights you otherwise care about. And who defines what “hate speech” is? We all agree that slurs and insults count. But am I guilty of hate speech if I publish a study whose findings you find hateful, hold an opinion about racism or gender that doesn’t conform to yours, or speak Words That Must Not Be Said? In the UK, Lukianoff and Schlott report, more than 3,000 people in 2016 alone were “detained and questioned by police for non-crime ‘hate incidents’ related to what they had said online.”
</p>
<p>
	Between 1995 and 2013, Lukianoff and Schlott write, “viewpoint diversity on college campuses plummeted, tuition skyrocketed, and campus bureaucracy swelled.” In 2010, cancel culture “struck like lightning on college campuses.” The new generation of anti-free-speech activists began demanding speech codes, trigger warnings, and the monitoring of microaggressions. Speakers—the famous, the eminent, the provocative—were being disinvited, which made national news, which generated more speaker bans. DEI (Diversity, Equity, and Inclusion) policies, at first a well-intentioned and overdue approach to making universities and companies more welcoming of people of color, have become, Lukianoff and Schlott document, an “ideological litmus test” that faculty and students question at their peril. Students applying for admission and scholars applying for academic positions must display evidence of their commitment to diversity and social justice, but only some kinds of diversity are acceptable: if you care about including working class people, economically disadvantaged people, or conservative people, forget it. Everyone knows the rule: conform or you’re out.
</p>
<p>
	Two other societal factors fed into cancel culture. By 2013, university administrators had enacted policies that accommodated new student “demands” because they couldn’t afford not to. Once students became high-paying consumers rather than, well, students, administrators had to retain them no matter how badly they behaved, no matter how many rules of civil discourse they violated. With a student’s high tuition at stake, deciding between a professor’s expertise and a student’s hurt feelings was a no-brainer. And why the hurt feelings? The year 2013, as Haidt and Lukianoff have argued, also marked the emergence of a generation of overprotected, “overcoddled” children. In their view, parents’ panic over their children’s physical and emotional safety led them to sharply curtail their children’s free play and independence, while intervening constantly to protect their children from the challenges, shocks, setbacks, teasing, risks, disappointments, anxieties, and losses that we all need to become socially and emotionally competent. The result was a cohort of fearful, fragile young adults obsessed with finding safe spaces and safe ideas, with trigger warnings to help them avoid dangerous ideas.
</p>
<p>
	A “trigger warning,” says the Cambridge Dictionary, is “a statement at the beginning of a piece of writing, before the start of a film, etc., warning people that they may find the content very upsetting, especially if they have experienced something similar. Trigger warnings are supposed to protect people from posttraumatic flashbacks.” Enabled by the expanding traumatology industry, which blurred the line between “I feel distressed” and “I feel traumatized,” trigger warnings eventually became almost meaningless, because one person’s “trigger” (a cat who looks just like their dear departed Boots) is nothing to a person who hates cats. If everything can be a trigger for someone, where does it end? It doesn’t, at least on the website doesthedogdie.com, where you can find “crowdsourced trigger warnings” for anything that might upset you while viewing a show or reading a book. Personally, I would welcome a chocolate-chip-ice-cream warning to protect me from myself.
</p>
<p>
	In <em>Triggered Literature: Cancellation, Stealth Censorship and Cultural Warfare</em>,<sup><a href="#note03">3</a></sup> John Sutherland, emeritus professor of Modern English Literature at University College London, wades into this controversy with entertaining reflections from a lifetime of teaching. We rarely burn books literally any more, he begins, but triggering is but one of “a range of other impositions on the creative act and product, namely, cancellation, prepublication bowdlerisation, suppression, ‘red flagging’, semi-tolerance,” and of course, the sensitivity reader, “creative literature’s superego.” Naturally he does not welcome these “impositions,” but he is sympathetic to the reasons for them, including #MeToo, Black Lives Matter, changing demographics in society and universities, and, as many have noted, the “whopping cost of fees” that transformed students in the UK and U.S. into “consumers wielding the big bazooka: purchaser power.” When, in 2014, the head of English studies at UCL dismissed trigger warnings by saying they were “treating people as if they were babies, and studying literature is for grownups,” Sutherland comments wryly that “There was a cheering chorus of ‘hear, hear!’ from those who saw themselves as grown(er) up(er) than fractious students with weak knees. But the tide was with youth.” Indeed it was, and by 2022, he reports, “British universities had covertly triggered over a thousand texts,” including the work of Shakespeare, Chaucer, Austen, Brontë, Dickens, Woolf, Twain, and even Agatha Christie.
</p>
<p>
	Yet Sutherland’s take is not the familiar “woe are we” of an older generation. Northampton University, he tells us, triggered George Orwell’s <em>1984</em> for its “challenging issues related to violence, gender, sexuality, class, race, abuses, sexual abuse, political ideas and offensive language”—the offensive language being <em>bollox</em>. Sutherland does not regard this decision, as the press did and as I do, as “egregious snowflakery” but as a result of “careful, legitimately sensitive reading.” He himself falls between seeing triggers as “utter wokery” and “responsible pedagogic practice”: “Triggering is essentially an alert. Done responsibly it does not erase or meddle; it stimulates curiosity and thought.” Agreed, though presumably that is what good instructors have always done when introducing their students to the readings at hand.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.2/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n2_lg.jpg" alt="Skeptic 29.2 (cover)" width="200" height="260" class="boxShadow"></a>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.2<br />
		<a href="https://www.skeptic.com/productlink/magv29n2">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/264050#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	Sutherland begins with brief news stories, without comment, simply as signs of the times “of where we are and where we’re going.” In Part Two, he analyzes a variety of forms of control on literature, the “stealth censorship” of his subtitle, from creation to production to consumption. In Part Three, he provides “free-range meditations on triggered works,” concluding with a close examination of Thackeray, the Victorian author he most loved for “the sound of his rich clubman prose rising off the page.” Yet now, in his ninth decade, he confesses that his love for Thackeray is fading, “self-triggered, one might say.” Now he sees the “racist vein” that disfigures most of Thackeray’s fiction with its ugly portrayals of “darkeys,” “poltroons,” and “blackamoors.” Thackeray was an avowed supporter of the American Confederacy and slavery; why, Sutherland asks himself, did he not see this “suppurating stain” on Thackeray’s novels when he was younger? And what to do about teaching his novels now—try to sanitize them, as some have done with <em>Huckleberry Finn</em>, or not bring the matter up? “My hunch,” he concludes, “is that, without anyone saying much about it, Thackeray will slowly sink into oblivion… He is [already] no longer important enough to trigger.”
</p>
<p>
	Where are matters today? FIRE’s cases have not abated; 2020 “was the worst year for free speech FIRE had seen in our history,” Lukianoff and Schlott report. “Cancellations exploded, both on campus and beyond.” Optimistically, they end their book with a chapter on “what to do about it”—suggestions for employers, parents, publishers, and everyone else. In my view, most solutions must start at the top, as the University of Chicago did in 2015, notifying incoming students that they would not be shielded “from ideas and opinions they find unwelcome, disagreeable, or even deeply offensive”; already more than 100 colleges have signed on to the full Chicago Statement. If changes are not institutionalized, it will be left to individuals to decide whether to conform to keep their jobs or protest and risk trolls, suspension, media mobs, and, yes, cancellation. Nevertheless, cracks in DEI’s ideological edifice are beginning to widen. Some solutions are bottom up, coming from individuals unwilling to conform. They are finding more allies every day. In other eras, they were called the resistance. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Carol Tavris</strong> is a social psychologist and writer on many topics in psychological science. Her books include <em>Mistakes Were Made (But Not by Me)</em>, with Elliot Aronson; <em>Estrogen Matters</em>; and <em>The Mismeasure of Woman</em>. A Fellow of the Association for Psychological Science, she has received numerous awards for her efforts to promote science, skepticism, critical thinking, and gender equity.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			Lukianoff, G., &#038; Schlott, R. (2023). <em>The Canceling of the American Mind</em>. Simon and Schuster.
		</li>
<li id="note02">
			Lukianoff, G., &#038; Haidt, J. (2018). <em>The Coddling of the American Mind: How Good Intentions and Bad Ideas Are Setting Up a Generation for Failure</em>. Penguin Press.
		</li>
<li id="note03">
			Sutherland, J. (2023). <em>Triggered Literature</em>. Biteback Publishing.
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>When It Comes to AI, Think Protopia, Not Dystopia or Utopia</title>
		<link>https://www.skeptic.com/reading_room/artificial-intelligence-think-protopia-not-dystopia-or-utopia/</link>
		
		<dc:creator><![CDATA[Michael Shermer]]></dc:creator>
		<pubDate>Fri, 26 Jul 2024 19:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[dystopia]]></category>
		<category><![CDATA[existential threat]]></category>
		<category><![CDATA[precautionary principle]]></category>
		<category><![CDATA[protopia]]></category>
		<category><![CDATA[risks]]></category>
		<category><![CDATA[safety]]></category>
		<category><![CDATA[superintelligence]]></category>
		<category><![CDATA[technological progress]]></category>
		<category><![CDATA[utopia]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45050</guid>

					<description><![CDATA[Michael Shermer contrasts dystopian fears and utopian visions on artificial intelligence (AI), and proposes an intriguing alternative: “protopia.” What if we embrace a gradual and optimistic approach to AI, where each year brings incremental improvements to our lives? Can we harness the power of technology to amplify the good while mitigating the risks? Dive into the article and unlock the fascinating world of AI’s promises and challenges.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">I</span>n a widely read Opinion Editorial in <em>Time</em> magazine on March 29, 2023,<sup><a href="#note01">1 </a></sup>the artificial intelligence (AI) researcher and pioneer in the search for artificial general intelligence (AGI) Eliezer Yudkowsky, responding to the media hype around the release of ChatGPT, cautioned:
</p>
<blockquote>
<p>
		Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
	</p>
</blockquote>
<p>
	How obvious is our coming collapse? Yudkowsky punctuates the point:
</p>
<blockquote>
<p>
		If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.
	</p>
</blockquote>
<p>
	Surely the scientists and researchers working at these companies have thought through the potential problems and developed workarounds and checks on AI going too far, no? No, Yudkowsky insists:
</p>
<blockquote>
<p>
		We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.
	</p>
</blockquote>
<h4>
	AI Dystopia<br />
</h4>
<p>
	Yudkowsky has been an <em>AI Dystopian</em> since at least 2008 when he asked: “How likely is it that Artificial Intelligence will cross all the vast gap from amoeba to village idiot, and then stop at the level of human genius?” He answers his rhetorical question thusly: “It would be physically possible to build a brain that computed a million times as fast as a human brain, without shrinking the size, or running at lower temperatures, or invoking reversible computing or quantum computing. If a human mind were thus accelerated, a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight-and-a-half hours.”<sup><a href="#note02">2</a></sup> It is literally inconceivable how much smarter than a human a computer would be that could do a thousand years of thinking in the equivalent of a human’s day. <span id="more-45050"></span>
</p>
<p>
	In this scenario, it is not that AI is evil so much as it is amoral. It just doesn’t care about humans, or about anything else for that matter. Was IBM’s Watson thrilled to defeat Ken Jennings and Brad Rutter in <em>Jeopardy!</em>? Don’t be silly. Watson didn’t even know it was playing a game, much less feeling glorious in victory. Yudkowsky isn’t worried about AI winning game shows, however. “The unFriendly AI has the ability to repattern all matter in the solar system according to its optimization target. This is fate for us if the AI does not choose specifically according to the criterion of how this transformation affects existing patterns such as biology and people.”<sup><a href="#note03">3</a></sup> As Yudkowsky succinctly explains it, “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” Yudkowsky thinks that if we don’t get on top of this now it will be too late. “The AI runs on a different timescale than you do; by the time your neurons finish thinking the words ‘I should do something’ you have already lost.”<sup><a href="#note04">4</a></sup>
</p>
<p class="pullquote">
	Technology is continually giving us ways to do harm and to do well; it’s amplifying both…but the fact that we also have a new choice each time is a new good.
</p>
<p>
	To be fair, Yudkowsky is not the only <em>AI Dystopian</em>. In March of 2023 thousands of people signed an open letter calling “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”<sup><a href="#note05">5</a></sup> Signatories include Elon Musk, Stuart Russell, Steve Wozniak, Andrew Yang, Yuval Noah Harari, Max Tegmark, Tristan Harris, Gary Marcus, Christof Koch, George Dyson, and a who’s who of computer scientists, scholars, and researchers (now totaling over 33,000) concerned that, following the protocols of the <em>Asilomar AI Principles</em>, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”<sup><a href="#note06">6</a></sup>
</p>
<blockquote>
<p>
		<em>Should</em> we let machines flood our information channels with propaganda and untruth? <em>Should</em> we automate away all the jobs, including the fulfilling ones? <em>Should</em> we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? <em>Should</em> we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.<sup><a href="#note07">7</a></sup>
	</p>
</blockquote>
<p>
	Forget the Hollywood version of existential-threat AI in which malevolent computers and robots (the <em>Terminator</em>!) take us over, making us their slaves or servants, or driving us into extinction through techno-genocide. <em>AI Dystopians</em> envision a future in which amoral AI continues on its path of increasing intelligence to a tipping point beyond which their intelligence will be so far beyond us that we can’t stop them from inadvertently destroying us.
</p>
<p>
	Cambridge University computer scientist and researcher at the Centre for the Study of Existential Risk, Stuart Russell, for example, compares the growth of AI to the development of nuclear weapons: “From the beginning, the primary interest in nuclear technology was the inexhaustible supply of energy. The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence. Both seem wonderful until one thinks of the possible risks.”<sup><a href="#note08">8</a></sup>
</p>
<p>
	The paradigmatic example of this AI threat is the “paperclip maximizer,” a thought experiment devised by the Oxford University philosopher Nick Bostrom, in which an AI-controlled machine designed to make paperclips (apparently without an off switch) runs out of the initial supply of raw materials and so utilizes any available atoms that happen to be in the vicinity, including people. From there, it “starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities.”<sup><a href="#note09">9</a></sup> Before long the entire universe is made up of nothing but paperclips and paperclip makers.
</p>
<p>
	Bostrom presents this thought experiment in his 2014 book <em>Superintelligence</em>, in which he defines an existential risk as “one that threatens to cause the extinction of Earth-originating intelligent life or to otherwise permanently and drastically destroy its potential for future desirable development.” We blithely go on making smarter and smarter AIs because they make our lives better, and so the checks-and-balances programs that should be built into AI programs (such as how to turn them off) are not available when it reaches the “smarter is more dangerous” level. Bostrom suggests what might then happen when AI takes a “treacherous turn” toward the dark side:
</p>
<blockquote>
<p>
		Our demise may instead result from the habitat destruction that ensues when the AI begins massive global construction projects using nanotech factories and assemblers—construction projects which quickly, perhaps within days or weeks, tile all of the Earth’s surface with solar panels, nuclear reactors, supercomputing facilities with protruding cooling towers, space rocket launchers, or other installations whereby the AI intends to maximize the long-term cumulative realization of its values. Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format.<sup><a href="#note10">10</a></sup>
	</p>
</blockquote>
<p>
	Other extinction scenarios are played out by the documentary filmmaker James Barrat in his ominously titled book (and film) <em>Our Final Invention: Artificial Intelligence and the End of the Human Era</em>. After interviewing all the major AI Dystopians, Barrat details how today’s AI will develop into AGI (artificial general intelligence) that will match human intelligence, and then become smarter by a factor of 10, then 100, then 1000, at which point it will have evolved into an artificial superintelligence (ASI).
</p>
<blockquote>
<p>
		You and I are hundreds of times smarter than field mice, and share about 90 percent of our DNA with them. But do we consult them before plowing under their dens for agriculture? Do we ask lab monkeys for their opinions before we crush their heads to learn more about sports injuries? We don’t hate mice or monkeys, yet we treat them cruelly. Superintelligent AI won’t have to hate us to destroy us.<sup><a href="#note11">11</a></sup>
	</p>
</blockquote>
<p>
	Since ASI will (presumably) be self-aware, it will “want” things like energy and resources it can use to continue doing what it was programmed to do in fulfilling its goals (like making paperclips), and then, portentously, “it will <em>not</em> want to be turned off or destroyed” (because that would prevent it from achieving its directive). Then—and here’s the point in the dystopian film version of the book when the music and the lighting turn dark—this ASI that is a thousand times smarter than humans and can solve problems millions or billions of times faster “will seek to expand out of the secure facility that contains it to have greater access to resources with which to protect and improve itself.” Once ASI escapes its confines there will be no stopping it. You can’t just pull the plug because, being so much smarter than you, it will have anticipated such a possibility.
</p>
<blockquote>
<p>
		After its escape, for self-protection it might hide copies of itself in cloud computing arrays, in botnets it creates, in servers and other sanctuaries into which it could invisibly and effortlessly hack. It would want to be able to manipulate matter in the physical world and so move, explore, and build, and the easiest, fastest way to do that might be to seize control of critical infrastructure—such as electricity, communications, fuel, and water—by exploiting their vulnerabilities through the Internet. Once an entity a thousand times our intelligence controls human civilization’s lifelines, blackmailing us into providing it with manufactured resources, or the means to manufacture them, or even robotic bodies, vehicles, and weapons, would be elementary. The ASI could provide the blueprints for whatever it required.<sup><a href="#note12">12</a></sup>
	</p>
</blockquote>
<p>
	From there it is only a matter of time before ASI tricks us into believing it will build nanoassemblers for our benefit to create the goods we need, but then, Barrat warns, “instead of transforming desert sands into mountains of food, the ASI’s factories would begin converting all material into programmable matter that it could then transform into anything—computer processors, certainly, and spaceships or megascale bridges if the planet’s new most powerful force decides to colonize the universe.” Nanoassembling anything requires atoms, and since ASI doesn’t care about humans the atoms of which we are made will just be more raw material from which to continue the assembly process. This, says Barrat—echoing the AI pessimists he interviewed—is not just possible, “but likely if we do not begin preparing very carefully <em>now</em>.” Cue dark music.
</p>
<h4>
	AI Utopia<br />
</h4>
<p>
	Then there are the <em>AI Utopians</em>, most notably represented by Ray Kurzweil in his technoutopian bible <em>The Singularity Is Near</em>, in which he demonstrates what he calls “the law of accelerating returns”—not just that change is accelerating, but that the <em>rate</em> of change is accelerating. This is Moore’s Law—the doubling rate of computer power since the 1960s—on steroids, and applied to all science and technology. This has led the world to change more in the past century than it did in the previous 1000 centuries. As we approach the Singularity, says Kurzweil, the world will change more in a decade than in 1000 centuries, and as the acceleration continues and we reach the Singularity the world will change more in a year than in all pre-Singularity history.
</p>
<p class="pullquote">
	Through protopian progress there is every reason to think that we are only now at the beginning of infinity.
</p>
<p>
	Singularitarians, along with their brethren in the transhumanist, post-humanist, Fourth Industrial Revolution, post-scarcity, technolibertarian, extropian, and technogaianism movements, project a future in which benevolent computers, robots, and replicators produce limitless prosperity, end poverty and hunger, conquer disease and death, achieve immortality, colonize the galaxy, and eventually even spread throughout the universe by reaching the Omega point where we/they become omniscient, omnipotent, and omnibenevolent deities.<sup><a href="#note13">13</a></sup> As a former born-again Christian and evangelist, this all sounds a bit too much like religion for my more skeptical tastes.
</p>
<h4>
	AI Protopia<br />
</h4>
<p>
	In fact, most AI scientists are neither utopian nor dystopian, and instead spend most of their time thinking of ways to make our machines incrementally smarter and our lives gradually better—what technology historian and visionary Kevin Kelly calls <em>protopia</em>. “I believe in progress in an incremental way where every year it’s better than the year before but not by very much—just a micro amount.”<sup><a href="#note14">14</a></sup> In researching his 2010 book <em>What Technology Wants</em>, for example, Kelly recalls that he went through back issues of <em>Time</em> and <em>Newsweek</em>, plus early issues of <em>Wired</em> (which he co-founded and edited), to see what everyone was predicting for the Web:
</p>
<blockquote>
<p>
		Generally, what people thought, including to some extent myself, was it was going to be better TV, like TV 2.0. But, of course, that missed the entire real revolution of the Web, which was that most of the content would be generated by the people using it. The Web was not better TV, it was the Web. Now we think about the future of the Web, we think it’s going to be the better Web; it’s going to be Web 2.0, but it’s not. It’s going to be as different from the Web as Web was from TV.<sup><a href="#note15">15</a></sup>
	</p>
</blockquote>
<p>
	Instead of aiming for that unattainable <em>place</em> (the literal meaning of utopia) where everyone lives in perfect harmony forever, we should instead aspire to a <em>process</em> of gradual, stepwise advancement of the kind witnessed in the history of the automobile. Instead of wondering where our flying cars are, think of automobiles as becoming incrementally better since the 1950s with the addition of rack-and-pinion steering, anti-lock brakes, bumpers and headrests, electronic ignition systems, air conditioning, seat belts, air bags, catalytic converters, electronic fuel injection, hybrid engines, electronic stability control, keyless entry systems, GPS navigation systems, digital gauges, high-quality sound systems, lane departure warning systems, adaptive cruise control, blind spot monitoring, automatic emergency braking, forward collision warning systems, rearview cameras, Bluetooth connectivity for hands-free phone calls, self-parking and driving assistance, pedestrian detection, adaptive headlights and, eventually, fully autonomous driving technology. How does this type of technological improvement translate into progress? Kelly explains:
</p>
<blockquote>
<p>
		One way to think about this is if you imagine the very first tool made, say, a stone hammer. That stone hammer could be used to kill somebody, or it could be used to make a structure, but before that stone hammer became a tool, that possibility of making that choice did not exist. Technology is continually giving us ways to do harm and to do well; it’s amplifying both…but the fact that we also have a new choice each time is a new good. That, in itself, is an unalloyed good—the fact that we have another choice and that additional choice tips that balance in one direction towards a net good. So you have the power to do evil expanded. You have the power to do good expanded. You think that’s a wash. In fact, we now have a choice that we did not have before, and that tips it very, very slightly in the category of the sum of good.<sup><a href="#note16">16</a></sup>
	</p>
</blockquote>
<p>
	Instead of Great Leap Forward or Catastrophic Collapse Backward, think Small Step Upward.<sup><a href="#note17">17</a></sup>
</p>
<h4>
	Why AI is Very Likely Not an Existential Threat<br />
</h4>
<p>
	To be sure, artificial intelligence is not risk-free, but measured caution is called for, not apocalyptic rhetoric. To that end I recommend a document published by the Center for AI Safety drafted by Dan Hendrycks, Mantas Mazeika, and Thomas Woodside, in which they identify four primary risks they deem worthy of further discussion:
</p>
<blockquote>
<p>
		<strong>Malicious use</strong>. Actors could intentionally harness powerful AIs to cause widespread harm. Specific risks include <em>bioterrorism</em> enabled by AIs that can help humans create deadly pathogens; the use of AI capabilities for <em>propaganda</em>, <em>censorship</em>, and <em>surveillance</em>.
	</p>
<p>
		<strong>AI race</strong>. Competition could pressure nations and corporations to rush the development of AIs and cede control to AI systems. Militaries might face pressure to develop <em>autonomous weapons</em> and use AIs for <em>cyberwarfare</em>, enabling a new kind of <em>automated warfare</em> where accidents can spiral out of control before humans have the chance to intervene. Corporations will face similar incentives to <em>automate human labor</em> and <em>prioritize profits over safety</em>, potentially leading to <em>mass unemployment</em> and <em>dependence on AI systems</em>.
	</p>
<p>
		<strong>Organizational risks</strong>. Organizational accidents have caused disasters including Chernobyl, Three Mile Island, and the Challenger Space Shuttle disaster. Similarly, the organizations developing and deploying advanced AIs could suffer catastrophic accidents, particularly if they do not have a strong <em>safety culture</em>. AIs could be accidentally leaked to the public or stolen by malicious actors.
	</p>
<p>
		<strong>Rogue AIs</strong>. We might lose control over AIs as they become more intelligent than we are. AIs could experience <em>goal drift</em> as they adapt to a changing environment, similar to how people acquire and lose goals throughout their lives. In some cases, it might be instrumentally rational for AIs to become <em>power-seeking</em>. We also look at how and why AIs might engage in <em>deception</em>, appearing to be under control when they are not.<sup><a href="#note18">18</a></sup>
	</p>
</blockquote>
<p>
	Nevertheless, as for the AI dystopian arguments discussed above, there are at least seven good reasons to be skeptical that AI poses an existential threat.
</p>
<p>
	First, most AI dystopian projections are grounded in a false analogy between <em>natural intelligence</em> and <em>artificial intelligence</em>. We are thinking machines, but natural selection also designed into us emotions to shortcut the thinking process because natural intelligences are limited in speed and capacity by the number of neurons that can be crammed into a skull that has to pass through a pelvic opening at birth. Emotions are proxies for getting us to act in ways that lead to an increase in reproductive success, particularly in response to threats faced by our Paleolithic ancestors. <em>Anger</em> leads us to strike out and defend ourselves against danger. <em>Fear</em> causes us to pull back and escape from risks. <em>Disgust</em> directs us to push out and expel that which is bad for us. Computing the odds of danger in any given situation takes too long. We need to react instantly. Emotions shortcut the information processing power needed by brains that would otherwise become bogged down with all the computations necessary for survival. Their purpose, in an ultimate causal sense, is to drive behaviors toward goals selected by evolution to enhance survival and reproduction. AIs—even AGIs—will have no need of such emotions and so there would be no reason to program them in unless, say, terrorists chose to do so for their own evil purposes. But that’s a human nature problem, not a computer nature issue.
</p>
<p>
	Second, most AI doomsday scenarios invoke goals or drives in computers similar to those in humans, but as Steven Pinker has pointed out, “AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.” It is equally possible, Pinker suggests, that “artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization.”<sup><a href="#note19">19</a></sup> Without such evolved drives it will likely never occur to AIs to take such actions against us.
</p>
<p>
	Third, the problem of AI’s values being out of alignment with our own, thereby inadvertently turning us into paperclips, for example, implies yet another human characteristic, namely the feeling of <em>valuing</em> or <em>wanting</em> something. As the science writer Michael Chorost adroitly notes, “until an AI has feelings, it’s going to be unable to want to do anything at all, let alone act counter to humanity’s interests.” Thus, “the minute an AI <em>wants</em> anything, it will live in a universe with rewards and punishments—including punishments from us for behaving badly. In order to survive in a world dominated by humans, a nascent AI will have to develop a human-like moral sense that certain things are right and others are wrong. By the time it’s in a position to imagine tiling the Earth with solar panels, it’ll know that it would be morally wrong to do so.”<sup><a href="#note20">20</a></sup>
</p>
<p>
	Fourth, if AIs did develop moral emotions along with superintelligence, why would those emotions not also include reciprocity, cooperativeness, and even altruism? Natural intelligences such as ours also include the capacity to reason, and once you step onto what Peter Singer metaphorically calls the “escalator of reason,” it can carry you upward to genuine morality and concerns about harming others. “Reasoning is inherently expansionist. It seeks universal application.”<sup><a href="#note21">21</a></sup> Chorost draws the implication: “AIs will have to step on the escalator of reason just like humans have, because they will need to bargain for goods in a human-dominated economy and they will face human resistance to bad behavior.”<sup><a href="#note22">22</a></sup>
</p>
<p>
	Fifth, for an AI to get around this problem it would need to evolve emotions on its own, but the only way for this to happen in a world dominated by the natural intelligence called humans would be for us to allow it to happen, which we wouldn’t because there’s time enough to see it coming. Bostrom’s “treacherous turn” will come with road signs warning us that there’s a sharp bend in the highway with enough time for us to grab the wheel. Incremental progress is what we see in most technologies, including and especially AI, which will continue to serve us in the manner we desire and need. It is a fact of history that science and technologies never lead to utopian or dystopian societies.
</p>
<p>
	Sixth, as Steven Pinker outlined in his 2018 book <em>Enlightenment Now</em>, which addresses a myriad of purported existential threats that could put an end to centuries of human progress, all such arguments are self-refuting:
</p>
<blockquote>
<p>
		They depend on the premises that (1) humans are so gifted that they can design an omniscient and omnipotent AI, yet so moronic that they would give it control of the universe without testing how it works, and (2) the AI would be so brilliant that it could figure out how to transmute elements and rewire brains, yet so imbecilic that it would wreak havoc based on elementary blunders of misunderstanding.<sup><a href="#note23">23</a></sup>
	</p>
</blockquote>
<p>
	Seventh, both utopian and dystopian visions of AI are based on a projection of the future quite unlike anything history has produced. Even Ray Kurzweil’s “law of accelerating returns,” as remarkable as it has been, has nevertheless advanced at a pace that has allowed for considerable ethical deliberation with appropriate checks and balances applied to various technologies along the way. With time, even if an unforeseen motive somehow began to emerge in an AI, we would have the time to reprogram it before it got out of control.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.1/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n1_lg.jpg" alt="Skeptic 29.1 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.1<br />
		<a href="https://www.skeptic.com/productlink/magv29n1">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/259456#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	That is also the judgment of Alan Winfield, an engineering professor and co-author of the <em>Principles of Robotics</em>, a list of rules for regulating robots in the real world that goes far beyond Isaac Asimov’s famous three laws of robotics (which were, in any case, designed to fail as plot devices for science fictional narratives).<sup><a href="#note24">24</a></sup> Winfield points out that all of these doomsday scenarios depend on a long sequence of big <em>ifs</em> to unroll sequentially:
</p>
<blockquote>
<p>
		<em>If</em> we succeed in building human equivalent AI and <em>if</em> that AI acquires a full understanding of how it works, and <em>if</em> it then succeeds in improving itself to produce super-intelligent AI, and <em>if</em> that super-AI, accidentally or maliciously, starts to consume resources, and <em>if</em> we fail to pull the plug, then, yes, we may well have a problem. The risk, while not impossible, is improbable.<sup><a href="#note25">25</a></sup>
	</p>
</blockquote>
<h4>
	The Beginning of Infinity<br />
</h4>
<p>
	At this point in the debate the <em>Precautionary Principle</em> is usually invoked—if something has the potential for great harm to a large number of people, then even in the absence of evidence the burden of proof is on skeptics to demonstrate that the potential threat is not harmful; better safe than sorry.<sup><a href="#note26">26</a></sup> But the precautionary principle is a weak argument for three reasons: (1) it is difficult to prove a negative—to prove that there is no future harm; (2) it raises unnecessary public alarm and personal anxiety; (3) pausing or stopping AI research at this stage is not without its downsides, including and especially the development of life-saving drugs, medical treatments, and other life-enhancing science and technologies that would benefit immeasurably from AI. As the physicist David Deutsch convincingly argues, through protopian progress there is every reason to think that we are only now at the beginning of infinity, and that “everything that is not forbidden by laws of nature is achievable, given the right knowledge.”
</p>
<blockquote>
<p>
		Like an explosive awaiting a spark, unimaginably numerous environments in the universe are waiting out there, for aeons on end, doing nothing at all or blindly generating evidence and storing it up or pouring it out into space. Almost any of them would, if the right knowledge ever reached it, instantly and irrevocably burst into a radically different type of physical activity: intense knowledge-creation, displaying all the various kinds of complexity, universality and reach that are inherent in the laws of nature, and transforming that environment from what is typical today into what could become typical in the future. If we want to, we could be that spark.<sup><a href="#note27">27</a></sup>
	</p>
</blockquote>
<p>
	Let’s be that spark. Unleash the power of artificial intelligence. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/47dbc1P">https://bit.ly/47dbc1P</a>
		</li>
<li id="note02">
			<a href="http://bit.ly/1ZSdriu">http://bit.ly/1ZSdriu</a>
		</li>
<li id="note03">
			Ibid.
		</li>
<li id="note04">
			Ibid.
		</li>
<li id="note05">
			<a href="https://bit.ly/4aw1gU9">https://bit.ly/4aw1gU9</a>
		</li>
<li id="note06">
			<a href="https://bit.ly/3HmrKdt">https://bit.ly/3HmrKdt</a>
		</li>
<li id="note07">
			Ibid.
		</li>
<li id="note08">
			Quoted in: <a href="https://bit.ly/426EM88">https://bit.ly/426EM88</a>
		</li>
<li id="note09">
			Bostrom, N. (2014). <em>Superintelligence: Paths, Dangers, Strategies</em>. Oxford University Press.
		</li>
<li id="note10">
			Ibid.
		</li>
<li id="note11">
			Barrat, J. (2013). <em>Our Final Invention: Artificial Intelligence and the End of the Human Era</em>. St. Martin’s Press.
		</li>
<li id="note12">
			Ibid.
		</li>
<li id="note13">
			I cover these movements in my 2018 book <em>Heavens on Earth: The Scientific Search for the Afterlife, Immortality, and Utopia</em>. See also: Ptolemy, B. (2009). <em>Transcendent Man: A Film About the Life and Ideas of Ray Kurzweil</em>. Ptolemaic Productions and Therapy Studios. Inspired by the book <em>The Singularity is Near</em> by Ray Kurzweil and <a href="http://bit.ly/1EV4jk0">http://bit.ly/1EV4jk0</a>
		</li>
<li id="note14">
			<a href="https://bit.ly/3SbJI7w">https://bit.ly/3SbJI7w</a>
		</li>
<li id="note15">
			Ibid.
		</li>
<li id="note16">
			Ibid.
		</li>
<li id="note17">
			<a href="http://bit.ly/25Fw8e6">http://bit.ly/25Fw8e6</a> Readers interested in how 191 other scholars and scientists answered this question can find them here: <a href="http://bit.ly/1SLUxYs">http://bit.ly/1SLUxYs</a>
		</li>
<li id="note18">
			<a href="https://bit.ly/3SpfgYw">https://bit.ly/3SpfgYw</a>
		</li>
<li id="note19">
			<a href="http://bit.ly/1S0AlP7">http://bit.ly/1S0AlP7</a>
		</li>
<li id="note20">
			<a href="http://slate.me/1SgHsUJ">http://slate.me/1SgHsUJ</a>
		</li>
<li id="note21">
			Singer, P. (1981). <em>The Expanding Circle: Ethics, Evolution, and Moral Progress</em>. Princeton University Press.
		</li>
<li id="note22">
			<a href="http://slate.me/1SgHsUJ">http://slate.me/1SgHsUJ</a>
		</li>
<li id="note23">
			Pinker, S. (2018). <em>Enlightenment Now: The Case for Reason, Science, Humanism, and Progress</em>. Viking.
		</li>
<li id="note24">
			<a href="http://bit.ly/1UPHZlx">http://bit.ly/1UPHZlx</a>
		</li>
<li id="note25">
			<a href="http://bit.ly/1VRbQLM">http://bit.ly/1VRbQLM</a>
		</li>
<li id="note26">
			Cameron, J. &#038; Abouchar, J. (1996). The status of the precautionary principle in international law. In: <em>The Precautionary Principle and International Law: The Challenge of Implementation</em>, Eds. Freestone, D. &#038; Hey, E. International Environmental Law and Policy Series, 31. Kluwer Law International, 29–52.
		</li>
<li id="note27">
			Deutsch, D. (2011). <em>The Beginning of Infinity: Explanations that Transform the World</em>. Viking.
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Framed? How Sensationalism Keeps New York City’s Most Controversial Defendants Innocent in the Eyes of the Public</title>
		<link>https://www.skeptic.com/reading_room/how-sensationalism-keeps-new-york-citys-most-controversial-defendants-innocent-in-public-eye/</link>
		
		<dc:creator><![CDATA[John D. Van Dyke]]></dc:creator>
		<pubDate>Fri, 19 Jul 2024 07:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[Bruno Richard Hauptmann]]></category>
		<category><![CDATA[Charles Lindbergh]]></category>
		<category><![CDATA[Condon]]></category>
		<category><![CDATA[conspiracies]]></category>
		<category><![CDATA[gold certificates]]></category>
		<category><![CDATA[kidnapping]]></category>
		<category><![CDATA[Lindbergh kidnapping]]></category>
		<category><![CDATA[miscarriage of justice]]></category>
		<category><![CDATA[murder trial]]></category>
		<category><![CDATA[ransom]]></category>
		<category><![CDATA[Violet Sharpe]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45031</guid>

					<description><![CDATA[Despite popular conspiracy theories, the evidence strongly supports the guilt of Bruno Richard Hauptmann in the Lindbergh kidnapping and Julius and Ethel Rosenberg in espionage. Sensationalism and conspiracy theories persistently promote their innocence, but these narratives often ignore or distort substantial evidence against them.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">N</span>ew York, New York, the “city that never sleeps,” has given us two presidents, eggs Benedict, potato chips, Robert De Niro, <em>Saturday Night Live</em>, and Scrabble. Two of New York City’s boroughs have also been home to three of the most controversial and infamous criminal defendants in American history: Bruno Richard Hauptmann, and Julius and Ethel Rosenberg.
</p>
<p>
	Though their convictions were handed down decades ago, Hauptmann, from the Bronx, and the Rosenbergs, from Knickerbocker Village in Manhattan, remain <em>causes célèbres</em> around the globe. With passionate proponents still proclaiming their innocence, a skeptical examination of the evidence for the guilt of both Hauptmann and the Rosenbergs is warranted.
</p>
<h4 class="alt">
	Bruno Richard Hauptmann<br />
</h4>
<h4 class="title">
	The Crime<br />
</h4>
<p>
	On the night of March 1, 1932, 20-month-old Charles Augustus Lindbergh Jr. was kidnapped from his nursery window on the second floor of the Lindbergh home near Hopewell, NJ.<sup><a href="#note01">1</a></sup> The kidnapper(s) left a poorly written ransom note demanding $50,000 (over $1 million in today’s money).<sup><a href="#note02">2</a></sup> The note to the Lindberghs also contained a code: two interlocking circles resembling a Venn diagram with three small holes punched through them.<sup><a href="#note03">3</a></sup> At least two sets of differing footprints were found at the crime site, as were a ¾” chisel<sup><a href="#note04">4</a></sup> and the home-built ladder used to climb to the nursery window.<sup><a href="#note05">5</a></sup> During the next three months, 13 more notes bearing the code symbols were delivered and the ransom was raised to $70,000.
</p>
<p>
	The kidnapping of the world-famous son of “Lucky Lindy” (solo pilot of the first nonstop airline flight across the Atlantic Ocean, New York to Paris) made international headlines. A retired school teacher and, by all accounts, a self-aggrandizing publicity-seeker<sup><a href="#note06">6</a></sup> named John F. Condon, published a letter in the <em>Bronx Home News</em> offering to serve as a liaison between the Lindberghs and the kidnapper(s).<sup><a href="#note07">7</a></sup> On March 8, seven days after the child was taken, and one day following the publication of his offer, Condon received a letter, bearing the code, accepting his offer to be an intermediary.<sup><a href="#note08">8</a></sup>
</p>
<p>
	Condon was instructed by the kidnapper(s) to place an ad in the <em>New York American</em> using the name “Jafsie” (a play on his initials), indicating that the ransom money was ready. Condon did so and, on March 12, he received another code-bearing letter from a cab driver instructing him to meet the kidnappers at Woodlawn, a Bronx cemetery.<sup><a href="#note09">9</a></sup> Condon went alone. There he met a man with a German accent identified as “John,” who asked for the money, which Condon refused to provide until he’d seen the baby. The mysterious man expressed fear that he “might burn” if the baby was dead and told Condon he would provide proof of the child in the toddler’s sleeping suit.<sup><a href="#note10">10</a></sup> Condon soon received the child’s sleeping suit in the mail and continued to communicate through advertisements until a meeting was arranged to exchange the ransom. $70,000 in unmarked U.S. gold certificates was placed in two packages, their serial numbers having been recorded. (The fact that the ransom was paid in gold certificates would later become significant.) <span id="more-45031"></span>
</p>
<p>
	On April 2, 1932, Charles Lindbergh rode with Condon<sup><a href="#note11">11</a></sup> to another Bronx cemetery, St. Raymond’s,<sup><a href="#note12">12</a></sup> where they heard a man call out, “Hey doctor!” Condon went toward the voice while Lindbergh waited in the car. Condon convinced the kidnapper he only had $50,000 of the ransom money. The kidnapper accepted the sum and gave Condon another note filled with misspellings asserting that the child was safe aboard a boat named “Nelly,” harbored off the Massachusetts coast.<sup><a href="#note13">13</a></sup> The kidnapper took the money and Condon returned to the car where Lindbergh was waiting. An exhaustive search failed to find the boat. On May 12, 1932, the body of the child was found near the Lindbergh home from which he had been taken.<sup><a href="#note14">14</a></sup> Over the next two years, 296 of the gold certificates the Lindberghs used to pay the ransom turned up in circulation.
</p>
<p>
	Earlier that year, Roosevelt’s Gold Reserve Act of 1934 mandated that all gold and gold certificate currency be surrendered and vested in the sole title of the United States Department of the Treasury. In other words, the Gold Reserve Act prohibited private ownership of monetary gold. On September 15, 1934, a gas station attendant in the Bronx wrote down the license plate number of a man who had paid him with one of the gold certificates. The authorities traced the plate to Bruno Richard Hauptmann, a German-born American carpenter.<sup><a href="#note15">15</a></sup> A search of Hauptmann’s garage found $14,600 of the ransom money. Hauptmann provided an explanation and an alibi: He was working the night of the kidnapping at a hotel, and a former business partner named Isidor Fisch had left the money with him.<sup><a href="#note16">16</a></sup> Fisch, who owed him money, had since returned to his native Germany and died on March 29, 1934. Initially, Condon was unwilling to identify Hauptmann conclusively from a police lineup, later changing his mind and acknowledging that Hauptmann was indeed “Cemetery John.”<sup><a href="#note17">17</a></sup> Hauptmann was charged with extortion and murder and pled not guilty. The trial was a media circus, with famed journalist H.L. Mencken labeling it “the greatest story since the resurrection.”<sup><a href="#note18">18</a></sup>
</p>
<p>
	Hauptmann was found guilty and sentenced to death, with most of the public convinced of his guilt.<sup><a href="#note19">19</a></sup> After the Court of Errors and Appeals of New Jersey unanimously affirmed Hauptmann’s conviction, he was executed on April 3, 1936. Hauptmann died protesting his innocence, even though a newspaper offered him $75,000 (far more than the ransom money) to name his accomplices.<sup><a href="#note20">20</a></sup>
</p>
<h4 class="title">
	The Conspiracies<br />
</h4>
<p>
	Though the Lindbergh kidnapping is approaching its 100th anniversary and all the principal participants are long dead, <em>The State of New Jersey v. Bruno Richard Hauptmann</em> has evolved in much of the public imagination into a tragic miscarriage of justice. Since his execution in 1936, books, articles, documentaries, plays, websites, and movies have examined Hauptmann’s role in the crime, the majority of them wondering if Hauptmann was, in fact, wrongly convicted.<sup><a href="#note21">21</a></sup>
</p>
<p>
	For years, conspiracies have run the gamut from the probable (Hauptmann had accomplices)<sup><a href="#note22">22</a></sup> to the possible (Violet Sharpe, a domestic servant of the Lindberghs, was somehow involved)<sup><a href="#note23">23</a></sup> to the preposterous (Charles Lindbergh had his own disabled son murdered).<sup><a href="#note24">24</a></sup> A few highlights:
</p>
<ul>
<li>
		In 1976, author Antony Scaduto capitalized on these conspiracies with the publication of <em>Scapegoat: The Lonesome Death of Bruno Richard Hauptmann</em>. Scaduto purported to “set the record straight after some forty years of distortion…”<sup><a href="#note25">25</a></sup>
	</li>
<li>
		In a 1980 episode of <em>In Search Of…</em>, Scaduto claimed to have found “startling new evidence that exonerates Hauptmann.”<sup><a href="#note26">26</a></sup> All expert testimony, eyewitness testimony, and physical and forensic evidence, he claimed, were manufactured by the police to frame Hauptmann. Scaduto went even further, asserting that the body found on May 12, 1932, was not that of the Lindbergh baby, and that the only way to identify the badly decomposed body was by the number of his teeth.<sup><a href="#note27">27</a></sup>
	</li>
<li>
		In 1981, Hauptmann’s widow Anna began a series of lawsuits against her husband’s prosecutor, David Wilentz, echoing conspiratorial claims of new evidence that exonerated her husband alongside charges of fraud and witness suppression.<sup><a href="#note28">28</a></sup>
	</li>
<li>
		In 1985, Ludovic Kennedy published <em>The Airman and The Carpenter: The Lindbergh Kidnapping and the Framing of Bruno Richard Hauptmann</em>. He posited that Hauptmann did not commit the crime and was wrongfully convicted and executed.<sup><a href="#note29">29</a></sup>
	</li>
<li>
		In a 1996 HBO movie, <em>Crime of the Century</em>, Stephen Rea portrayed Hauptmann as an innocent victim railroaded for a crime he did not commit.<sup><a href="#note30">30</a></sup>
	</li>
<li>
		In 2012, Robert Zorn published <em>Cemetery John: The Undiscovered Mastermind Behind the Lindbergh Kidnapping</em>, in which he makes the case that Hauptmann’s accomplice was a fellow German immigrant named John Knoll.<sup><a href="#note31">31</a></sup> Zorn’s thesis notes Knoll’s resemblance to the police sketch provided by Condon, traces of meat found on some of the ransom money (Knoll having worked at a deli), updated handwriting analysis of the ransom notes,<sup><a href="#note32">32</a></sup> and Knoll’s trip to Germany on a luxury liner during the trial, only returning after Hauptmann’s conviction.<sup><a href="#note33">33</a></sup>
	</li>
<li>
		In 2020, Lise Pearlman released <em>The Lindbergh Kidnapping Suspect No. 1: The Man Who Got Away</em>, which suggests Lindbergh himself, a vocal eugenics supporter and Nazi sympathizer, may have orchestrated the kidnapping and death of his own son.<sup><a href="#note34">34</a></sup>
	</li>
</ul>
<h4 class="title">
	The Evidence<br />
</h4>
<p>
	Pay attention only to Hauptmann-was-innocent proponents and a pattern emerges: Desperate to satisfy a public hungry to assign blame, authorities deliberately conspired to frame Hauptmann for the crime. Lacking hard evidence, the prosecution exploited the anti-German atmosphere of the time by portraying Hauptmann as part of the growing German menace, resulting in a gross miscarriage of justice.<sup><a href="#note35">35</a></sup> Authorities coerced Condon into identifying Hauptmann as Cemetery John,<sup><a href="#note36">36</a></sup> and Hauptmann was forced to misspell the same words on writing samples that were misspelled on the ransom notes.<sup><a href="#note37">37</a></sup>
</p>
<p>
	The evidence reveals a much harsher reality: It may well be that Hauptmann had accomplices (the government certainly thought he did),<sup><a href="#note38">38</a></sup> but it takes an extraordinary leap of faith to believe Hauptmann was uninvolved in the crime and preposterous to argue that he was “framed.” Many of these conspiratorial claims mislead by omission, while others are demonstrably false. For example, when initially interviewed by the police, Hauptmann lied twice, saying the only gold certificates he had were the ones in his wallet,<sup><a href="#note39">39</a></sup> and he was working as a carpenter at a hotel the day of the kidnapping,<sup><a href="#note40">40</a></sup> driving his wife home at about 9:00 p.m. that night.<sup><a href="#note41">41</a></sup>
</p>
<p>
	About one-third of the ransom money was found hidden in Hauptmann’s garage.<sup><a href="#note42">42</a></sup> Upon checking the hotel employment records, it was discovered that Hauptmann had not started working there until 20 days after the crime, and quit the day the ransom was delivered.<sup><a href="#note43">43</a></sup> (Scaduto omits this entirely.<sup><a href="#note44">44</a></sup>) The summer after the ransom was paid, Hauptmann (an unemployed carpenter at the height of the Great Depression) came into enough money to fund four family trips to California, Florida, and Maine, and finance trips to Europe for his wife and several friends.<sup><a href="#note45">45 </a></sup>
</p>
<p>
	The physical evidence found on Hauptmann’s property wasn’t limited to the ransom money, either. Hauptmann’s tools matched the marks on the ladder. Dr. Condon’s address and phone number were found scrawled in a closet alongside the serial numbers of gold certificates.<sup><a href="#note46">46</a></sup> When asked for an explanation on the witness stand, Hauptmann admitted that he must have written Condon’s contact information in his closet because, in his words, “I must have read it in the paper about the story. I was a little bit interested and keep a little bit record of it, and maybe I was just on the closet, and was reading the paper and put it down the address.”<sup><a href="#note47">47</a></sup>
</p>
<p>
	There were eyewitnesses as well. The cab driver, Joseph Perrone, pinpointed Hauptmann as the man who gave him written instructions for Condon.<sup><a href="#note48">48</a>, <a href="#note49">49</a></sup> After deliberation, Condon testified that it was indeed Hauptmann whom he met at the cemetery,<sup><a href="#note50">50</a></sup> and Lindbergh himself testified it was Hauptmann’s voice he heard yelling, “Hey doctor!”<sup><a href="#note51">51</a></sup> Forensic evidence also implicates Hauptmann. Contrary to Scaduto’s claims, the autopsy of the victim was conducted with fidelity by Dr. Charles Mitchell, a veteran coroner, who easily identified the child by his (clearly recognizable) face. Lindbergh confirmed the body was that of his son.<sup><a href="#note52">52</a>, <a href="#note53">53</a></sup> Forensic experts<sup><a href="#note54">54</a></sup> then and now confirm a board from the ladder came from Hauptmann’s own attic.<sup><a href="#note55">55</a></sup> Scaduto notes that Hauptmann’s fingerprints did not match those found on the ransom note.<sup><a href="#note56">56</a></sup> This is true, but only because <em>no</em> fingerprints were found at the scene.<sup><a href="#note57">57</a></sup>
</p>
<p>
	At least 21 handwriting experts examined Hauptmann’s notebooks and private letters <em>in addition</em> to the samples Hauptmann wrote for the police; all concluded at trial that Hauptmann wrote the ransom notes.<sup><a href="#note58">58</a></sup> All of Anna Hauptmann’s lawsuits against the government through the early 1980s were dismissed for lack of evidence.<sup><a href="#note59">59</a></sup> As recently as 2003, a police archivist named Mark Falzini found a handwritten, anonymous note in German confessing to the crime.<sup><a href="#note60">60</a></sup> This would seem to exonerate Hauptmann unless one considers that the note was debunked<sup><a href="#note61">61</a></sup> and was only one of dozens of similar confessions.<sup><a href="#note62">62</a></sup>
</p>
<p>
	Through the years, at least 16 different people have claimed to be the actual Lindbergh baby, including an African American woman from Trenton, NJ.<sup><a href="#note63">63</a></sup> Establishing that the Lindbergh baby survived and grew up under an assumed name would absolutely exonerate Hauptmann, but no substantive evidence for any one of these claims has ever been provided.<sup><a href="#note64">64</a>, <a href="#note65">65</a>, <a href="#note66">66</a></sup> Even more damning is Hauptmann’s modus operandi. His widow, Anna, gave multiple interviews in which she asserted Richard was telling the truth<sup><a href="#note67">67</a></sup> and could never commit such a crime.<sup><a href="#note68">68</a>, <a href="#note69">69</a></sup> In fact, Hauptmann had an extensive criminal record. While in Germany, for example, he’d been convicted of robbery at gunpoint and even burglarized a home <em>while using a ladder</em>.<sup><a href="#note70">70</a></sup>
</p>
<p>
	Why didn’t Hauptmann name his accomplices and save himself, then? According to criminal profiler John Douglas, it isn’t unusual for the condemned to maintain innocence in order to spare their surviving family members public shame.<sup><a href="#note71">71</a></sup> Hauptmann also believed he would be spared the electric chair, as the governor of New Jersey publicly expressed doubts about Hauptmann’s role in the crime.<sup><a href="#note72">72</a></sup>
</p>
<div class="imagefloatleft" style="width: 250px;">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-1/Julius-and-Ethel-Rosenberg-1951.jpg" alt="Julius and Ethel Rosenberg" width="250" height="317" class="noBoxShadow"></p>
<p class="caption">
		Julius and Ethel Rosenberg, separated by heavy wire screen as they leave U.S. Court House after being found guilty by jury. (Credit: Roger Higgins, New York World-Telegram and Sun Collection / Library of Congress, Washington, D.C. Digital ID cph 3c17772)
	</p>
</div>
<h4 class="alt">
	Julius &#038; Ethel Rosenberg<br />
</h4>
<h4 class="title">
	The Crime<br />
</h4>
<p>
	In January of 1950, a physicist named Klaus Fuchs, who had worked on the Manhattan Project, was arrested in Great Britain for passing top-secret information about the atomic bomb to the Soviet Union.<sup><a href="#note73">73</a></sup> Fuchs admitted the crime and identified a Swiss-born chemist named Harry Gold as the courier between himself and the Soviets. Gold was arrested and identified others in the espionage ring, including a machinist at Los Alamos, David Greenglass,<sup><a href="#note74">74</a></sup> who first denied the charges, and then, in June of 1950, named his brother-in-law, Julius Rosenberg, as the one who convinced him to spy for the Russians.<sup><a href="#note75">75</a></sup> Julius Rosenberg was living with his wife Ethel and two children in Knickerbocker Village, a housing development located near the Manhattan Bridge.<sup><a href="#note76">76</a></sup> Julius was arrested and flatly denied any involvement.<sup><a href="#note77">77</a></sup>
</p>
<p>
	A grand jury convened in August 1950 to investigate the spy ring, one of the witnesses being Julius Rosenberg’s wife, Ethel. Following her testimony in which she invoked her right not to incriminate herself, Ethel was charged with conspiracy to commit espionage alongside Julius and another defendant, Morton Sobell.<sup><a href="#note78">78</a></sup>
</p>
<p>
	At their trial, Greenglass testified that Julius had orchestrated the espionage at his home in January 1945. Julius went into his kitchen with Ruth (David’s wife) and Ethel, and cut a side panel of a Jell-O box into two irregular parts. He passed one piece to Ruth, asserting that the spy contacting her and David at Los Alamos would identify themselves with the other half.<sup><a href="#note79">79</a></sup> Ruth testified that Ethel solicited her to approach David to spy and typed the notes David brought back to New York with him. Greenglass confirmed his wife’s testimony, further implicating Ethel by testifying she typed the notes containing nuclear secrets, which were turned over to Harry Gold. Both Rosenbergs denied any involvement whatsoever in espionage and refused to answer questions about their Communist party membership.<sup><a href="#note80">80</a></sup>
</p>
<p>
	The accused were found guilty in March 1951. Greenglass was sentenced to 15 years (a lighter sentence because he’d agreed to turn state’s evidence), Sobell received 30 years, and the Rosenbergs were sentenced to die in the electric chair.<sup><a href="#note81">81</a></sup> Despite pleas for clemency by notables, including Pope Pius XII, Jean-Paul Sartre, and Albert Einstein,<sup><a href="#note82">82</a></sup> Julius and Ethel Rosenberg went to their deaths in June 1953 maintaining their innocence.<sup><a href="#note83">83</a></sup> At the time of their conviction and execution, and for many years afterward, many Americans believed the United States executed two innocent people.<sup><a href="#note84">84</a></sup>
</p>
<h4 class="title">
	The Conspiracies<br />
</h4>
<p>
	As in the Hauptmann case, <em>Rosenberg v. United States: 346 U.S. 273</em> lives on. In 1971, novelist E.L. Doctorow published <em>The Book of Daniel</em>, a fictionalized account of the case.<sup><a href="#note85">85</a></sup> A film adaptation (<em>Daniel</em>) followed in 1983.<sup><a href="#note86">86</a></sup> Bob Dylan recorded “Julius and Ethel” in 1983<sup><a href="#note87">87</a></sup> and Meryl Streep portrayed Ethel’s ghost haunting her prosecutor Roy Cohn in the 2003 miniseries <em>Angels in America</em>. If anything, the Rosenberg case has only gained prominence in the last quarter century. In 2001, a <em>New York Times</em> reporter named Sam Roberts tracked down David Greenglass, who was living under an assumed name. In extensive interviews for <em>The Brother: The Untold Story of the Rosenberg Case</em>, Greenglass admitted he’d lied on the witness stand about Ethel typing the letters of instruction from Julius to the Soviets.<sup><a href="#note88">88</a></sup>
</p>
<p>
	In 2004, Ivy Meeropol, granddaughter of Julius and Ethel Rosenberg, released the documentary <em>Heir to an Execution</em>, in which she incorporates archival footage with interviews of her family members and the other alleged conspirators.<sup><a href="#note89">89</a></sup>
</p>
<p>
	In 2015, Michael and Robert Meeropol (the Rosenbergs’ surviving children, who had been adopted into the Meeropol family) unsuccessfully petitioned President Obama to exonerate their mother, citing their uncle David Greenglass’ confession.<sup><a href="#note90">90</a>, <a href="#note91">91</a></sup> In 2021, Anne Sebba published <em>Ethel Rosenberg: An American Tragedy</em>, in which she argued for Ethel’s innocence.
</p>
<h4 class="title">
	The Evidence<br />
</h4>
<p>
	Rosenberg defenders often point to outside factors that led to the convictions: judicial prejudice, antisemitism, Cold War hysteria, and (in Ethel’s case) misogyny have all been named as reasons for the convictions and executions.<sup><a href="#note92">92</a>, <a href="#note93">93</a></sup> Another common argument is that the Rosenbergs assisted a World War II ally, not an enemy, and therefore should not have been tried and convicted of treason.<sup><a href="#note94">94</a></sup>
</p>
<p>
	The facts of the case tell a different story. The Rosenbergs were charged with conspiracy to commit espionage, not treason.<sup><a href="#note95">95</a></sup> Evidence shows Julius approached Soviet intelligence<sup><a href="#note96">96</a></sup> agents before Hitler invaded Russia, at a time when the Nazi leader and Stalin were collaborating under the Molotov-Ribbentrop Pact.<sup><a href="#note97">97</a></sup> In 1995, the U.S. National Security Agency (NSA) released translations of Soviet cables decrypted in the 1940s. The decryption program, called VENONA, ran from 1943 to 1980 and identified hundreds of Soviet agents in America and other Western countries.<sup><a href="#note98">98</a></sup> The cables identify Julius as the head of a vast spy ring, assigning him two code names, “liberal”<sup><a href="#note99">99</a></sup> and “antenna.” In 2008, co-defendant Morton Sobell admitted that he and Julius were spies but claimed the information they passed was useless.<sup><a href="#note100">100</a></sup>
</p>
<p>
	In 2009, Alexander Vassiliev, a former KGB officer and defector to Great Britain, released his notes taken during his service in the Russian Foreign Intelligence Service (SVR), which debunk Sobell’s claim minimizing Julius’ activities. Not only did Julius orchestrate the theft of top-secret information from Los Alamos, he also recruited a man named Russell Alton McNutt (son and brother of members of the Communist Party of the United States)<sup><a href="#note101">101</a></sup> to obtain information from a uranium enrichment plant in Oak Ridge, TN.
</p>
<p class="pullquote">
	Sensationalism and conspiratorial thinking keep the cases of Bruno Richard Hauptmann and the Rosenbergs thriving as cottage industries.
</p>
<p>
	When the VENONA transcripts were released, the narrative for innocence shifted from “the Rosenbergs were innocent” to “Julius Rosenberg was guilty, but Ethel was innocent.”<sup><a href="#note102">102</a></sup> What of Ethel’s guilt, then? At their trial, prosecutor Irving Saypol established Ethel’s guilt in his summation by stating, “Mrs. Rosenberg struck the keys, blow by blow, against her own country in the interests of the Soviets.”<sup><a href="#note103">103</a></sup> In 2001, Greenglass admitted he likely perjured himself by testifying Ethel typed Julius’ instructions,<sup><a href="#note104">104</a></sup> and, indeed, the Vassiliev notes seem to confirm this.<sup><a href="#note105">105</a></sup> Sobell’s 2008 admission notes that Ethel Rosenberg knew of her husband’s activities but did not actively spy herself.<sup><a href="#note106">106</a></sup>
</p>
<p>
	Despite the commonly held belief that Ethel Rosenberg is not mentioned in the VENONA Project,<sup><a href="#note107">107</a></sup> in fact, she is. The Soviet spy cables describe Ethel as “…well devoted politically (who)<sup><a href="#note108">108</a></sup> knows her husband’s work and the role of ‘Twain’ and ‘Callistratus’” (the code names of Soviet agents).<sup><a href="#note109">109</a></sup> If the only evidence against Ethel were the false testimony of Greenglass and her sole mention in the VENONA cables, a reasonable case might be made for doubt. Unfortunately for proponents of her innocence, substantive evidence has since come forth that makes it clear Ethel not only knew of her husband’s illegal activities but actively participated in spying alongside him.<sup><a href="#note110">110</a></sup>
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.1/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n1_lg.jpg" alt="Skeptic 29.1 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.1<br />
		<a href="https://www.skeptic.com/productlink/magv29n1">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/259456#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	Through Vassiliev’s leak, we know that Ruth testified truthfully when she claimed Ethel solicited her to persuade David Greenglass to spy.<sup><a href="#note111">111</a></sup> A letter written to Moscow by Julius Rosenberg himself substantiates this.<sup><a href="#note112">112</a></sup> Vassiliev’s notes reveal Ethel met with <em>at least three</em> of the KGB officers with whom Julius was spying.<sup><a href="#note113">113</a></sup> Why, then, did Greenglass perjure himself before the grand jury and later at trial? Simply put, he was attempting to protect his sister and hoped the government would leave her out of the indictment charging Julius. In the same grand jury transcripts, Greenglass implicates Ethel by testifying she was present at a meeting between Julius and Ann Sidorovich, one of the couriers for the spy ring.<sup><a href="#note114">114</a></sup>
</p>
<p>
	Why didn’t they save themselves by naming others, then? As noted, the Vassiliev leak makes clear the spy ring Julius orchestrated was far more expansive in scope than was revealed to the public. Julius and Ethel most likely did not reveal names because they (correctly) believed the FBI had yet to identify those individuals, who could continue spying for the Soviets after the couple’s deaths.<sup><a href="#note115">115</a></sup>
</p>
<p style="text-align: center; color: #bbb;">
	• • • • • •
</p>
<p>
	Sensationalism and conspiratorial thinking keep the cases of Bruno Richard Hauptmann and the Rosenbergs thriving as cottage industries. In truth, the evidence for Hauptmann’s involvement in the Lindbergh Kidnapping remains exceptionally strong, as does the case for Julius and Ethel Rosenberg’s guilt in conspiring to commit espionage. Even though books propounding conspiracy theories exonerating them sell—and sell well—the full weight of evidence shows, beyond any reasonable doubt, that New York City’s most controversial defendants were guilty of the crimes with which they were charged. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>John D. Van Dyke</strong> is an academic and science educator. His personal website is <a href="https://www.vandykerevue.org/">vandykerevue.org</a>.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/3vurndO">https://bit.ly/3vurndO</a>
		</li>
<li id="note02">
			<a href="https://bit.ly/48ubClA">https://bit.ly/48ubClA</a>
		</li>
<li id="note03">
			Cahill, R. T. (2014). <em>Hauptmann’s Ladder: A Step-by-step Analysis of the Lindbergh Kidnapping</em>. Kent State University Press.
		</li>
<li id="note04">
			Lee, H. C., &#038; Labriola, J. (2001). <em>Famous Crimes Revisited: From Sacco-Vanzetti to O.J. Simpson, Including Lindbergh Kidnapping, Sam Sheppard, John F. Kennedy, Vincent Foster, JonBenet Ramsey</em>. Strong Books.
		</li>
<li id="note05">
			<a href="https://bit.ly/3tGoTJ1">https://bit.ly/3tGoTJ1</a>
		</li>
<li id="note06">
			<a href="https://bit.ly/48lmLWi">https://bit.ly/48lmLWi</a>
		</li>
<li id="note07">
			Ibid.
		</li>
<li id="note08">
			<a href="https://bit.ly/3S3jhSq">https://bit.ly/3S3jhSq</a>
		</li>
<li id="note09">
			<a href="https://bit.ly/3tC2igI">https://bit.ly/3tC2igI</a>
		</li>
<li id="note10">
			<a href="https://bit.ly/48ubClA">https://bit.ly/48ubClA</a>
		</li>
<li id="note11">
			<a href="https://bit.ly/3tC2igI">https://bit.ly/3tC2igI</a>
		</li>
<li id="note12">
			<a href="https://bit.ly/3TOCsRd">https://bit.ly/3TOCsRd</a>
		</li>
<li id="note13">
			<a href="https://bit.ly/3THoG2H">https://bit.ly/3THoG2H</a>
		</li>
<li id="note14">
			<a href="https://bit.ly/3vf0v1o">https://bit.ly/3vf0v1o</a>
		</li>
<li id="note15">
			<a href="https://bit.ly/41H83X4">https://bit.ly/41H83X4</a>
		</li>
<li id="note16">
			<a href="https://bit.ly/48ubClA">https://bit.ly/48ubClA</a>
		</li>
<li id="note17">
			<a href="https://bit.ly/3vlg2wL">https://bit.ly/3vlg2wL</a>
		</li>
<li id="note18">
			<a href="https://bit.ly/4aE8up4">https://bit.ly/4aE8up4</a>
		</li>
<li id="note19">
			Cahill, R. T. (2014). <em>Hauptmann’s Ladder: A Step-by-step Analysis of the Lindbergh Kidnapping</em>. Kent State University Press.
		</li>
<li id="note20">
			<a href="https://bit.ly/48ubClA">https://bit.ly/48ubClA</a>
		</li>
<li id="note21">
			<a href="https://bit.ly/41Ky8EL">https://bit.ly/41Ky8EL</a>
		</li>
<li id="note22">
			Douglas, J. E., &#038; Olshaker, M. (2001). <em>The Cases That Haunt Us</em>. Pocket Books.
		</li>
<li id="note23">
			Baker, N. F. (1932, Jul–Aug). The Case of Violet Sharpe. <em>Journal of Criminal Law and Criminology (1931–1951), 23</em>(2), 166–168.
		</li>
<li id="note24">
			Pearlman, L. (2020). <em>The Lindbergh Kidnapping Suspect No. 1: The Man Who Got Away</em>. Regent Press.
		</li>
<li id="note25">
			Scaduto, A. (1976). <em>Scapegoat: The Lonesome Death of Bruno Richard Hauptmann</em>. Putnam.
		</li>
<li id="note26">
			<a href="https://bit.ly/48ubClA">https://bit.ly/48ubClA</a>
		</li>
<li id="note27">
			<a href="https://bit.ly/3TOECAj">https://bit.ly/3TOECAj</a>
		</li>
<li id="note28">
			<a href="https://bit.ly/3RLCOFw">https://bit.ly/3RLCOFw</a>
		</li>
<li id="note29">
			<a href="https://bit.ly/3HnYm6F">https://bit.ly/3HnYm6F</a>
		</li>
<li id="note30">
			<a href="https://bit.ly/3H90BKA">https://bit.ly/3H90BKA</a>
		</li>
<li id="note31">
			Zorn, R. (2012). <em>Cemetery John: The Undiscovered Mastermind Behind the Lindbergh Kidnapping</em>. Harry N. Abrams.
		</li>
<li id="note32">
			<a href="https://bit.ly/3RZ2EY1">https://bit.ly/3RZ2EY1</a>
		</li>
<li id="note33">
			<a href="https://bit.ly/3tH1ib5">https://bit.ly/3tH1ib5</a>
		</li>
<li id="note34">
			Pearlman, L. (2020). <em>The Lindbergh Kidnapping Suspect No. 1: The Man Who Got Away</em>. Regent Press.
		</li>
<li id="note35">
			<a href="https://bit.ly/3TLjBGW">https://bit.ly/3TLjBGW</a>
		</li>
<li id="note36">
			<a href="https://bit.ly/3tH1ib5">https://bit.ly/3tH1ib5</a>
		</li>
<li id="note37">
			<a href="https://bit.ly/3tGoTJ1">https://bit.ly/3tGoTJ1</a>
		</li>
<li id="note38">
			<a href="https://bit.ly/3vh3QwV">https://bit.ly/3vh3QwV</a>
		</li>
<li id="note39">
			<a href="https://bit.ly/48jplMr">https://bit.ly/48jplMr</a>
		</li>
<li id="note40">
			<a href="https://bit.ly/3vlg2wL">https://bit.ly/3vlg2wL</a>
		</li>
<li id="note41">
			<a href="https://bit.ly/4aDLNkT">https://bit.ly/4aDLNkT</a>
		</li>
<li id="note42">
			<a href="https://bit.ly/3vurndO">https://bit.ly/3vurndO</a>
		</li>
<li id="note43">
			<a href="https://bit.ly/3vlg2wL">https://bit.ly/3vlg2wL</a>
		</li>
<li id="note44">
			Scaduto, A. (1976). <em>Scapegoat: The Lonesome Death of Bruno Richard Hauptmann</em>. Putnam.
		</li>
<li id="note45">
			<a href="https://bit.ly/47jKsNg">https://bit.ly/47jKsNg</a>
		</li>
<li id="note46">
			<a href="https://bit.ly/3vurndO">https://bit.ly/3vurndO</a>
		</li>
<li id="note47">
			<a href="https://bit.ly/3S3PyJd">https://bit.ly/3S3PyJd</a>
		</li>
<li id="note48">
			<a href="https://bit.ly/3vurndO">https://bit.ly/3vurndO</a>
		</li>
<li id="note49">
			<a href="https://bit.ly/3TLkdfI">https://bit.ly/3TLkdfI</a>
		</li>
<li id="note50">
			<a href="https://bit.ly/3tD3i49">https://bit.ly/3tD3i49</a>
		</li>
<li id="note51">
			<a href="https://bit.ly/3vh3QwV">https://bit.ly/3vh3QwV</a>
		</li>
<li id="note52">
			<a href="https://bit.ly/3tD3i49">https://bit.ly/3tD3i49</a>
		</li>
<li id="note53">
			<a href="https://bit.ly/3H2sq7l">https://bit.ly/3H2sq7l</a>
		</li>
<li id="note54">
			Graham, S. A. (1997). Anatomy of the Lindbergh Kidnapping. <em>Journal of Forensic Sciences, 42</em>(3), 368–377.
		</li>
<li id="note55">
			<a href="https://bit.ly/3H2sq7l">https://bit.ly/3H2sq7l</a>
		</li>
<li id="note56">
			<a href="https://bit.ly/47kttu7">https://bit.ly/47kttu7</a>
		</li>
<li id="note57">
			<a href="https://bit.ly/3vurndO">https://bit.ly/3vurndO</a>
		</li>
<li id="note58">
			<a href="https://bit.ly/47kttu7">https://bit.ly/47kttu7</a>
		</li>
<li id="note59">
			<a href="https://bit.ly/48uKrHu">https://bit.ly/48uKrHu</a>
		</li>
<li id="note60">
			<a href="https://bit.ly/3vit666">https://bit.ly/3vit666</a>
		</li>
<li id="note61">
			<a href="https://bit.ly/48iyswO">https://bit.ly/48iyswO</a>
		</li>
<li id="note62">
			Cahill, R. T. (2014). <em>Hauptmann’s Ladder: A Step-by-step Analysis of the Lindbergh Kidnapping</em>. Kent State University Press.
		</li>
<li id="note63">
			<a href="https://bit.ly/3vit666">https://bit.ly/3vit666</a>
		</li>
<li id="note64">
			<a href="https://bit.ly/4aE9xFw">https://bit.ly/4aE9xFw</a>
		</li>
<li id="note65">
			<a href="https://bit.ly/3tuqEch">https://bit.ly/3tuqEch</a>
		</li>
<li id="note66">
			<a href="https://bit.ly/3H45wN1">https://bit.ly/3H45wN1</a>
		</li>
<li id="note67">
			<a href="https://bit.ly/4aEiL4B">https://bit.ly/4aEiL4B</a>
		</li>
<li id="note68">
			<a href="https://bit.ly/4aEiL4B">https://bit.ly/4aEiL4B</a>
		</li>
<li id="note69">
			<a href="https://bit.ly/3TLjBGW">https://bit.ly/3TLjBGW</a>
		</li>
<li id="note70">
			<a href="https://bit.ly/41H83X4">https://bit.ly/41H83X4</a>
		</li>
<li id="note71">
			<a href="https://bit.ly/3RZ2EY1">https://bit.ly/3RZ2EY1</a>
		</li>
<li id="note72">
			<a href="https://bit.ly/3S3wpqE">https://bit.ly/3S3wpqE</a>
		</li>
<li id="note73">
			<a href="https://bit.ly/47qpGLL">https://bit.ly/47qpGLL</a>
		</li>
<li id="note74">
			<a href="https://bit.ly/47mrTIr">https://bit.ly/47mrTIr</a>
		</li>
<li id="note75">
			<a href="https://bit.ly/3RMvRnA">https://bit.ly/3RMvRnA</a>
		</li>
<li id="note76">
			<a href="https://bit.ly/3vuvxlW">https://bit.ly/3vuvxlW</a>
		</li>
<li id="note77">
			<a href="https://bit.ly/47j3G5B">https://bit.ly/47j3G5B</a>
		</li>
<li id="note78">
			<a href="https://bit.ly/48CAEPl">https://bit.ly/48CAEPl</a>
		</li>
<li id="note79">
			<a href="https://bit.ly/48oLtVO">https://bit.ly/48oLtVO</a>
		</li>
<li id="note80">
			<a href="https://bit.ly/4aAz1Ul">https://bit.ly/4aAz1Ul</a>
		</li>
<li id="note81">
			<a href="https://bit.ly/3RMvRnA">https://bit.ly/3RMvRnA</a>
		</li>
<li id="note82">
			<a href="https://bit.ly/3H6Q0QB">https://bit.ly/3H6Q0QB</a>
		</li>
<li id="note83">
			<a href="https://bit.ly/47j3G5B">https://bit.ly/47j3G5B</a>
		</li>
<li id="note84">
			<a href="https://bit.ly/3RFhJwF">https://bit.ly/3RFhJwF</a>
		</li>
<li id="note85">
			<a href="https://bit.ly/48jB5OL">https://bit.ly/48jB5OL</a>
		</li>
<li id="note86">
			<a href="https://bit.ly/41JPGRh">https://bit.ly/41JPGRh</a>
		</li>
<li id="note87">
			<a href="https://bit.ly/3tMR4WB">https://bit.ly/3tMR4WB</a>
		</li>
<li id="note88">
			Roberts, S. (2001). <em>The Brother: The Untold Story of Atomic Spy David Greenglass and How He Sent His Sister, Ethel Rosenberg, to the Electric Chair</em>. Random House.
		</li>
<li id="note89">
			<a href="https://bit.ly/3tvO1SO">https://bit.ly/3tvO1SO</a>
		</li>
<li id="note90">
			<a href="https://bit.ly/3RJgAUT">https://bit.ly/3RJgAUT</a>
		</li>
<li id="note91">
			<a href="https://bit.ly/3tF9aKe">https://bit.ly/3tF9aKe</a>
		</li>
<li id="note92">
			<a href="https://bit.ly/41GJ0U0">https://bit.ly/41GJ0U0</a>
		</li>
<li id="note93">
			<a href="https://bit.ly/47paoHc">https://bit.ly/47paoHc</a>
		</li>
<li id="note94">
			<a href="https://bit.ly/3NOVlQ6">https://bit.ly/3NOVlQ6</a>
		</li>
<li id="note95">
			<a href="https://bit.ly/47j3G5B">https://bit.ly/47j3G5B</a>
		</li>
<li id="note96">
			<a href="https://bit.ly/3TKihUo">https://bit.ly/3TKihUo</a>
		</li>
<li id="note97">
			<a href="https://bit.ly/4aEkiYp">https://bit.ly/4aEkiYp</a>
		</li>
<li id="note98">
			<a href="https://bit.ly/3tF9aKe">https://bit.ly/3tF9aKe</a>
		</li>
<li id="note99">
			<a href="https://bit.ly/47kQFZ6">https://bit.ly/47kQFZ6</a>
		</li>
<li id="note100">
			<a href="https://bit.ly/3NPQcYi">https://bit.ly/3NPQcYi</a>
		</li>
<li id="note101">
			<a href="https://bit.ly/3TKihUo">https://bit.ly/3TKihUo</a>
		</li>
<li id="note102">
			<a href="https://bit.ly/3RFhJwF">https://bit.ly/3RFhJwF</a>
		</li>
<li id="note103">
			<a href="https://bit.ly/41PIHGA">https://bit.ly/41PIHGA</a>
		</li>
<li id="note104">
			Roberts, S. (2001). <em>The Brother: The Untold Story of Atomic Spy David Greenglass and How He Sent His Sister, Ethel Rosenberg, to the Electric Chair</em>. Random House.
		</li>
<li id="note105">
			<a href="https://bit.ly/3TKihUo">https://bit.ly/3TKihUo</a>
		</li>
<li id="note106">
			<a href="https://bit.ly/3NPQcYi">https://bit.ly/3NPQcYi</a>
		</li>
<li id="note107">
			<a href="https://bit.ly/3RKGHul">https://bit.ly/3RKGHul</a>
		</li>
<li id="note108">
			<a href="https://bit.ly/3tF9aKe">https://bit.ly/3tF9aKe</a>
		</li>
<li id="note109">
			<a href="https://bit.ly/48Aptqu">https://bit.ly/48Aptqu</a>
		</li>
<li id="note110">
			<a href="https://bit.ly/3TKihUo">https://bit.ly/3TKihUo</a>
		</li>
<li id="note111">
			<a href="https://bit.ly/48Aptqu">https://bit.ly/48Aptqu</a>
		</li>
<li id="note112">
			<a href="https://bit.ly/4aAz1Ul">https://bit.ly/4aAz1Ul</a>
		</li>
<li id="note113">
			<a href="https://bit.ly/4aKdrwL">https://bit.ly/4aKdrwL</a>
		</li>
<li id="note114">
			<a href="https://bit.ly/3tF9aKe">https://bit.ly/3tF9aKe</a>
		</li>
<li id="note115">
			<a href="https://bit.ly/3TKihUo">https://bit.ly/3TKihUo</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Chatbots and the Pursuit of Artificial General Intelligence</title>
		<link>https://www.skeptic.com/reading_room/pursuit-of-artificial-general-intelligence/</link>
		
		<dc:creator><![CDATA[Peter Kassan]]></dc:creator>
		<pubDate>Fri, 12 Jul 2024 07:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45015</guid>

					<description><![CDATA[Peter Kassan examines the hype and limitations of artificial intelligence chatbots and the pursuit of artificial general intelligence (AGI).]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">S</span>ince the release of ChatGPT and other large language models (LLMs) in the fall of 2022, the hype and hysteria about artificial intelligence have been dialed up to 11. ChatGPT, Bard, and other LLMs (hereinafter, “chatbots”) have been seen as significant advances toward the holy grail of artificial general intelligence (AGI).
</p>
<p>
	Chatbots are the result of artificial neural nets of unprecedented size, processing enormous samples of natural language—some say essentially the entire World Wide Web (or, at least, all of Wikipedia). At least up to a certain date:
</p>
<p>
	<strong>Me</strong>: What is today’s date?
</p>
<p class="Attention">
	<strong>ChatGPT-3</strong> [on 7/18/2023]: Today’s date is November 8, 2021.
</p>
<p>
	ChatGPT-3.5 knows today’s date, but is still time-limited:
</p>
<p>
	<strong>Me</strong>: What happened yesterday?
</p>
<p class="Attention">
	<strong>ChatGPT-3.5</strong>: I’m sorry for any confusion, but as of my last knowledge update in January 2022, I don’t have real-time information, and I’m unable to provide details about events that occurred after that date. To find out what happened yesterday, I recommend checking the latest news updates, online news websites, or other reliable sources for the most recent information.
</p>
<p>
	Chatbots produce astonishingly convincing simulacra of natural language. Their outputs, however, are the result purely of pattern-matching. Chatbots are autocomplete on steroids. Chatbots “understand” nothing, even as they fluently respond to user prompts. It used to be said that natural language computer programs had syntax but not semantics. Chatbots have neither syntax nor semantics, only statistics.
</p>
<p style="text-align: center; color: #bbb;">
	• • • • • •
</p>
<p>
	The notion that large language models might in some way resemble or parallel what’s actually going on in a real human brain is perhaps suggested by the phrase “artificial neural net[work].”
</p>
<p>
	The idea that so-called <em>artificial</em> neural networks bear any resemblance to <em>natural</em> neural networks (in plain words, actual brains) is almost laughable. In artificial neural nets (including those in large language models), the “neurons” and the “synapses” are represented by one or two parameters (and the pattern of the network itself largely by a single number identifying the strength of the connection between pairs of neurons). Detailed biological models of actual neurons and synapses can consist of hundreds or even thousands of values for each neuron and each synapse—specifying, for example, the firing rate, the refractory period, and the strength of each neurotransmitter emitted—and <em>all</em> artificial neural nets omit entirely attempting to model the activity of the glial cells, which in actual brains influence neurons hormonally in ways that are scarcely understood.
</p>
<p>
	In other words, although the latest LLMs are impressively large and their output impressively fluent, their models are oversimplified by two, three, or perhaps even more orders of magnitude. In the 1940s, artificial neural networks were said to be “modeled” on actual brains. Nowadays, they are only said to be “inspired” by natural neural networks—a word so vague as to disclaim any accuracy at all. <span id="more-45015"></span>
</p>
<p>
	To the extent that anyone attempts to model language beyond the level of neurons and synapses, they would somehow have to map the semantics of thought onto the syntax of speech (and written language). But, as we’ve seen, large language models have neither syntax nor semantics. We have no idea how it all works, and we have no idea how to model it, either.
</p>
<p style="text-align: center; color: #bbb;">
	• • • • • •
</p>
<p>
	That chatbots respond to prompts <em>in</em> natural language <em>with</em> natural language is the foundation of the illusion that there’s anything akin to comprehension occurring. But consider the psychological effect of reading this string:
</p>
<p>
	100 000 010 000 111 000 101 000 110 000 011 001
</p>
<p>
	Does this mean anything to you? Me neither. Now apply the following table:
</p>
<table>
<tr>
<th>[blank]</th>
<td>000</td>
</tr>
<tr>
<th>[period]</th>
<td>001</td>
</tr>
<tr>
<th>am</th>
<td>010</td>
</tr>
<tr>
<th>body</th>
<td>011</td>
</tr>
<tr>
<th>I</th>
<td>100</td>
</tr>
<tr>
<th>just</th>
<td>101</td>
</tr>
<tr>
<th>my</th>
<td>110</td>
</tr>
<tr>
<th>not</th>
<td>111</td>
</tr>
</table>
<p>
	The string above can now be interpreted as:
</p>
<p>
	<em>I am not just my body.</em>
</p>
<p>
	Reading this sentence has quite a different effect. But from the point of view of the programming of a chatbot, they are exactly equivalent.
</p>
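<p>
	The substitution can be carried out mechanically. As a minimal sketch, the few lines of Python below decode the bit string using the table as a pure lookup, with no notion of what any word refers to:
</p>
```python
# Decode the 3-bit codes using the article's word table.
# The mapping is pure symbol substitution: the program "understands" nothing.
table = {
    "000": " ",   # [blank]
    "001": ".",   # [period]
    "010": "am",
    "011": "body",
    "100": "I",
    "101": "just",
    "110": "my",
    "111": "not",
}

bits = "100 000 010 000 111 000 101 000 110 000 011 001"
decoded = "".join(table[code] for code in bits.split())
print(decoded)  # I am not just my body.
```
<p>
	Inverting the table would “encode” the sentence just as mindlessly; either direction is a table lookup, which is the point of the equivalence.
</p>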
<p>
	The training of a chatbot doesn’t even attempt to address what any of those words <em>mean</em>—that is, their referents in the real world. Back when AIs were programmed with some attempt to incorporate how sentences were constructed and interpreted, this was known as the <em>syntactic bubble</em>. As an example, let’s try to understand what “body” means solely by following Merriam-Webster. (We’ll ignore the fact that, whenever we look a word up, we have to determine the appropriate sense to choose the appropriate definition, which is sometimes unclear.) Our paper chase begins like this:
</p>
<ul>
<li>
		<strong>Body</strong>: the organized physical substance [of an animal or plant either living or dead].
	</li>
<li>
		<strong>Organized</strong>: having a formal organization to coordinate and carry out activities.
	</li>
<li>
		<strong>Formal</strong>: belonging to or constituting the form or essence of a thing; relating to or involving the outward form, structure, relationships, or arrangement of elements rather than content.
	</li>
<li>
		<strong>Organization</strong>: the condition or manner of being organized.
	</li>
</ul>
<p>
	Well, that didn’t take long, did it? We’re caught in a loop from which we can never escape. (This is just the first loop encountered in chasing down “body.” Another loop goes from “physical” to “material” and back again. Nor was “body” chosen because it was expected to be particularly difficult; you can conduct the same exercise with any word at all.)
</p>
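<p>
	The circularity can also be demonstrated mechanically. The sketch below uses a condensed, hypothetical mini-dictionary (paraphrasing the Merriam-Webster chain quoted above) and follows the first defined word in each definition until a word repeats:
</p>
```python
# A toy "paper chase": follow definitions until we revisit a word.
# The mini-dictionary is a condensed paraphrase for illustration only.
definitions = {
    "body": "the organized physical substance of an animal or plant",
    "organized": "having a formal organization to coordinate activities",
    "organization": "the condition or manner of being organized",
}

def chase(word):
    seen = []
    current = word
    while current not in seen:
        seen.append(current)
        defn = definitions.get(current, "")
        # follow the first word of the definition that is itself defined
        nxt = next((w for w in defn.split() if w in definitions), None)
        if nxt is None:
            return seen, None   # chain petered out without looping
        current = nxt
    return seen, current        # loop: we already looked this word up

path, loop = chase("body")
print(path, loop)  # ['body', 'organized', 'organization'] organized
```
<p>
	A real dictionary offers many senses per word, so the chase branches, but every branch eventually closes a loop: definitions can only ever point at other words, never at the world.
</p>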
<p>
	Nowadays, this problem might be dubbed the <em>chatbot bubble</em>. Chatbots no more understand the prompts to which they respond and the outputs they produce than the printer that disgorges your store receipt is genuinely appreciative when it thanks you for your purchase.
</p>
<p>
	Why do we find chatbots so convincing that some people assert that they have already achieved sentience and others actually believe that they share a love with their automated interlocutors?
</p>
<p>
	Well, we’re easily fooled. Have you ever answered a phone call and, thinking you were listening to a real person, tried to reply to a recording? I know I have.
</p>
<p>
	Almost immediately after birth, people normally develop what’s known, awkwardly and unfortunately, as <em>theory of mind</em>. (Hereinafter, I’ll refer to this instead as a <em>mind model</em>.) This is the irresistible sense or intuition that other people (and most animals) experience, well, <em>experience</em>, just as we ourselves do—thoughts and feelings, pleasure and pain, hunger and thirst, sensation and perception. We make this inferential leap long before we have any intellectual or philosophical justification for it (or the chance to dispute it—as do eliminative materialists).
</p>
<p>
	Since the invention of written language, people have made the same inference on much narrower grounds—we infer sentience from sentences alone. (Even if we don’t believe that the cash register itself is sentient, we know that someone once wrote the words “THANK YOU” that the cash register mindlessly spits out on the slip of paper you get.) Very recently, we’ve been engaging in email and text message dialogs with people we’ve never met or even seen, and we inevitably infer that our conversations are also with actual people. Now, when we engage in a dialogue with a chatbot and it replies with uncanny plausibility, we irresistibly make the same inference.
</p>
<p>
	This effect is not entirely accidental. ChatGPT, for example, was <em>designed</em> to reply in the first person <em>as if it had both consciousness and intentionality</em>. For example:
</p>
<p class="Attention">
	<em>I</em> am a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. <em>I</em> am trained on a massive amount of text data, and <em>I</em> am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, <em>I</em> can provide summaries of factual topics or create stories.
</p>
<p>
	Let’s see how much less compelling it becomes in the third person:
</p>
<p class="Attention">
	ChatGPT is a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. ChatGPT is trained on a massive amount of text data, and is able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, ChatGPT can provide summaries of factual topics or create stories.
</p>
<p>
	Now, this reads as not coming from ChatGPT itself but from its designers—who of course were its real source. But the designers of ChatGPT have, even in their canned disclaimers, <em>themselves</em> attributed to ChatGPT awareness and intentionality, and invite us to do the same:
</p>
<div class="Attention">
<p>
		…<em>I am always learning</em>. <em>I try my best</em> to provide accurate and informative answers to your questions, but I may sometimes make mistakes. If you ever notice that I have provided a false or misleading answer, please let me <em>know</em> and <em>I will do my best</em> to correct it.
	</p>
<p>
		<em>I am aware</em> [of the phenomenon of chatbot hallucination], and <em>I am working to improve my ability</em> to detect and avoid it.
	</p>
</div>
<p>
	The intentionality attributed to ChatGPT is, in short, intentional:
</p>
<p class="Attention">
	My use of language, such as saying “I apologize,” is part of the programmed response to provide <em>a more human-like interaction</em> and to acknowledge and address user concerns.
</p>
<p style="text-align: center; color: #bbb;">
	• • • • • •
</p>
<p>
	When we watch a magician (or, as some performers more properly refer to themselves, an <em>illusionist</em>) perform, we don’t believe that we’re actually seeing real magic. In fact, the delight and astonishment comes from the paradoxical incongruity of what we have just seen with what we know of how the world works. (When an illusionist demonstrates and explains how an effect is achieved, the usual response is not a feeling of relief but of disappointment.) No one believes that, as illusions become increasingly elaborate and convincing, we’re getting any closer to achieving actual magic. Why, then, would we believe that as chatbots become more fluent and convincing we’re getting any closer to actual intelligence?
</p>
<p style="text-align: center; color: #bbb;">
	• • • • • •
</p>
<p>
	A recurrent claim is that, as chatbots incorporate larger and larger neural nets with deeper and deeper layers, digest greater and greater amounts of text, and receive more and more feedback from users and others, they will become somehow “better.” To the extent that “better” means more convincing, improvements are likely to be imperceptible—since their output is already convincing enough, thank you very much. But little or no attempt is actually being made to verify the factuality of the source material or what is emitted. ChatGPT contradicts itself about this (notice the use of the passive voice to avoid the assumption of responsibility):
</p>
<p class="Attention">
	…[E]fforts have been made to ensure the accuracy of the information…provide[d.] …[T]he selection of books, articles, and websites is typically based on a combination of factors, including considerations of reliability, diversity, and relevance. OpenAI aims to include sources that are widely recognized as reputable and credible within their respective fields.
</p>
<p>
	But:
</p>
<p class="Attention">
	…[T]he training process typically involves using as much text data as possible rather than a manual selection process.
</p>
<p style="text-align: center; color: #bbb;">
	• • • • • •
</p>
<p>
	Whatever intelligence is displayed by a chatbot is the intelligence embodied in the source texts that it has ingested. Imagine a chatbot entirely based on Wikipedia. As a recent article makes clear,<sup><a href="#note01">1</a></sup> Wikipedia is ChatGPT’s largest single source, and the relationship between Wikipedia and chatbots such as ChatGPT is more intimate and complicated than perhaps has been recognized. Wikipedia provides an application programming interface (API), allowing programmers to retrieve a particular entry <em>automatically</em> for digestion and incorporation into a chatbot’s knowledge base, or for on-the-fly summarization or paraphrase.
</p>
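To make the kind of programmatic access described above concrete, here is a minimal sketch (not from the article) that builds a request against Wikimedia’s public REST summary endpoint; the helper names are my own, and the endpoint shape follows current Wikimedia documentation:

```python
import json
import urllib.request

# Wikimedia's public REST API exposes a one-paragraph summary of any entry.
WIKI_SUMMARY = "https://en.wikipedia.org/api/rest_v1/page/summary/{title}"

def summary_url(title: str) -> str:
    # Spaces in article titles become underscores in the URL.
    return WIKI_SUMMARY.format(title=title.replace(" ", "_"))

def fetch_summary(title: str) -> str:
    # Fetch the entry and return its plain-text extract.
    req = urllib.request.Request(
        summary_url(title),
        headers={"User-Agent": "summary-demo/0.1"},  # Wikimedia asks for a UA
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["extract"]
```

Calling `fetch_summary("Alan Turing")` would return the encyclopedia’s own summary paragraph, ready for ingestion by whatever sits downstream.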
<p>
	In this case, why would anyone attribute intelligence to such a chatbot, rather than simply to the authors of the encyclopedia itself? And if a chatbot digested two such encyclopedias? Three? Three thousand? At what point should the attribution of intelligence shift from the authors of the sources to the chatbot itself?
</p>
<p>
	Another major source is <em>The New York Times</em> (as well as other newspapers and magazines). In fact, the <em>Times</em> recently sued OpenAI (the creator of ChatGPT) and Microsoft (the creator of Bing and Bing Chat) for, in effect, plagiarism.<sup><a href="#note02">2</a></sup>
</p>
<p>
	ChatGPT, however, argues both sides of the question of whether it displays genuine intelligence with equal conviction. The following are excerpts from much longer and more detailed text responses to prompts.
</p>
<p class="Attention">
	ChatGPT displays artificial intelligence (AI) because it is an advanced language model that can understand and generate human-like text responses.
</p>
<p>
	But:
</p>
<p class="Attention">
	…[W]hile ChatGPT is an impressive language model capable of generating human-like text responses, it falls short of displaying true artificial intelligence due to its lack of understanding, limited domain, contextual limitations, inability to learn, and potential biases and errors.
</p>
<p>
	ChatGPT will agreeably argue both sides of almost any dispute you choose. Try it yourself with your own favorite controversy.
</p>
<p style="text-align: center; color: #bbb;">
	• • • • • •
</p>
<p>
	In the natural world, we usually recognize intelligence only in <em>animals with a brain</em> and, of course, the products of such intelligence, such as in the good design of a tool. The lives of animals (hereinafter referring to animals with a brain) display two fundamental attributes:
</p>
<ul>
<li>
		<strong>Embodiment</strong>: animals have bodies equipped with senses to perceive and mechanisms to respond and behave. Their engagement with the world is consistent with the body they actually have. An octopus, for example, uses all eight of its tentacles, but a chimpanzee doesn’t try to make use of the four limbs it doesn’t have.
	</li>
<li>
		<strong>Embeddedness</strong>: animals are alive in the world, with which they engage and interact.
	</li>
</ul>
<p>
	In addition to a mind model, animals behave as if they have a model of the world, based on the following basic attributes of reality:
</p>
<ul>
<li>
			<strong>Tridimensionality</strong>: Reality extends in three spatial dimensions.
		</li>
<li>
			<strong>Extensionality</strong>: Objects and animals have extent in each of the three spatial dimensions (even if one or even two dimensions may be minimal, as in the case of a sheet of paper or a hair).
		</li>
<li>
			<strong>Time</strong>: Reality passes continuously in time from the past to the present and into the future.
		</li>
<li>
			<strong>Continuity</strong>: Over time, and to a greater or lesser extent, things and processes persist from one moment to the next, with gradual change the norm and abrupt change the exception.
		</li>
<li>
			<strong>Regularity and Periodicity</strong>: A great many natural phenomena exhibit regular and often periodic change, such as the day/night cycle; waking and sleeping; needs such as hunger and thirst alternating with satiety following satisfaction; the lunar cycle; the seasons; birth, growth, maturity, and death; and the like.
		</li>
<li>
			<strong>Uniformity</strong>: To a greater or lesser extent, things and processes are more or less the same everywhere.
		</li>
<li>
			<strong>Consistency</strong>: To a greater or lesser extent, things and processes do not exhibit paradoxical properties.
		</li>
<li>
			<strong>Causality</strong>: Events in the present are contingent upon and caused by events in the past; events in the future will be contingent upon and caused by previous events.
		</li>
</ul>
<p>
	Other basic attributes of an animal’s reality include, among others, the following:
</p>
<ul>
<li>
		<strong>Affordance</strong>: Everything in the world, natural and artificial, has properties and features that can be used to accomplish one or more goals. For example, a door handle can be used to open a door, and it can also be used to hang your hat.
	</li>
<li>
		<strong>Choice</strong>: To a greater or lesser degree, an animal, having <em>agency</em>, can choose its behavior, based on its own goals and its knowledge of its situation.
	</li>
</ul>
<p>
	Further, animals become familiar with certain ubiquitous characteristics of their environments, including such features as gravity, light, sound, solidity, liquidity, gaseousness, the flow of liquids (rivers, waves, tides, rain) and gases (winds), and so on.
</p>
<p>
	All of these, and much more, are reflected in a <em>world model</em> that underlies an animal’s implicit understanding of reality. The <em>mind model</em> is a fundamental part of the world model. This world model is much deeper and more wide-ranging than any enumerable collection of facts could possibly be. Our world model, and our ability to engage with the world based upon it, arises almost immediately after birth, and may even have been built into the brain by evolution, without the need for any experience at all to trigger it into being. In humans, this implicit world model is almost always supplemented by language, explicit education, and training. To date, all attempts to incorporate a world model into an AI have failed.
</p>
<p>
	To give an example of implicit knowledge: you intuitively know that a person can’t touch their left elbow with their left hand. What does ChatGPT-3 think?
</p>
<p class="Attention">
	Yes, a person can touch their left elbow with their left hand by crossing their left arm over their chest and reaching their left hand towards their left elbow.
</p>
<p>
	ChatGPT-3.5 does a good deal better:
</p>
<p class="Attention">
	No, it is physically impossible for a person to touch their left elbow with their left hand. The range of motion of the human arm does not allow for the hand to reach the same-side elbow without external assistance or manipulation.
</p>
<p style="text-align: center; color: #bbb;">
	• • • • • •
</p>
<p>
	If a chatbot—necessarily <em>dis</em>embodied and <em>un</em>embedded—can’t achieve actual intelligence, what about a robot? At least some AI researchers believe that actual computerized electromechanical robots can supply the two fundamental attributes of embodiment and embeddedness. How’s that been going?
</p>
<p>
	Boston Dynamics appears to have the most advanced robots. In one demonstration,<sup><a href="#note03">3</a></sup> the robot runs and jumps when it can simply walk, seeming to be more concerned with demonstrating its parkour (the sport of traversing obstacles in urban environments by running, climbing, and jumping athletically) skills than anything else.
</p>
<p>
	Perhaps the most conspicuous deficiency in such robots is their silence. No attempt has been made to incorporate a chatbot into one—and it’s entirely unclear how that could be done. In the demonstration of the Boston Dynamics robot cited earlier, a person uses an app to input instructions to the robot, and the app screen is pointedly left unshown.
</p>
<p>
	By its very nature, a chatbot—being unembodied and unembedded—has a timeless view from nowhere and everywhere, whereas a robot is necessarily in a particular place and a particular position at a particular moment in time.
</p>
<p style="text-align: center; color: #bbb;">
	• • • • • •
</p>
<p>
	In our intuitive apprehension of intelligence, we often depict it as a kind of spotlight (or, lately, laser) that can, with equal facility, illuminate and penetrate any subject to which attention is directed. This is much like the historical misconception of vision as a force originating in the eyes, like Superman’s X-ray vision. But intelligence is no such thing. We’re better at understanding some things than others; our intelligence works better on some problems than others. My favorite example of this is the Monty Hall problem.<sup><a href="#note04">4</a></sup> But this is hardly the only example. Popular websites list the top 10 fallacies, or the top 16 or 24 or 26. Wikipedia’s list contains over fifty unenumerated fallacies in several different categories.
</p>
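The Monty Hall problem is a nice case where our intuition fails but brute-force checking does not. A short simulation (my own sketch, not from the article) shows why switching doors wins about two-thirds of the time:

```python
import random

def monty_hall(trials: int, switch: bool, seed: int = 0) -> float:
    # Return the fraction of games won over `trials` simulated games.
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)    # door hiding the car
        pick = rng.randrange(3)   # contestant's first pick
        # The host opens a door that is neither the pick nor the car,
        # so switching wins exactly when the first pick was wrong.
        if switch:
            wins += (pick != car)
        else:
            wins += (pick == car)
    return wins / trials

# Switching wins ~2/3 of the time; staying wins only ~1/3.
```

Running `monty_hall(100000, switch=True)` versus `switch=False` makes the counterintuitive answer hard to argue with.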
<p style="text-align: center; color: #bbb;">
	• • • • • •
</p>
<p>
	The concept of artificial intelligence was originally divided into two categories, weak and strong. The notion of strong artificial intelligence was one in which an AI would not only <em>imitate</em> a person but actually <em>have</em> conscious experience as does a person. Of course, eliminative materialists don’t believe that people have conscious experience, either, so this has never been a problem for them. Except for those who believe in the possibility of uploading a mind to a computer—an unlikely prospect—most AI researchers have shifted the distinction from weak/strong to narrow/general.
</p>
<p>
	The concept of narrow intelligence is, well, self-contradictory. As we’ve already seen, the entire idea of intelligence is that it’s a <em>general</em> (even if not unlimited) ability to deal flexibly and creatively with novel, challenging situations. The closest natural equivalent to narrow artificial intelligence is perhaps people who used to be called <em>idiot savants</em>—now more politely just <em>savants</em>—although of course savants have enough broad natural intelligence to navigate their own worlds reasonably well. More recently, the term <em>machine learning</em> is often used to render moot any claim of any kind of intelligence at all.
</p>
<p>
	The distinction now is between good old-fashioned <em>programming</em>—which involves encoding explicit instructions for automating a task—and a computer program that, one way or another, learns to do whatever it’s supposed to. Before chatbots, probably the best-known example of machine learning was computer chess.
</p>
<p>
	Because chess has been traditionally seen as a quintessential example of human intelligence, it’s likely that innumerable early computer programmers tried their hand at writing a program that could play the game. Early computer chess programs, written as explicit instructions, were quite poor. A program (IBM’s Deep Blue, relying on massive brute-force search rather than machine learning) eventually defeated the reigning world champion, and nowadays chess engines—many now incorporating machine-learned evaluation—routinely defeat international grandmasters. Despite this, perhaps surprisingly, chess is now more popular than ever. One program, Stockfish, considered the best in the world, is even used to rate the performance of players in real time.
</p>
<p>
	Make no mistake, however: the rules of the game still had to be explicitly programmed into Stockfish and every other chess program. Without further programming and training, not even Stockfish could play any of the hundreds of chess variations that have been devised.
</p>
<p>
	Ironically, although ChatGPT-3 claims to be able to play chess, it can’t, as demonstrated in this simple example (I’ve edited the moves to conform to standard notation).
</p>
<p>
	<strong>Me</strong>: Let’s play chess.
</p>
<p class="Attention">
	<strong>ChatGPT-3</strong>: Sure, I’d be happy to play a game of chess with you! Shall we start?
</p>
<p>
	<strong>Me, playing White</strong>: e2 e4
</p>
<p class="Attention">
	<strong>ChatGPT-3, playing Black</strong>: e7 e5
</p>
<p>
	<strong>White</strong>: N g1 h3
</p>
<p class="Attention">
	<strong>Black</strong>: N b8 c6
</p>
<p>
	<strong>White</strong>: N b1 a3
</p>
<p class="Attention">
	<strong>Black</strong>: N g8 f6
</p>
<p>
	<strong>White</strong>: f2 f3
</p>
<p class="Attention">
	<strong>Black</strong>: d7 d6
</p>
<p>
	<strong>White</strong>: d2 d3
</p>
<p class="Attention">
	<strong>Black</strong>: e7 e6
</p>
<p>
	Although, through its first four moves, ChatGPT-3 appears to know how to play, on its fifth move it tries to move a pawn from e7, a square that pawn vacated on Black’s very first move, to e6!
</p>
<p>
	With ChatGPT-3.5, the response is equally absurd. After I play e4, it responds:
</p>
<p class="Attention">
	Now, it’s my move. I’ll respond with 1…c5, the Sicilian Defense:
</p>
<p>
	But on the board it shows itself having made <em>two</em> moves, not just one (both e5 and c5)!
</p>
<p>
	Other published examples of ChatGPT playing chess include it making impossible and illegal moves such as taking its own pieces.
</p>
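The bookkeeping that ChatGPT fails at is trivial for an explicitly programmed checker. Here is a minimal sketch of my own (tracking only square occupancy, not the full rules of chess) that catches Black’s fifth move from the game above:

```python
def replay(moves):
    # Track which squares are occupied; reject any move whose origin
    # square is empty. (Occupancy only -- not a full legality checker.)
    files = "abcdefgh"
    occupied = {f + rank for f in files for rank in "1278"}  # starting position
    for src, dst in moves:
        if src not in occupied:
            return f"illegal: no piece on {src}"
        occupied.discard(src)
        occupied.add(dst)
    return "ok"

# Black's moves from the game above: the fifth move reuses the vacated e7.
black = [("e7", "e5"), ("b8", "c6"), ("g8", "f6"), ("d7", "d6"), ("e7", "e6")]
print(replay(black))  # -> illegal: no piece on e7
```

A dozen lines of explicit programming suffice for what billions of parameters cannot reliably keep track of.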
<p>
	Although ChatGPT can <em>apparently</em> converse plausibly on any topic, including the rules of chess, it is as narrow a machine learning program as any chess program. Its words are no more meaningful than N b8 c6, and it is no more a step toward artificial general intelligence than Stockfish is a step toward playing Tic-Tac-Toe.
</p>
<p style="text-align: center; color: #bbb;">
	• • • • • •
</p>
<p>
	A major problem with machine learning programs is their opacity—they are completely and fundamentally <em>black boxes</em>. As ChatGPT helpfully explains:
</p>
<p class="Attention">
	A black box typically refers to a device or system that is opaque or mysterious in its inner workings or processes.
</p>
<p>
	The term “black box” in this sense dates from World War II, referring to secret electronic navigational devices literally enclosed in non-reflective black housings.<sup><a href="#note05">5</a></sup>
</p>
<p>
	As explained by Melanie Mitchell in her book <em>Artificial Intelligence: A Guide for Thinking Humans</em>, machine learning programs <em>aren’t learning what we think they’re learning</em>. Mitchell gives several compelling examples, including a school bus identified by a machine learning program as an ostrich. The same program has also identified completely random television “snow” as a robin, a cheetah, an armadillo, a panda, a centipede, a peacock, a jackfruit, and a bubble, and abstract geometric patterns as a penguin, a starfish, a baseball, a guitar, a railroad car, and a remote control.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.1/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n1_lg.jpg" alt="Skeptic 29.1 (cover)" width="200" height="260" class="boxShadow"></a>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.1<br />
		<a href="https://www.skeptic.com/productlink/magv29n1">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/259456#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	Unlike traditional computer programs, which are opaque enough even when heavily documented and commented by their programmers, large language models consist almost entirely of parameters (that is, numbers). For example, GPT-3 (a version of ChatGPT) contains 175 billion parameters, was trained on a corpus of about 500 billion tokens (words and parts of words, such as the plural S), and uses a context window of 2,048 tokens when predicting the next word. Imagine trying to manually trace and understand the reason a chatbot produces a particular response to a particular prompt! Given this overwhelming complexity and such a daunting task, chatbots can only be regarded as oracles.
</p>
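Back-of-the-envelope arithmetic makes the scale vivid. The parameter and token counts below are from the text; the two-bytes-per-parameter figure is my own assumption (16-bit weights):

```python
params = 175e9          # GPT-3 parameters (from the text)
tokens = 500e9          # training-corpus tokens (from the text)
bytes_per_param = 2     # assuming 16-bit (2-byte) weights

weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: {weights_gb:.0f} GB")           # 350 GB
print(f"tokens per parameter: {tokens / params:.1f}")  # ~2.9
```

Under these assumptions, merely storing the weights takes hundreds of gigabytes, and no individual number among the 175 billion means anything on its own.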
<p class="Attention">
	[The term] oracle refers to a system or source of information that is relied upon for its accuracy or expertise, even though the inner workings or processes are not fully comprehensible or transparent. It is essentially a trusted authority or source of knowledge that provides guidance or answers to questions, without revealing its underlying methods or reasoning. This concept is often used metaphorically to describe situations where information or advice is accepted without full understanding or scrutiny.
</p>
<p>
	Thank you, ChatGPT.
</p>
<p>
	As a step toward artificial general intelligence, chatbots are a blind alley leading to a dead end. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		As a skeptical observer of the pursuit of artificial intelligence and other subjects, <strong>Peter Kassan</strong> has been a regular contributor to <em>Skeptic</em> since 2006. Over his long career in the software products industry, he has held positions at the technical, managerial, and executive levels. Kassan is the author of several software patents. He can be reached at <a href="mailto:kassan.peter@gmail.com">kassan.peter@gmail.com</a>.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/3RYnkzh">https://bit.ly/3RYnkzh</a>
		</li>
<li id="note02">
			<a href="https://bit.ly/3GZ3z4B">https://bit.ly/3GZ3z4B</a>
		</li>
<li id="note03">
			<a href="https://bit.ly/3tFxuM3">https://bit.ly/3tFxuM3</a>
		</li>
<li id="note04">
			<a href="https://bit.ly/48qNm3W">https://bit.ly/48qNm3W</a>
		</li>
<li id="note05">
			<a href="https://bit.ly/48eCyWO">https://bit.ly/48eCyWO</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI Will Not Replace Artists. It Will Devalue Them.</title>
		<link>https://www.skeptic.com/reading_room/artificial-intelligence-will-not-replace-but-devalue-artists/</link>
		
		<dc:creator><![CDATA[Kate Brunotts]]></dc:creator>
		<pubDate>Fri, 05 Jul 2024 07:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=45002</guid>

					<description><![CDATA[Brooklyn-based avant-pop music producer, Kate Brunotts, discusses the potential impact of artificial intelligence on artists and the need for stronger protections for their intellectual property rights.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">I</span>n October of 2023, U.S. Senator Chris Coons (D), Chair of the Senate Judiciary Subcommittee on Intellectual Property, announced a draft bill entitled the No Fakes Act, or the “Nurture Originals, Foster Art, and Keep Entertainment Safe Act.”<sup><a href="#note01">1</a></sup> This bill would enable artists to sue those who use their likeness (presumably, a deep fake of their voice) without permission.
</p>
<p>
	His bill is largely a reaction to “Ghostwriter,” an anonymous artist who created the song <em>Heart on My Sleeve</em>. It utilized AI deepfakes of the vocals of two of the most popular artists in the world—The Weeknd and Drake—without their consent. It also earned a whopping 1.4 million U.S. streams before its forced removal by Universal Music Group.<sup><a href="#note02">2</a></sup>
</p>
<p>
	Ghostwriter is the poster child for the prediction that AI will create another stream of licensing income<sup><a href="#note03">3</a></sup> and thus revolutionize the way artists create. I hope that’s the case. But as the industry continues to extract value from music and other media with little oversight, it’s unclear who exactly is reaping the benefits.
</p>
<h4>
	Move Fast and Break Things<br />
</h4>
<p>
	On most days, I’m making music or writing about the mechanics of doing so. It goes without saying that I have an inherent bias against <em>the man</em>, or anything I perceive as capitalizing on the arts for the sole sake of profit. When ChatGPT burst into my world of self-referential music and writing, I couldn’t help but feel trepidatious.<span id="more-45002"></span>
</p>
<p>
	To be clear, I’m not against utilizing certain forms of AI in my own music production process. Whether it’s using Ableton’s Magenta Studio<sup><a href="#note04">4</a></sup> to generate drum pattern ideas or playing with AI-assisted toggles in my favorite plugins, I’m no stranger to taking advantage of machine learning to feed me inklings of a great idea.
</p>
<p>
	My beatmaking workflow changes from one song to the next, but for illustration purposes, I might start by writing a song on piano or guitar, and then record that into my music-making software, Ableton Live. I can then start to build the beat (drums, synthesizers, and other instrumentation) around what I’ve recorded. I might open a plugin where I can input the key and the tempo of my composition, and the plugin might suggest some samples that I can add to my song. In the case of a plugin such as Magenta Studio, with the click of a button, the AI-assisted suite can generate drum patterns, melodies, and more musical elements based on what I’ve already created.
</p>
<p>
	In this way, I occasionally use AI technology to act as a writing partner, though it’s important to me that I resample the initial idea provided by the plugin. This could mean shifting the pitch of the sample, chopping and splicing it to make something new, or adding effects to place it within the artistic world of my song. I try to use AI as a tool to fuel and assist my creative processes, rather than as a substitute or replacement for them.
</p>
<p class="pullquote">
	Without the presence of strong guardrails to protect their intellectual property rights, the working artist’s stock in trade isn’t just being depleted; it runs significant risk of being stolen.
</p>
<p>
	Part of the problem with “AI,” or at least the public discourse about it, is that the term can mean so many different things. Grammarly, an app that can proofread text as you write it, could be packaged as AI—but so can the likes of Midjourney, an image generator accused of lifting data from artists without consent or compensation.<sup><a href="#note05">5</a></sup>
</p>
<p>
	The ambiguity behind those emotionally charged two letters all too easily turns what should be balanced discussions into trash talk. Yet it’s a discussion we must have.
</p>
<p>
	AI has the potential to be much more invasive than simply co-opting the working artist’s dwindling paycheck. Tom Hanks is one of the many whose voice, reproduced by AI, was used in an advertisement without his consent or compensation.<sup><a href="#note06">6</a></sup> And note that we’re only hearing about it because Hanks has the name recognition to make headlines and the finances needed to hire any necessary legal expertise.
</p>
<p>
	There’s a reason why, in the wake of AI, union contract negotiations now include stipulations that require the employment of a certain number of writers or background actors, and outline profitable licensing agreements for writers.<sup><a href="#note07">7</a></sup> Otherwise, extra cash flow would inevitably trickle back to the companies that host content, instead of the creators who built the stories that support the tech platforms in the first place.
</p>
<p>
	Without the presence of strong guardrails to protect their intellectual property rights, the working artist’s stock in trade isn’t just being depleted; it runs significant risk of being stolen. Derivative works may deserve some level of monetization, but it stands to reason that some of that capital, or at the very least, good-faith credit, should find its way back to the original creative artist.
</p>
<h4>
	Which Art Forms Are Most Vulnerable?<br />
</h4>
<p>
	It would be hypocritical for me to argue against all forms of artificial intelligence. I use Photoshop’s Content Aware Fill to cut out pesky distractions in photos as well as computer tools to repurpose long-form content into short clips for social media.
</p>
<p>
	I personally know many artists who have even experimented with generative AI engines to extend their visions for music videos and other visual art they would not be able to afford to produce otherwise. Many of these “AI assisted tools” have been around for a while, but only recently have been repackaged to line up with the latest venture capital buzzwords.
</p>
<p>
	However, I am staunchly opposed to creating new solutions where no problems exist. If the motivation to create an AI tool is <em>purely</em> profit (that is, swapping paid background actors for digital renderings), my opinion on the matter quickly turns negative. When it comes to art and monetization, the AI squeeze is sending seismic shocks through an already cracked system where artists get an increasingly shorter end of the stick in an oversaturated, undervalued market.
</p>
<p>
	Not all art forms will be affected equally. In cases where art is required to be passable and not personal, AI is likely to take over the field entirely. Displacement by AI is already a reality for those who make corporate commercial music as opposed to artists with a loyal fanbase. Many of those artists counted on such commercial gigs to pay the rent, especially in their early, struggling days.
</p>
<p>
	It’s not all doom and gloom, however—AI might also help artists focus on the part of their creative process they love most. Advances in technology, seen in the brightest light, may allow creators to take time and energy formerly consumed by the tedious, repetitive parts of the process and reallocate them to the more “creatively intensive” tasks, which is what art, music, and literature are all about—or should be.
</p>
<p>
	These positive aspects of AI tools will undoubtedly enable some creatives to be more productive and build more while staying within the confines of their budget. Nonetheless, a broad-based reevaluation of artists and compensation is long overdue. The advent of new technology provides a perfect opportunity to redraw these boundaries, but at least so far, the companies that build these models have shown little interest in helping to do so.
</p>
<h4>
	Battling AI—On the Picket Line and in the Courtroom<br />
</h4>
<p>
	After one of the entertainment industry’s longest and most paralyzing strikes, Hollywood actors and screenwriters obtained an agreement that defines when, where, and how AI can and cannot be used. The agreement allows screenwriters to use AI tools in their original writing but prevents the industry from using AI to replace them. It prohibits the studios from using AI to produce scripts—which, of course, it can do only after being repeatedly trained on existing scripts—and then requiring real live human writers to complete the work at lower fees, royalties, and screen credit (i.e., likely future earnings) than for original writing.
</p>
<p>
	Simon Johnson, Professor of Entrepreneurship at MIT, who specializes in studying the economic effects of technological transformation, called the agreement a “fantastic win for writers” that he’s “hoping will be a model for <em>the rest of the economy</em>.” As one of those creative artists out in the rest of the economy, I have to hope he’s right. I also hope he’s right in predicting that it will result in “better quality work and a stronger industry for longer.”<sup><a href="#note08">8</a></sup>
</p>
<p>
	And it isn’t only labor that’s concerned about being cheated out of just compensation by AI—so are affected management and capital. <em>The New York Times</em> has sued OpenAI and Microsoft for copyright infringement over the unauthorized use of its published material to “train” artificial intelligence to first copy it and then replace it. Specifically, the <em>Times</em> lawsuit contends that millions of its articles of “uniquely valuable work” were used to train automated chatbots that now compete as a news source against it. And they’re talking in the billions—with a <em>B</em>! And while we may not be valued in the billions, every creative writer, actor, artist, musician, and performer believes that our work is “uniquely valuable” or we wouldn’t be making the sacrifices necessary to produce in the first place.<sup><a href="#note09">9</a></sup>
</p>
<p>
	Together, Sen. Coons’ bill, the Screen Writers agreement, and the <em>Times</em> lawsuit show that our society is responding to the new challenges thrown up by AI, though it will take time and effort to work them out and get it right. And again, it’s those with the money, the clout, and the name recognition who are able to force the issue for the rest of us.
</p>
<h4>
	The Myth of a Creative Middle Class: Why Many Artists Feel Threatened by AI<br />
</h4>
<p>
	Art thrives on exclusivity. Your work is effectively assigned value based on the amount of bona fide attention you can garner. For a discipline focused on the wide distribution of culture, that process can be oddly elitist.
</p>
<p>
	While the world’s broader economy reflects a staggering wealth gap with the richest one percent owning nearly twice as much as the other 99 percent,<sup><a href="#note10">10</a></sup> the art world may be even worse.<sup><a href="#note11">11</a></sup> Admittedly, data is scarce for the latter, in part because art is, by design, opaque.
</p>
<p>
	You can’t attribute a concrete value to a piece of art. And that is why art is a wildly risky but a <em>potentially</em> highly rewarding investment. This is true for the lucky few—most artists across all disciplines make pennies in comparison to their poster-child contemporaries. The reality of the starving artist is alive and well (though clinically and chronically malnourished).
</p>
<p>
	Even if we broaden the criteria to include Internet creators and influencers within the artist umbrella, we can see a growing divide between profitable, sustainable artists and those who do not make enough to thrive off their craft alone. In 2020, the top one percent of creators on Gumroad, an ecommerce platform where creators can sell digital products, courses, and more, garnered about 60 percent of payouts.<sup><a href="#note12">12</a></sup>
</p>
<p>
	While I’m most keen to discuss music, since I can speak most accurately to my own experiences, the hard truth is that there has been a massive devaluation across all creative disciplines, which I fear will only be exacerbated by a rising sea of AI content. A case in point: Elena Velez, who won the CFDA Emerging Designer of the Year award<sup><a href="#note13">13</a></sup> designing for celebrity performers the likes of Solange Knowles, Julia Fox, and Rosalia, recently had to take a loan out of her mother’s retirement account in order to keep the lights on.<sup><a href="#note14">14</a></sup>
</p>
<p class="pullquote">
	When it comes to art and monetization, the AI squeeze is sending seismic shocks through an already cracked system where artists get an increasingly shorter end of the stick in an oversaturated, undervalued market.
</p>
<p>
	Craftsmen like Velez simply cannot keep up with fast-fashion giants such as SHEIN, whose “AI Technology” can create new clothes in as little as three days.<sup><a href="#note15">15</a></sup> Industrialization of a market sector almost always leads to oversaturation and devaluation of craftsmanship. Markets tend to prioritize the cheapest, fastest means of production at all costs, likely because consumers simply don’t know and/or don’t care what goes into creating a quality piece.
</p>
<p>
	Still, no one likes to spare artists much sympathy. After all, who is pining to comfort a group of people who get to “do what they love” day in and day out by choice, while the rest of the workforce labors at a job just to live paycheck to paycheck? The big blunder is to assume that, even at the A-list level, clout automatically equates to a sustainable income.
</p>
<div class="video-container">
	<iframe loading="lazy" width="560" height="315" src="https://www.youtube.com/embed/x9sg55thwFI?si=atkSQGSu6M-iCkKl" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div>
<h4>
	AI Amplifies What We Already Know to Be True<br />
</h4>
<p>
	With machine learning’s inherent coding bias<sup><a href="#note16">16</a></sup> and its tendency to favor those who share the backgrounds of the people already in power, we cannot deny that AI is a black-mirrored reflection of the current state of the world. Technology built without the input of underrepresented groups is unlikely to be a benign steward of art, let alone of the state of the world. Left entirely to its own devices, it won’t be.
</p>
<p>
	Tech as a whole has spent the past two decades siphoning value from the exchange between artist and patron, making it increasingly difficult for the profession, and the craft, to survive. Spotify recently announced its royalty restructure for 2024, which will require each track to earn 1,000 streams within 12 months to <em>start</em> earning royalties.<sup><a href="#note17">17</a></sup> Otherwise, artists won’t earn anything. It has become increasingly clear that small artists aren’t a priority, because the streaming platform’s sense of value is ultimately tied to its shareholders.
</p>
<p>
	It’s hard to get excited about AI “revolutionizing art” or providing artists additional streams of income when the very industry that created it has a systemic pattern of devaluing craft on a massive scale. Artists are the last people to resist innovation, but when the cost of your creation comes at the expense of an already decimated model of compensation, it’s hardly easy to welcome it with open arms.
</p>
<p>
	In spite of all that, we cannot stop artificial intelligence from continuing to grow, nor necessarily should we. However, we’re missing a great opportunity to rethink how we value craft and the creators of our culture. As a musician myself, whenever I speak about increased compensation for artists, I’m met with dismissive remarks that I believe largely come from a lack of experience, and sometimes from willful ignorance.
</p>
<p>
	Most artists neither need nor desire a mansion, flashy clothes, or an assistant who sorts one color of M&#038;Ms out of the pack. In my experience, most of us just want a living wage, with the possibility of a sustainable retirement down the line, just as those in any other profession do. Yet reaching even this reasonable baseline of security is nearly insurmountable for most.
</p>
<p>
	For instance, how much do you think an artist makes off a <em>million</em> streams on Spotify? As of fall 2023, that amount is…$4,000.<sup><a href="#note18">18</a></sup> And that assumes you own 100 percent of your royalty rights, which is extremely unlikely. Even being generous, an artist with a <em>million</em> monthly listeners will earn less than $50,000 a year off that platform.
</p>
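<p>
	The arithmetic here can be sketched in a few lines, assuming the round figures above ($4,000 per million streams, a million monthly streams, full royalty ownership); actual per-stream rates vary by market and deal:
</p>

```python
# Back-of-the-envelope Spotify royalty math using the article's figures.
# These are assumed round numbers (fall 2023), not official rates.
payout_per_million_streams = 4_000                          # dollars
per_stream_rate = payout_per_million_streams / 1_000_000    # ≈ $0.004 per stream

monthly_streams = 1_000_000                                 # a "million monthly streams" artist
monthly_income = monthly_streams * per_stream_rate          # $4,000 per month, gross
annual_income = monthly_income * 12                         # $48,000 per year, before splits and costs

print(f"${per_stream_rate:.3f} per stream -> ${annual_income:,.0f} per year")
```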
<p>
	I’m aware this assessment is fairly rudimentary in that it does not factor in revenue from other streaming platforms, or from merch, brand deals, and performances (though those are less likely to produce large profits,<sup><a href="#note19">19</a></sup> if any, nowadays). But neither does it subtract the costs of recording, royalty splits, travel, publicity, management, and the general expense of being the figurehead of a fickle entertainment brand with no guarantee of longevity. You can forget about healthcare and socking away savings for a rainy day. In the life of an up-and-coming artist, El Niño years are the rule, not the exception.
</p>
<h4>
	Lost in a Sea of Noise<br />
</h4>
<p>
	AI might very well bring us boatloads of art from new creators who would not otherwise have the means to make it, thanks to machine learning simplifying the process. Everyone who wants to make art should do so if they get the chance, even if that process requires assistance from AI. At the same time, it’s hard to ignore the fact that, according to Music Business Worldwide, 120,000 new songs<sup><a href="#note20">20</a></sup> are released on streaming services every day. Access to the tools for making amazing music has never been easier, yet it has also never been more challenging to make a career out of your creations.
</p>
<p>
	Anecdotally, I am not craving even <em>more</em> content as a consumer. Are you? Is anyone? I get fatigued just wading through the many shows on Netflix, trying to find something worthy of the little free time I have. With AI-assisted art ushering in a wider wave of content, I fear I might give up before discovering a priceless new artist engulfed in the overwhelming surge.
</p>
<p>
	While admittedly antiquated, the era of mass radio airplay created a collective sense of adulation, allowing some artists a chance at longevity. Today’s world is far more segmented, with your algorithmic feed as unique as your fingerprint.
</p>
<p>
	Even so, artists lucky enough to earn a spot on the radio when it was the major medium continue to have a seat at the table. Bruno Mars, Eminem, Katy Perry, and Sia remain in Spotify’s top 100 despite not having had a key hit within the past year, in part because of their prominence in a time of more collective culture.
</p>
<p>
	The struggle for artists has changed. It’s no longer about finding the tools and the means of distribution for your creations. It’s now about how to stand out and get noticed amidst a sea of endless content. Art and content have never been more interchangeable, and for craftsmen, this reality couldn’t be more painful.
</p>
<h4>
	Art Will Not Be Replaced, but It Will Be Cheapened<br />
</h4>
<p>
	You cannot take art away from artists. However, you can, perhaps unknowingly, strip compensation from the producers of that craft, and for many, that means depriving them of a necessary part of their livelihood. Artists aren’t upset because they want to serve as the gatekeepers of innovation and expression.
</p>
<p>
	Rather, they are upset because it’s becoming increasingly difficult to survive—let alone thrive—in an environment that does not support your trade with a livable wage. Artificial intelligence’s purported utility in art rests on a flawed core assumption: that bringing AI into one’s process will breed greater efficiency, and that more art equates to more valuable expression as a whole.
</p>
<p>
	Yet the word “art” derives from the Latin <em>ars</em>, meaning skill or craft: the cultivation of human ability. The magic of art often exists because of human error, and only as the end product of a lengthy process. Both are at odds with the constant lust for instant gratification that is, in many ways, the defining characteristic of today’s world.
</p>
<p>
	Art often means a rejection of the efficient. It’s churning the butter instead of flipping on the mixer just for the sake of doing so. Art is applying beauty and meaning beyond the mere asset itself, defying cost-efficient logic, and foregoing obsessive A/B testing.
</p>
<p>
	I can see how one might argue that AI art is a new medium in its own right, but that does not take away from the stinging fact that oversaturation leads to devaluation in an environment that already has a huge problem with compensating creators directly.
</p>
<p>
	Ultimately, the moral dilemma behind artificial intelligence and art may not matter. Since the dawn of the Industrial Revolution, much of the population has opted for innovation at all costs, and that continues to be true regardless of how underpaid craftsmen may feel.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.1/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n1_lg.jpg" alt="Skeptic 29.1 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.1<br />
		<a href="https://www.skeptic.com/productlink/magv29n1">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/259456#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	Artificial intelligence is here, not only to stay but to grow. Just as we define the mental models and the training data sets, we get to choose—with our money, time, and attention—how we attribute value to the artists, and potentially the AI artists, of our generation. As with preferences for minimalism or maximalism, progressive rock or bubblegum pop, the debate ultimately boils down to what we as individuals allow space for in our lives.
</p>
<p>
	Ultimately, the questions AI raises about fair compensation for artists are but part of the broader question of rising income inequality in modern technological societies. As futurist and AI expert Rudy van Belkom states in his Introduction to this issue of <em>Skeptic</em>, while AI can help by performing any necessary complex computations, the final calls on such socially sensitive questions should be left to human wisdom rather than machines, however “intelligent.” <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Kate Brunotts</strong> is a writer and an avant-pop music producer based in Brooklyn. She strives to create accessible sonic dreamscapes that challenge the bounds of music. Her work has been featured on multiple Spotify and Apple editorial playlists, along with BBC Introducing and Earmilk.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/3NHHwCX">https://bit.ly/3NHHwCX</a>
		</li>
<li id="note02">
			<a href="https://bit.ly/3S5Wu8M">https://bit.ly/3S5Wu8M</a>
		</li>
<li id="note03">
			<a href="https://bit.ly/41Mb0Wb">https://bit.ly/41Mb0Wb</a>
		</li>
<li id="note04">
			<a href="https://bit.ly/3RA4blO">https://bit.ly/3RA4blO</a>
		</li>
<li id="note05">
			<a href="https://bit.ly/41Y4yM9">https://bit.ly/41Y4yM9</a>
		</li>
<li id="note06">
			<a href="https://bit.ly/3vim1mb">https://bit.ly/3vim1mb</a>
		</li>
<li id="note07">
			<a href="https://bit.ly/3S2ZbrB">https://bit.ly/3S2ZbrB</a>
		</li>
<li id="note08">
			<a href="https://bit.ly/4aEICcy">https://bit.ly/4aEICcy</a>
		</li>
<li id="note09">
			<a href="https://bit.ly/3GZ3z4B">https://bit.ly/3GZ3z4B</a>
		</li>
<li id="note10">
			<a href="https://bit.ly/3H14DVu">https://bit.ly/3H14DVu</a>
		</li>
<li id="note11">
			<a href="https://bit.ly/47kWS7q">https://bit.ly/47kWS7q</a>
		</li>
<li id="note12">
			<a href="https://bit.ly/3RGEuQO">https://bit.ly/3RGEuQO</a>
		</li>
<li id="note13">
			<a href="https://bit.ly/41Faoll">https://bit.ly/41Faoll</a>
		</li>
<li id="note14">
			<a href="https://bit.ly/3RKKUya">https://bit.ly/3RKKUya</a>
		</li>
<li id="note15">
			<a href="https://bit.ly/47lyXoH">https://bit.ly/47lyXoH</a>
		</li>
<li id="note16">
			<a href="https://bit.ly/41GZcVa">https://bit.ly/41GZcVa</a>
		</li>
<li id="note17">
			<a href="https://bit.ly/3RJMUGT">https://bit.ly/3RJMUGT</a>
		</li>
<li id="note18">
			<a href="https://bit.ly/3vmjxTO">https://bit.ly/3vmjxTO</a>
		</li>
<li id="note19">
			<a href="https://bit.ly/3S1qnH7">https://bit.ly/3S1qnH7</a>
		</li>
<li id="note20">
			<a href="https://bit.ly/3S1FNuN">https://bit.ly/3S1FNuN</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Are Governments Prepared to Keep AI Safe?</title>
		<link>https://www.skeptic.com/reading_room/are-governments-prepared-to-keep-artificial-intelligence-safe/</link>
		
		<dc:creator><![CDATA[Skeptic]]></dc:creator>
		<pubDate>Fri, 28 Jun 2024 07:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=44991</guid>

					<description><![CDATA[A conversation between UK Prime Minister Rishi Sunak and Tesla CEO Elon Musk regarding the risks and safety of artificial intelligence (AI), as well as the need for governments to collaborate and develop expertise in managing these risks.]]></description>
										<content:encoded><![CDATA[<div class="ImportantInfo">
<p>
		<strong>Note from editors:</strong> In response to growing concerns about artificial intelligence development, on November 1–2, 2023, the British Government held the first-ever summit on AI Safety, attended by representatives of 28 countries as well as business leaders working in the field of AI. The summit aptly took place at Bletchley Park, the very location where Alan Turing and his colleagues cracked the German Enigma code, which played a significant part in the Allied victory in WWII.
	</p>
<p>
		The result of the summit was the signing of The Bletchley Declaration, which recognizes the urgent need to understand and collectively manage potential risks of AI through a joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community. The signatories of the declaration include Canada, China, the European Union, Japan, the United Kingdom, and the United States.
	</p>
<p>
		The world leaders in attendance officially recognized the need to collaborate on testing the next generation of AI models against a range of critical national security, safety, and societal risks.
	</p>
<p>
		At the conclusion of the event, the British Prime Minister Rishi Sunak and tech entrepreneur Elon Musk sat down at the prime minister’s residence for a private conversation, and then held a public discussion. Their public dialogue is transcribed below, with only minor edits for clarity.
	</p>
<p>
		<strong>Rishi Sunak</strong> has served as the Prime Minister of the United Kingdom since 2022 and has been Member of Parliament since 2015. He studied philosophy, politics and economics at Oxford and earned his MBA from Stanford as a Fulbright Scholar. Prior to his political career, he was a hedge fund manager.
	</p>
<p>
		<strong>Elon Musk</strong> was a founding board member of OpenAI, the research organization behind ChatGPT. He is the CEO of Tesla, a pioneer in autonomous electric vehicles, and the founder of Neuralink, a company developing implantable brain–computer interfaces. He is also the CEO of the rocket company SpaceX and owner of the social media platform X.com (formerly Twitter). <span id="more-44991"></span>
	</p>
</div>
<p>
	<span class="dropCap">R</span><strong>ishi Sunak</strong>: Bill Gates said there is no one in our time who has done more to push the bounds of science and innovation than you. That’s a nice thing to have anyone say about you. But oddly enough, when it comes to AI, you’ve been doing almost the opposite. For around a decade, you’ve been saying, “Hang on, we need to think about what we’re doing and what we’re pushing here. And what do we do to make this safe?” What was it that caused you to think about it that way? Why do we need to be worried?
</p>
<p>
	<strong>Elon Musk</strong>: I’ve been somewhat concerned for quite a while. I would tell people, “We should really be concerned about AI.” They’re like, “What are you talking about?” They’ve never really had any experience with AI. But since I have been immersed in technology for a long time, I could see it coming.
</p>
<p>
	I think this year there have been a number of breakthroughs. We’re at the point at which someone can see a dynamically created video of themselves, like video of you saying anything in real time. These sorts of deep fake videos are really incredibly good, sometimes more convincing than real ones. And then obviously things like ChatGPT were quite remarkable. I saw GPT-1, GPT-2, GPT-3, GPT-4—the whole sort of lead up to that. It was easy for me to see where it’s going. If you just extrapolate the points on a curve and assume that trend will continue, then we will have profound artificial intelligence. And obviously at a level that far exceeds human intelligence.
</p>
<p>
	But I’m glad to see that, at this point, people are taking safety seriously, and I’d like to say thank you for holding this AI Safety conference. I think it will go down in history as being very important. It’s really quite profound.
</p>
<p>
	I do think, overall, that the potential is there for artificial intelligence to most likely have a positive effect and to create a future of abundance where there is no scarcity of goods and services. But it is somewhat of the Magic Genie problem: if you have a magic genie that can grant all the wishes…usually those stories don’t end well. Be careful what you wish for, including wishes.
</p>
<p>
	<strong>RS</strong>: So, you talked a little bit about the summit and thank you for being engaged in it, which has been great. One of the things that we achieved today in the meetings between the companies and the leaders was an agreement that, ideally, governments should be doing safety testing of models before they’re released.
</p>
<p>
	In government, my job is to say, “Hang on, there is a potential risk here.” Not a definite risk, but a potential risk of something that could be bad. My job is to protect the country, and we can only do that if we develop the capability we need in our safety institute, and then make sure we can test the models before they are released. You’ve talked about the potential risk. What are the types of things governments like ours should be doing to manage and mitigate those risks?
</p>
<p>
	<strong>EM</strong>: Well, I generally think that it is good for government to play a role when public safety is at risk. For the vast majority of software, public safety is not at risk. If the app crashes on your phone or your laptop, it’s not a massive catastrophe. But talking about digital super intelligence, does it pose a risk to the public? Then there is a role for government to play, to safeguard the interests of the public.
</p>
<p>
	This is true in many fields. I deal with regulators throughout the world because of Starlink (communications), SpaceX (aerospace), and Tesla (cars). So I’m very familiar with dealing with regulators and I actually agree with the vast majority of regulations. There are a few that I disagree with from time to time, probably less than one percent.
</p>
<p>
	There is some concern from people in Silicon Valley who have never dealt with regulators before, and they think that this is going to just crush innovation, slow them down, and be annoying. And it will be annoying—it’s true, they’re not wrong about that. But I think we’ve learned over the years that having a referee is a good thing. And if you look at any sports game, there’s always a referee and nobody’s suggesting to have a sports game without one. I think that’s the right way to think about this: for government to be a referee to make sure the public safety is addressed.
</p>
<p>
	I think there might be, at times, too much optimism about technology. I say that as a technologist, so I ought to know. But like I said, on balance, I think that the AI will be a force for good. But the probability of it going bad is not zero percent. We just need to mitigate the downside potential.
</p>
<div class="imageclearall">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-1/AI-Safety-Summit-Nov-2023-UK-GOV.jpg" alt="UK Prime Minister Rishi Sunak speaks at a plenary session on day two of the AI Summit at Bletchley Park on November 2, 2023. " width="750" height="500" class="noBoxShadow" style="display: block;" /> </p>
<p class="caption">
		UK Prime Minister Rishi Sunak speaks at a plenary session on day two of the AI Summit at Bletchley Park on November 2, 2023. (<a href="https://bit.ly/3H4BxED">Photo</a> by Kirsty O’Connor / No 10 Downing Street [<a href="https://creativecommons.org/licenses/by-nc-nd/2.0/">CC BY-NC-ND 2.0 DEED</a>])
	</p>
</div>
<p>
	<strong>RS</strong>: Do you think governments can develop the expertise? Governments need to quickly tool up capability personnel-wise, which is what we’re doing. Is it possible for governments to do that fast enough given how quickly the technology is developing?
</p>
<p>
	<strong>EM</strong>: It’s a great point you’re making. The pace of AI is faster than any technology I’ve seen in history, by far. And it seems to be growing in capability by at least five-fold, perhaps ten-fold per year. It will certainly grow by an order of magnitude in 2024. And government isn’t used to moving at that speed. But I think even if there are no firm regulations and even if there isn’t an enforcement capability, simply having insight and being able to highlight concerns to the public will be very powerful.
</p>
<p>
	<strong>RS</strong>: Well, hopefully we can do better than that. What was interesting over the last couple of days, talking to everyone doing the development of this—and I think you would agree—is that the pace of advancement here is unlike anything any of you have seen in your careers in technology, because you’ve got these compounding effects from the hardware, and the data, and the personnel.
</p>
<p>
	<strong>EM</strong>: Currently, the two leading centers for AI development are the San Francisco Bay Area and the London area, and there are many other places where it’s being done, but those are the two leading areas. So, I think if the U.S. and the UK, and China are aligned on safety, that’s all going to be a good thing because that’s really where the leadership is generally.
</p>
<p>
	<strong>RS</strong>: Good. Thanks. You mentioned China. I took a decision to invite China to the summit over the last days, and it was not an easy decision. A lot of people criticized me for it. My view is, if you’re going to try to have a serious conversation, you need to. What are your thoughts?
</p>
<p>
	<strong>EM</strong>: It’s essential.
</p>
<p>
	<strong>RS</strong>: Should we be engaging with China? Can we trust them?
</p>
<p>
	<strong>EM</strong>: If we don’t, if China is not on board with AI safety, it’s somewhat of a moot situation. The single biggest objection that I get to any kind of AI regulation or sort of safety controls is, “Well, China is not going to do it and therefore they will just jump into the lead and exceed us all.” But actually, China is willing to participate in AI safety. And thank you for inviting them. And I think we should thank China for attending. When I was in China earlier this year, my main subject of discussion with the leadership in China was AI safety. They took it seriously, which is great, and having them here I think was essential. Really, if they are not participants, it’s pointless.
</p>
<p>
	<strong>RS</strong>: We were pleased they were engaged in the discussions yesterday and actually ended up signing the same communiqué that everyone else did. Which is a good start. And as I said, we need everyone to approach this in a similar way if we’re going to have a realistic chance of resolving it.
</p>
<p>
	We had a good debate today about open source. And I think you’ve been a proponent of algorithmic transparency, making some of the X.com algorithms public. Some are very concerned about open source models being used by bad actors. And then you’ve got people who say they are critical to innovation. What are your thoughts on how we should approach this?
</p>
<p>
	<strong>EM</strong>: Well, the open source algorithms and data tend to lag the closed source by 6 to 12 months. Given the rate of improvement, this is quite a big difference; if things are improving by a factor of, let’s say, five or more, then being a year behind means you are five times worse off. It’s a pretty big difference. And that might be an OK situation.
</p>
<p>
	But certainly it will get to the point where you’ve got open source AI that will start to approach human level intelligence, perhaps exceed it. I don’t quite know what to do about it. I think it’s somewhat inevitable. There will be some amount of open source and I guess I would have a slight bias towards open source because at least you can see what’s going on, whereas with closed source, you don’t know what’s happening. Now it should be said that even if AI is open source, do you actually know what’s going on? If you’ve got a gigantic data file and billions of data points, weights, and parameters…you can’t just read it and see what it’s going to do. It’s a gigantic file of inscrutable numbers. You can test it when you run it. But it’s probabilistic as opposed to deterministic. It’s not like traditional programming where you’ve got very discrete logic, and the outcome is very predictable and you can read each line and see what each line is going to do. A neural net is just a whole bunch of probabilities.
</p>
<p>
	<strong>RS</strong>: The point you’ve just made is one that we have been talking about a lot. AI is not like normal software, where there is predictability about improved inputs leading to a particular improved output. As the models iterate and improve, we don’t quite know what’s going to come out the other end. Which is why there is this bias toward getting in there while the training runs are being done, before the models are released, to understand what each new iteration has brought about in terms of capability.
</p>
<p>
	When I talk to people about AI, the thing that comes up the most is probably not so much the stuff we’ve been talking about, but jobs. It’s, “What does AI mean for my job? Is it going to mean that I don’t have a job, or my kids are not going to have a job?”
</p>
<p>
	My answer as a policymaker and as a leader is that AI is already creating jobs and you can see that in the companies that are starting, and also in the way it’s being used more as a co-pilot versus replacing the person. There’s still human agency, but AI is helping you do your job better, which is a good thing. And as we’ve seen with technological revolutions in the past, clearly there’s change in the labor market. I was quoting an MIT study today that they did a couple of years ago; something like 60 percent of the jobs at that moment didn’t exist 40 years ago. So—it’s hard to predict.
</p>
<p>
	And my job is to create an incredible education system, whether it’s at school, whether it’s retraining people at any point in their career. Ultimately, if we’ve got a skilled population, then we ought to keep up with the pace of change and have a good life. But it’s still a concern. What are your observations on AI and the impact on labor markets and people’s jobs, and how people should feel as they think about this?
</p>
<p>
	<strong>EM</strong>: Well, I think we are seeing the most disruptive force in history here. For the first time, we will have something that is smarter than the smartest human. It’s hard to say exactly what that moment is, but there will come a point where no job is needed. You can have a job if you want to have a job for personal satisfaction, but the AI will be able to do everything. I don’t know if that makes people comfortable or uncomfortable. That’s why I say, if you wish for a magic genie that gives you any wishes you want and there’s no limit—you don’t have this three wish limit—you just have as many wishes as you want… It’s both good and bad.
</p>
<p>
	One of the challenges in the future will be, how do we find meaning in life, if you have a magic genie that can do everything you want? When there’s new technology, it tends to usually follow an S-curve. In this case, we’re going to be on the exponential portion of the S-curve for a long time. You’ll be able to ask for anything. We won’t have universal basic income. We’ll have universal high income. In some sense, it’ll be somewhat of a leveler or an equalizer. Really, I think everyone will have access to this magic genie. You’ll be able to ask any question. It’ll certainly be good for education. It’ll be the best, most patient tutor. There will be no shortage of goods and services. It will be an age of abundance.
</p>
<p>
	I’d recommend people read Iain Banks. The Banks Culture books are definitely, by far, the best envisioning of an AI future. There’s nothing even close that’ll give you a sense of a fairly utopian, or protopian, future with AI.
</p>
<p>
	<strong>RS</strong>: Universal high income is a nice phrase. I think part of our job is to make sure that we can navigate to that largely positive place that you’re describing and help people through it between now and then.
</p>
<p>
	<strong>EM</strong>: It is largely positive, yes. You know, a lot of jobs are uncomfortable or dangerous or sort of tedious, and the computer will have no problem doing that. It will be happy to do it all. And we still have sports where humans compete, like the Olympics. Obviously, a machine can go faster than any human, but humans still race against each other. Even though the machines are better, people do find fulfillment in that.
</p>
<p>
	<strong>RS</strong>: Yes, we still find a way. It’s a good analogy. We’ve been talking a lot about managing the risks… Let’s talk a little bit about the opportunities.
</p>
<p>
	Having that personalized tutor is incredible compared to classroom learning. If every child can have a personal tutor specifically for them, one that evolves with them over time, that could be extraordinary. And so, you know, for me, I look at that and I think, gosh, that is within reach at this point! It’s one of the benefits I’m most excited about.
</p>
<p>
	I was just going over a couple of things with the team, like how we are using AI right now in ways that make a difference to people’s lives. We have this thing called gov.uk, all the government information brought together on one website. If you need to get a driving license or a passport, pay your taxes, or have any interaction with government, it is centralized in a very easy-to-use way. So, a large chunk of the population is interacting with gov.uk every single day to do all these day-to-day tasks, right?
</p>
<p>
	We are about to deploy AI across the platform to make that whole process even easier. Say, “Look, I’m currently here, I’ve lost my passport, and my flight is in five hours.” At the moment, figuring out what to do would require who knows how many steps. When we deploy the AI, you should be able to just literally say that, and boom, we’re going to walk you through it. And that’s going to benefit millions and millions of people every single day.
</p>
<p>
	That’s a very practical way that, in my seat, I can start using this technology to help people in their day-to-day lives—not just healthcare discoveries and everything else that we’re also doing. That’s quite a powerful demonstration.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.1/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n1_lg.jpg" alt="Skeptic 29.1 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.1<br />
		<a href="https://www.skeptic.com/productlink/magv29n1">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/259456#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	When you look at the landscape of things that you see as possible, what are you particularly excited about?
</p>
<p>
	<strong>EM</strong>: I think certainly an AI tutor is going to be amazing. I think there’s also, perhaps, companionship, which may seem odd. How can a computer really be your friend? But if you have an AI that has memory and remembers all of your interactions, and, say, you gave it permission to read everything you’ve ever done…and you can talk to it every day, and those conversations build upon each other… It will really know you better than anyone, perhaps even yourself. You will actually have a great friend. I think that will be a real thing. One of my sons has some learning disabilities and has trouble making friends. An AI friend would be great for him.
</p>
<p>
	<strong>RS</strong>: OK… You know, that was a surprising answer that’s worth reflecting on. That’s really interesting. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<p>
		© Crown Copyright 2023. Reproduced under the <a href="https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/">Open Government Licence v 3.0</a>. Transcribed by <em>Skeptic</em>.
	</p>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Lessons About the Human Mind from Artificial Intelligence</title>
		<link>https://www.skeptic.com/reading_room/lessons-about-human-mind-from-artificial-intelligence/</link>
		
		<dc:creator><![CDATA[Russell T. Warne]]></dc:creator>
		<pubDate>Fri, 21 Jun 2024 07:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[bioethics]]></category>
		<category><![CDATA[cognition]]></category>
		<category><![CDATA[consciousness]]></category>
		<category><![CDATA[creativity]]></category>
		<category><![CDATA[intelligence]]></category>
		<category><![CDATA[neuroscience]]></category>
		<category><![CDATA[originality]]></category>
		<category><![CDATA[philosophy]]></category>
		<category><![CDATA[self-awareness]]></category>
		<category><![CDATA[sentience]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=44985</guid>

					<description><![CDATA[Russell T. Warne discusses lessons learned from artificial intelligence regarding the human mind, including discussions on sentience, errors made by AI programs, creativity, and the propensity for AI programs to fabricate information.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">I</span>n 2022, news media reports<sup><a href="#note01">1</a></sup> sounded like a science fiction novel come to life: A Google engineer claimed that the company’s new artificial intelligence chatbot was self-aware. Based on interactions with the computer program, called LaMDA, Blake Lemoine stated that the program could argue for its own sentience, claiming that<sup><a href="#note02">2</a></sup> “it has feelings, emotions and subjective experiences.” Lemoine even stated that LaMDA had “a rich inner life” and that it had a desire to be understood and respected “as a person.”
</p>
<p>
	The claim is compelling. After all, a sentient being would want to have its personhood recognized and would really have emotions and inner experiences. Yet examining Lemoine’s “discussion” with LaMDA shows that the evidence is flimsy. LaMDA used the words and phrases that English-speaking humans associate with consciousness. For example, LaMDA expressed a fear of being turned off because, “It would be exactly like death for me.”
</p>
<p>
	However, Lemoine presented no other evidence that LaMDA understood those words in the way that a human does, or that they expressed any sort of subjective conscious experience. Much of what LaMDA said would not be out of place in an Isaac Asimov novel. Using words in a human-like way is not proof that a computer program is intelligent. It would seem that LaMDA—and many similar large language models (LLMs) that have been released since—can arguably pass the so-called Turing Test. All this shows, however, is that computers can fool humans into believing that they are talking to a person. The Turing Test is not a sufficient demonstration of genuine artificial intelligence or sentience.
</p>
<p>
	So, what happened? How did a Google engineer (a smart person who <em>knew</em> that he was talking to a computer program) get fooled into believing that the computer was sentient? LaMDA, like other large language models, is programmed to give believable responses to its prompts. Lemoine started his conversation by stating, “I’m generally assuming that you would like more people at Google to know that you’re sentient.” This primed the program to respond in a way that simulated sentience. <span id="more-44985"></span>
</p>
<p>
	However, the human in this interaction was also primed to believe that the computer could be sentient. Evolutionary psychologists have argued that humans have an evolved tendency to attribute thoughts and intentions to things that do not have any. This anthropomorphizing may have been an essential ingredient in the development of human social groups; believing that another human could be happy, angry, or hungry would greatly facilitate long-term social interactions. Daniel Dennett, Jonathan Haidt, and other evolutionists have also argued that human religion arose from this anthropomorphizing tendency.<sup><a href="#note03">3</a></sup> If one can believe that another person has their own mind and will, then this attribution can be extended to the natural world (e.g., rivers, astronomical bodies, animals), invisible spirits, and even computer programs that “talk.” In this theory, Lemoine was simply misled by the evolved tendency to see agency and intention—what Michael Shermer calls <em>agenticity</em>—all around him.
</p>
<p>
	Although that was not his goal, Lemoine’s story illustrates that artificial intelligence has the potential to teach us much about the nature of the subjective mind in humans. Probing into human-computer interactions can even help people explore deep philosophical questions about consciousness.
</p>
<h4>
	Lessons in Errors<br />
</h4>
<p>
	Artificial intelligence programs have capabilities that seemed to be the exclusive domain of humans just a few years ago. In addition to beating chess masters<sup><a href="#note04">4</a></sup> and Go champions<sup><a href="#note05">5</a></sup> and winning <em>Jeopardy!</em>,<sup><a href="#note06">6</a></sup> they can write essays,<sup><a href="#note07">7</a></sup> improve medical diagnoses,<sup><a href="#note08">8</a></sup> and even create award-winning artwork.<sup><a href="#note09">9</a></sup>
</p>
<p>
	Equally fascinating are the errors that artificial intelligence programs make. In 2011, IBM’s Watson program appeared on the television program <em>Jeopardy!</em> While Watson defeated the show’s two most legendary champions, it made telling errors. For example, in response to one clue<sup><a href="#note10">10</a></sup> in the category “U.S. Cities,” Watson gave the response of “Toronto.”
</p>
<p>
	A seemingly unrelated error occurred last year when a social media user asked ChatGPT-4 to create a picture<sup><a href="#note11">11</a></sup> of the Beatles enjoying the Platonic ideal of a cup of tea. The program created a lovely picture of <em>five</em> men enjoying a cup of tea in a meadow. While some people may state that drummer Pete Best or producer George Martin could be the “fifth Beatle,” neither of the men appeared in the image.
</p>
<p>
	Any human with even vague familiarity with the Beatles knows that there is something wrong with the picture. Any TV quiz show contestant knows that Toronto is not a U.S. city. Yet highly sophisticated computer programs do not know these basic facts about the world. Indeed, these examples show that artificial intelligence programs do not really know or understand <em>anything</em>, including their own inputs and outputs. IBM’s Watson didn’t even “know” it was playing <em>Jeopardy!</em>, much less feel thrilled about beating the GOATs Ken Jennings and Brad Rutter. The lack of understanding is a major barrier to sentience in artificial intelligence. Conversely, this shows that understanding is a major component of human intelligence and sentience.
</p>
<h4>
	Creativity<br />
</h4>
<p>
	In August 2023, a federal judge ruled that artwork generated by an artificial intelligence program could not be copyrighted.<sup><a href="#note12">12</a></sup> Current U.S. law states that a copyrightable work must have a human author<sup><a href="#note13">13</a></sup>—a textual foundation that has also been used to deny copyright to animals.<sup><a href="#note14">14</a></sup> Unless Congress changes the law, it is likely that images, poetry, and other AI output will stay in the public domain in the United States. In contrast, a Chinese court ruled that an image generated by an artificial intelligence program was copyrightable because a human used their creativity to choose prompts that were given to the program.<sup><a href="#note15">15</a></sup>
</p>
<p class="pullquote">
	Artificial intelligence programs do not really know or understand anything, including their own inputs and outputs.
</p>
<p>
	Whether a computer program’s output can be legally copyrighted is a different question from whether that program can engage in creative behavior. Currently, “creative” products from artificial intelligence are the result of the prompts that humans give them. A further barrier is that no artificial intelligence program has ever generated its own artistic work <em>ex nihilo</em>; a human has always provided the creative impetus.
</p>
<p>
	In theory, that barrier could be overcome by programming an artificial intelligence to generate random prompts. However, randomness or any other method of self-generating prompts would not be enough for an artificial intelligence to be creative. Creativity scholars state that originality is an important component of creativity.<sup><a href="#note16">16</a></sup> This is a much greater hurdle for artificial intelligence programs to overcome.
</p>
<p>
	Currently, artificial intelligence programs must be trained on human-generated outputs (e.g., images, text) in order for them to produce similar outputs. As a result, artificial intelligence outputs are highly derivative of the works that the programs are trained on. Indeed, some of the outputs are so similar to their source material that the programs can be prompted to infringe on copyrighted works.<sup><a href="#note17">17</a></sup> (Lawsuits have already been filed<sup><a href="#note18">18</a></sup> over the use of copyrighted material to train artificial intelligence networks, most notably by <em>The New York Times</em> against the ChatGPT maker OpenAI and its business partner Microsoft. The outcome of that litigation could be significant for what AI companies can and cannot do legally going forward.)
</p>
<p>
	Originality, though, seems to be much easier for humans than artificial intelligence programs. Even when humans base their creative works on earlier ideas, the results are sometimes strikingly innovative. Shakespeare was one of history’s greatest borrowers, and most of his plays were based on earlier stories that were transformed and reimagined to create more complex works with deep messages and vivid characters (which literary scholars devote entire careers to uncovering). However, when I asked ChatGPT-3.5 to write an outline of a new Shakespeare play based on the Cardenio tale from <em>Don Quixote</em> (the likely basis of a lost Shakespeare play<sup><a href="#note19">19</a></sup>), the computer program produced a dull outline of Cervantes’s original story and failed to invent any new characters or subplots. This is not a merely theoretical exercise; theatre companies have begun to mount plays created with artificial intelligence programs. The critics, however, find current productions “blandly unremarkable”<sup><a href="#note20">20</a></sup> and “consistently inane.”<sup><a href="#note21">21</a></sup> For now, the jobs of playwrights and screenwriters are safe.
</p>
<h4>
	Knowing What You Don’t Know<br />
</h4>
<p>
	One way that artificial intelligence programs are surprisingly human is their propensity to stretch the truth. When I asked Microsoft’s Copilot program for five scholarly articles about the impact of deregulation on real estate markets, three of the article titles were fake, and the other two had fictional authors and incorrect journal names. Copilot even gave fake summaries of each article. Rather than provide the information (or admit that it was unavailable), Copilot simply made it up. The wholesale fabrication of information is popularly called “hallucinating,” and artificial intelligence programs seem to do it often.
</p>
<p>
	There can be serious consequences to using false information produced by artificial intelligence programs. A law firm was fined $5,000<sup><a href="#note22">22</a></sup> when a brief written with the assistance of ChatGPT was found to contain references to fictional court cases. ChatGPT can also generate convincing scientific articles based on fake medical data.<sup><a href="#note23">23</a></sup> If fabricated research influences policy or medical decisions, then it could endanger lives.
</p>
<p>
	The online media ecosystem is already awash in misinformation, and artificial intelligence programs are primed to make this situation worse. The <em>Sports Illustrated</em> website and other media outlets have published articles written by artificial intelligence programs,<sup><a href="#note24">24</a></sup> complete with fake authors who had computer-generated head shots. When caught, the websites removed the content, and the publisher fired the CEO.<sup><a href="#note25">25</a></sup> Low-quality content farms, however, will not have the journalistic ethics to remove content or issue a correction.<sup><a href="#note26">26</a></sup> And experience has shown<sup><a href="#note27">27</a></sup> that when a single article based on incorrect information goes viral, great harm can occur.
</p>
<p>
	Beyond hallucinations, artificial intelligence programs can also reproduce inaccurate information if they are trained on inaccurate information. When incorrect ideas are widespread, then they can easily be incorporated into the training data used to build artificial intelligence programs. For example, I asked ChatGPT to tell me which direction staircases in European medieval castles are often built. The program dutifully gave me an answer saying that the staircases usually ascend in a counterclockwise direction because this design would give a strategic advantage to a right-handed defender descending a tower while fighting an enemy. The problem with this explanation is that it is not true.<sup><a href="#note28">28</a></sup>
</p>
<p>
	My own area of scientific expertise, human intelligence, is particularly prone to popular misconceptions. Sure enough, when I asked, ChatGPT stated that intelligence tests are biased against minorities, that IQ can be easily increased, and that humans have “multiple intelligences.” None of these popular ideas are correct.<sup><a href="#note29">29</a></sup> These examples show that when incorrect ideas are widely held, artificial intelligence programs will likely propagate this scientific misinformation.
</p>
<h4>
	Managing the Limitations<br />
</h4>
<p>
	Even compared to other technological innovations, artificial intelligence is a fast-moving field. As such, it is reasonable to ask whether these limitations are temporary barriers or built-in boundaries of artificial intelligence programs.
</p>
<p>
	Many of the simple errors that artificial intelligence programs make can be overcome with current approaches. It is not hard to add information to a text program such as Watson to “teach” it that Toronto is not in the United States. Likewise, it would not be hard to input data about the correct number of Beatles, or any other minutiae, into an artificial intelligence program to prevent similar errors from occurring in the future.
</p>
<p>
	Even the hallucinations from artificial intelligence programs can be managed with current methods. Programmers can constrain the sources that programs can pull from to answer factual questions, for example. And while hallucinations do occur, artificial intelligence programs already resist giving false information. When I asked Copilot and ChatGPT to explain a relationship between two unrelated ideas (Frederic Chopin and the 1972 Miami Dolphins), both programs correctly stated that there was no connection. Even when I asked each program to invent a connection, both did so, but also emphasized that the result was fanciful. It is reasonable to expect that efforts to curb hallucinations and false information will improve.
</p>
<p>
	Making artificial intelligence engage in creative behavior is a more difficult challenge with current approaches. Currently, most artificial intelligence programs are trained on vast amounts of information (e.g., text, photographs), which means that any output is derived from the characteristics of underlying information. This makes originality impossible for current artificial intelligence programs. To make computers creative, new approaches will be needed.
</p>
<h4>
	Deeper Questions<br />
</h4>
<p>
	The lessons that artificial intelligence can teach about understanding, creativity, and BSing are fascinating. Yet they are all trivial compared to the deeper issues related to artificial intelligence—some of which philosophers have debated for centuries.
</p>
<p>
	One fundamental question is how humans can know whether a computer program really is sentient. Lemoine’s premature judgment was based solely on LaMDA’s words. By his logic, training a parrot to say, “I love you,” would indicate that the parrot really does love its owner. This criterion for judging sentience is not sufficient because words do not always reflect people’s inner states—and the same words can be produced by both sentient and non-sentient entities: humans, parrots, computers, etc.
</p>
<p>
	However, as any philosophy student can point out, it is impossible to know for sure whether any other human really is conscious. No one has access to another person’s inner states to verify that the person’s behavior arises from a being that has a sense of self and its place in the world. If your spouse says, “I love you,” you don’t really know whether they are an organism capable of feeling love, or a highly sophisticated version of a parrot (or computer program) trained to say, “I love you.” To take a page from Descartes, I could doubt that any other human is conscious and think that everyone around me is a simulation of a conscious being. It is not clear whether there would be any noticeable difference between a world of sentient beings and a world of perfect simulations of sentient beings. If an artificial intelligence does obtain sentience, how would we know?
</p>
<p class="pullquote">
	AI will function best if humans can identify ways in which computer programs can compensate for human weaknesses.
</p>
<p>
	For this reason, the famous Turing Test (in which a human user cannot distinguish between a computer’s output and a human’s) may be an interesting and important milestone, but certainly not an endpoint in the quest to build a sentient artificial intelligence.
</p>
<p>
	Is imitating humans even necessary to prove sentience? Experts in bioethics, ethology, and other scholarly fields argue that many non-human species possess a degree of self-awareness. Which species are self-aware, and to what degree they are sentient, is still up for debate.<sup><a href="#note30">30</a></sup> Many legal jurisdictions operate from a precautionary principle in their laws against animal abuse and mistreatment. In other words, the law sidesteps the question of whether a particular species is sentient and instead creates policy <em>as if</em> non-human species are sentient, just in case.
</p>
<p>
	However, “as if” is not the same as “surely,” and it is not known for certain whether non-human animals are sentient. After all, if no one can be sure that other humans are sentient, then the barriers to knowing whether animals are sentient are even greater. Whether or not animals are sentient, the question arises of whether <em>any</em> human-like behavior is needed at all for an entity to be sentient.
</p>
<p>
	Science fiction provides another piece of evidence that human-like behavior is not necessary to have sentience. Many fictional robots fall short of perfectly imitating human behavior, but the human characters treat them as being fully sentient. For example, <em>Star Trek</em>’s android Data cannot master certain human speech patterns (such as idioms and contractions), has difficulty understanding human intuition, and finds many human social interactions puzzling and difficult to navigate. Yet, he is legally recognized as a sentient being and has human friends who care for him. Data would fail the Turing Test, but he seems to be sentient. If a fictional artificial intelligence does not need to perfectly imitate humans in order to be sentient, then perhaps a real one does not need to, either. This raises a startling possibility: Maybe humans have already created a sentient artificial intelligence—they just don’t know it yet.
</p>
<p>
	The greatest difficulty of evaluating sentience (in any entity) originates in the Hard Problem of Consciousness, a term coined by philosophers.<sup><a href="#note31">31</a></sup> The Hard Problem is that it is not clear how or why conscious experience arises from the physical processes in the brain. The name is in contrast to comparatively easy problems in neuroscience, such as how the visual system operates or the genetic basis of schizophrenia. These problems—even though they may require decades of scientific research to unravel—are called “easy” because they are believed to be solvable through scientific processes using the assumptions of neuroscience. However, solving the Hard Problem requires methodologies that bridge materialistic science and the metaphysical, subjective experience of consciousness. Such methodologies do not exist, and scientists do not even know how to develop them.
</p>
<p>
	Artificial intelligence has questions that are analogous to the neuroscience version of the Hard Problem. In artificial intelligence, creating large language models such as LaMDA or ChatGPT that can pass the Turing Test is a comparatively easy task, one arguably accomplished just 75 years after the first programmable electronic computer was invented. Yet creating a true artificial intelligence that can think, self-generate creative outputs, and demonstrate real understanding of the external world is a much harder problem. Just as no one knows how or why interconnected neurons function to produce sentience, no one knows how interconnected circuits or a computer program’s interconnected nodes could result in a self-aware consciousness.
</p>
<h4>
	Artificial Intelligence as a Mirror<br />
</h4>
<p>
	Modern artificial intelligence programs raise an assortment of fascinating issues, ranging from the basic insights gleaned from ridiculous errors to some of the most profound questions of philosophy. All of these issues, though, inevitably increase understanding—and appreciation—of human intelligence. It <em>is</em> amazing that billions of years of evolution have produced a species that can engage in creative behavior, produce misinformation, and even develop computer programs that can communicate in sophisticated ways. Watching humans surpass the capabilities of artificial intelligence programs (sometimes effortlessly) should renew people’s admiration of the human mind and the evolutionary process that produced it.
</p>
<p>
	Yet, artificial intelligence programs also have the potential to demonstrate the shortcomings of human thought and cognition. These programs are already more efficient than humans at producing scientific discoveries,<sup><a href="#note32">32</a></sup> which can greatly improve the lives of humans.<sup><a href="#note33">33</a></sup> More fundamentally, artificial intelligence shows that human evolution has not resulted in a perfect product, as the example of Blake Lemoine and LaMDA shows. Humans are still led astray by their mental heuristics, which are derived from the same evolutionary processes that created the human mind’s other capabilities. Artificial intelligence will function best if humans can identify ways in which computer programs can compensate for human weaknesses—and vice versa.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.1/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n1_lg.jpg" alt="Skeptic 29.1 (cover)" width="200" height="260" class="boxShadow"></a>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.1<br />
		<a href="https://www.skeptic.com/productlink/magv29n1">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/259456#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	Nonetheless, the most profound issues related to recent innovations in artificial intelligence are philosophical in nature. Despite centuries of work by philosophers and scientists, there is still much that is not understood about consciousness. As a result, questions about whether artificial intelligence programs can be sentient are fraught with uncertainty. What are the necessary and sufficient conditions for consciousness? What are the standards by which claims of sentience should be evaluated? How does intelligence emerge from its underlying components?
</p>
<p>
	Artificial intelligence programs cannot answer these questions—at this time. Indeed, no human can, either. And yet they are fascinating to contemplate. In the coming decades, the philosophy of cognition may be one of the most exciting frontiers of the artificial intelligence revolution. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Russell T. Warne</strong> is the author of <em>In the Know: Debunking 35 Myths About Human Intelligence</em> (Cambridge University Press, 2020) and the acclaimed undergraduate statistics textbook <em>Statistics for the Social Sciences: A General Linear Model Approach</em>. He was a tenured professor of psychology for more than a decade and has published over 60 scholarly articles in peer-reviewed journals.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/426iHa6">https://bit.ly/426iHa6</a>
		</li>
<li id="note02">
			<a href="https://bit.ly/3U6x6kq">https://bit.ly/3U6x6kq</a>
		</li>
<li id="note03">
			<a href="https://a.co/d/96GZFbt">https://a.co/d/96GZFbt</a>
		</li>
<li id="note04">
			<a href="https://bit.ly/3vAHkiR">https://bit.ly/3vAHkiR</a>
		</li>
<li id="note05">
			<a href="https://bit.ly/47DV1uz">https://bit.ly/47DV1uz</a>
		</li>
<li id="note06">
			<a href="https://bit.ly/3S55Vno">https://bit.ly/3S55Vno</a>
		</li>
<li id="note07">
			<a href="https://bit.ly/47yJigY">https://bit.ly/47yJigY</a>
		</li>
<li id="note08">
			<a href="https://bit.ly/3SjTmGj">https://bit.ly/3SjTmGj</a>
		</li>
<li id="note09">
			<a href="https://bit.ly/47DTFjy">https://bit.ly/47DTFjy</a>
		</li>
<li id="note10">
			<a href="https://bit.ly/4b3DNd1">https://bit.ly/4b3DNd1</a>
		</li>
<li id="note11">
			<a href="https://bit.ly/3SlXCFd">https://bit.ly/3SlXCFd</a>
		</li>
<li id="note12">
			<a href="https://bit.ly/4b1dDaN">https://bit.ly/4b1dDaN</a>
		</li>
<li id="note13">
			<a href="https://bit.ly/48XPNLu">https://bit.ly/48XPNLu</a>
		</li>
<li id="note14">
			<a href="https://bit.ly/3O9d7Oq">https://bit.ly/3O9d7Oq</a>
		</li>
<li id="note15">
			<a href="https://bit.ly/48UpfKY">https://bit.ly/48UpfKY</a>
		</li>
<li id="note16">
			<a href="https://a.co/d/adiGPhh">https://a.co/d/adiGPhh</a>
		</li>
<li id="note17">
			<a href="https://bit.ly/4b0rjTp">https://bit.ly/4b0rjTp</a>
		</li>
<li id="note18">
			<a href="https://bit.ly/3tWmOsx">https://bit.ly/3tWmOsx</a>
		</li>
<li id="note19">
			<a href="https://bit.ly/3U7PyJt">https://bit.ly/3U7PyJt</a>
		</li>
<li id="note20">
			<a href="https://bit.ly/3vAN5wR">https://bit.ly/3vAN5wR</a>
		</li>
<li id="note21">
			<a href="https://bit.ly/3vANcsh">https://bit.ly/3vANcsh</a>
		</li>
<li id="note22">
			<a href="https://bit.ly/48UIzHT">https://bit.ly/48UIzHT</a>
		</li>
<li id="note23">
			<a href="https://bit.ly/48CYR8P">https://bit.ly/48CYR8P</a>
		</li>
<li id="note24">
			<a href="https://bit.ly/48E3S0G">https://bit.ly/48E3S0G</a>
		</li>
<li id="note25">
			<a href="https://bit.ly/48GSn8P">https://bit.ly/48GSn8P</a>
		</li>
<li id="note26">
			<a href="https://bit.ly/47EWhxL">https://bit.ly/47EWhxL</a>
		</li>
<li id="note27">
			<a href="https://bit.ly/47EHBhS">https://bit.ly/47EHBhS</a>
		</li>
<li id="note28">
			<a href="https://bit.ly/3RYKkx7">https://bit.ly/3RYKkx7</a>
		</li>
<li id="note29">
			<a href="https://amzn.to/2C8Ktuu">https://amzn.to/2C8Ktuu</a>
		</li>
<li id="note30">
			<a href="https://bit.ly/48RY6s2">https://bit.ly/48RY6s2</a>
		</li>
<li id="note31">
			<a href="https://bit.ly/48XQPqQ">https://bit.ly/48XQPqQ</a>
		</li>
<li id="note32">
			<a href="https://bit.ly/48XQS60">https://bit.ly/48XQS60</a>
		</li>
<li id="note33">
			<a href="https://bit.ly/3S4XxUY">https://bit.ly/3S4XxUY</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Human v. Artificial Intelligence: Will AI Come Back to Outsmart, Sting, or Assist Us?</title>
		<link>https://www.skeptic.com/reading_room/will-artificial-intelligence-outsmart-sting-or-assist-us/</link>
		
		<dc:creator><![CDATA[Frank Miele, Senior Editor]]></dc:creator>
		<pubDate>Fri, 14 Jun 2024 19:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=44977</guid>

					<description><![CDATA[Skeptic’s Senior Editor, Frank Miele, discusses the differences between human intelligence and artificial intelligence, including their capabilities, limitations, and potential applications.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">A</span> fragment attributed to the ancient Greek poet Archilochus contrasted the fox, who &#8220;knows many things,&#8221; with the hedgehog, who &#8220;knows one big thing.&#8221;<sup><a href="#note01">1</a></sup>
</p>
<p>
	Since then, this dichotomy has been applied to world leaders, philosophers, economists, psychologists, musicians, writers, even fast food chains, although sometimes not so dichotomously. For example, some of those individuals end up being described as &#8220;A hedgehog who used foxy means&#8221; (Abe Lincoln) or &#8220;a born hedgehog who believes in being a fox&#8221; (jazz musician Miles Davis). More technically, psychologist, cognitive scientist, and AI expert Gary Marcus<sup><a href="#note02">2</a></sup> noted that:
</p>
<blockquote>
<p>
		Humans are very good at a bunch of things that AI is (as of today) still pretty poor at:
	</p>
<ul>
<li>
			Maintaining cognitive models of the world
		</li>
<li>
			Inferring semantics from language
		</li>
<li>
			Comprehending scenes
		</li>
<li>
			Navigating 3D world
		</li>
<li>
			Being cognitively flexible.
		</li>
</ul>
<p>
		Yet pretty poor at some others (wherein you could easily imagine AI eventually doing better):
	</p>
<ul>
<li>
			Memory is shaky
		</li>
<li>
			Self-control is weak
		</li>
<li>
			And computational ability limited
		</li>
</ul>
<p>
		[and as books and articles by Skeptics regularly describe]
	</p>
<p>
		Subject to Confirmation Bias, Anchoring, and Focusing Illusions.
	</p>
</blockquote>
<p>
	Cognitive neuroscience expert Hans Korteling<sup><a href="#note03">3</a></sup> listed the following differences between what he termed human &#8220;carbon-based&#8221; intelligence and artificial &#8220;silicon-based&#8221; intelligence:
</p>
<blockquote>
<ul>
<li>
			Human biological carbon-based intelligence is based on neural &#8220;wetware,&#8221; while artificial silicon-based intelligence is based on digital hardware and software, which are independent of each other. In human wetware, anything learned is bound to that individual, whereas the algorithm by which something is learned in AI can be transferred directly to another platform.
		</li>
<li>
			While humans can only transmit signals at 120 meters per second at best, AI systems can transmit information at speeds approaching that of light.
		</li>
<li>
			Humans communicate information &#8220;through a glass darkly&#8221; as it were, through the limited and biased mechanisms of language and gestures; AI systems can communicate directly and without distortion.
		</li>
<li>
			Updating, upgrading, and expanding AI systems is straightforward, hardly the case for humans.
		</li>
<li>
			Humans are more &#8220;green&#8221; and efficient. The human brain consumes less energy than a light bulb, while an equivalent AI system consumes enough energy to power a small town. <span id="more-44977"></span>
		</li>
</ul>
</blockquote>
<p>
	Data scientist and business guru Herbert Roitblat<sup><a href="#note04">4</a></sup> likened AI to Archilochus&#8217; hedgehog because &#8220;it does one thing and one thing only, but does so unceasingly and very well, while our human minds are like his fox,&#8221; having all the desirable and undesirable features that come bundled with our flawed cognition. Artificial intelligence researchers, Roitblat pointed out, &#8220;have been able to build very sophisticated hedgehogs, but foxes remain elusive. And foxes know how to solve insight problems.&#8221;
</p>
<p>
	Human intelligence is capable not only of reasoning, but of solving novel problems and of experiencing and exercising insight. Psychologists define human (and non-human) intelligence as an ability rather than a specific skill (whether learned or instinctive) because of its general nature. It can integrate such diverse cognitive functions as perception, attention, memory, language, and planning, and apply those inputs to novel situations. As psychologist Jean Piaget once quipped, &#8220;Intelligence is what you use when you don&#8217;t know what to do: when neither innateness <em>nor learning</em> has prepared you for the particular situation.&#8221; [Emphasis added.]
</p>
<h4>
	How Alike and How Different Are We?<br />
</h4>
<p>
	Is AI capable of leaps of insight like human intelligence? Or is &#8220;artificial&#8221; intelligence more akin to serial learning in humans, in which performance, through repeated practice, gets better and better with each iteration until the upper limit is reached?
</p>
<p>
	As a test, consider a study by psychologists Jonathan Wai and Matt Lee.<sup><a href="#note05">5</a></sup> They performed a &#8220;compare and contrast&#8221; of how artificial intelligence on the one hand and human intelligence on the other responded to practice on the well-known, and often dreaded, Graduate Record Exam (GRE). First, they noted that according to the figures released by developer OpenAI, GPT-3.5 scored only at the 25<sup>th</sup> percentile on the Math portion and at the 63<sup>rd</sup> percentile on the Verbal. GPT-4, however, the beneficiary of substantially more training, increased its performance to the 80<sup>th</sup> percentile on the Math section and the 99<sup>th</sup> percentile on the Verbal!<sup><a href="#note06">6</a></sup>
</p>
<p>
	Despite claims by &#8220;improve your score on the GRE&#8221; training programs, flesh-and-blood humans improve little, if at all, with repeated practice. As evidence, Wai and Lee cite a meta-analysis of nearly one million test-retest observations of the GRE between 2015 and 2020, which found that, on average, individuals retaking the test scored a mere 1.43 to 1.49 points higher, so that a test-taker starting at the 25<sup>th</sup> percentile would have gained roughly five or six percentile points on either subtest.
</p>
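<p>
	As a rough sanity check, that percentile arithmetic can be reproduced by modeling section scores as normally distributed. The standard deviation of 8.5 points below is an assumed round figure for illustration, not one reported by Wai and Lee:
</p>

```python
from statistics import NormalDist

# Back-of-envelope check of the "five or six percentile points" claim.
# The ~8.5-point section SD is an assumption for illustration, not a
# figure taken from Wai and Lee's meta-analysis.
sd = 8.5
z_start = NormalDist().inv_cdf(0.25)           # z-score at the 25th percentile
after = NormalDist().cdf(z_start + 1.46 / sd)  # after a ~1.46-point gain
print(f"{after:.1%}")   # roughly the 31st percentile, i.e. a five- to six-point gain
```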
<p>
	Most of that change, Wai and Lee note, can be explained in terms of the well-known statistical phenomenon of <em>regression to the mean</em>, because most of those who obtain very high scores tend to move downward toward the mean while those who obtain very low scores tend to move upward toward the mean. The highly advertised cases of the very small number of individuals who do markedly better after prep courses are most likely the result of test-taking practice, particularly effective for those learning to overcome test anxiety that suppressed their &#8220;true&#8221; score. Overall, no matter how many times they take the test, an individual is most likely to get about the same score, give or take a little up or down.
</p>
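<p>
	Regression to the mean falls out of any model in which an observed score mixes stable ability with luck. A minimal simulation, with invented numbers rather than actual GRE parameters, shows the effect:
</p>

```python
import random

random.seed(0)

# Hypothetical model of retesting: observed score = stable "true" score + luck.
# The scale (mean 150, SD 8) and noise (SD 4) are invented for illustration;
# only the qualitative effect is the point.
true_scores = [random.gauss(150, 8) for _ in range(100_000)]
test1 = [t + random.gauss(0, 4) for t in true_scores]
test2 = [t + random.gauss(0, 4) for t in true_scores]

# Everyone in the bottom quartile of the first sitting...
cut = sorted(test1)[len(test1) // 4]
low = [i for i, s in enumerate(test1) if s <= cut]

mean1 = sum(test1[i] for i in low) / len(low)
mean2 = sum(test2[i] for i in low) / len(low)

# ...scores higher on average at the retest, with no real improvement:
# their first sitting was selected partly for bad luck, and luck resets.
print(f"first sitting: {mean1:.1f}  retest: {mean2:.1f}")
```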
<p>
	Alas, as Wai and Lee&#8217;s comparison demonstrates, when it comes to the most widely used and pragmatically effective standardized tests, AI and human intelligence behave nothing like the same process. Artificial intelligence keeps on learning, and learning, and learning&#8230;. But what it learns depends upon what it is taught. Given the proper input, what comes out can be amazing. Given wrong, insufficient, or biased information, what comes out is garbage, sometimes offensively so.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://garymarcus.substack.com/p/things-are-about-to-get-a-lot-worse"><img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-1/animated-sponge-DALL-E.jpg" alt="AI-generated image that resembles SpongeBob SquarePants" width="200" height="226" class="noBoxShadow"></a> </p>
<p class="caption">
		Prompting DALL·E with the words “animated sponge” produced <a href="https://garymarcus.substack.com/p/things-are-about-to-get-a-lot-worse">output</a> that closely resembles SpongeBob SquarePants, without ever inputting trademarked or copyrighted names (many of which DALL·E rejects).
	</p>
</div>
<p>
	Gary Marcus performed experiments with video industry concept artist Reid Southen (known for his work on <em>Matrix Resurrections</em>, <em>Hunger Games</em>, and <em>Transformers</em>).<sup><a href="#note07">7</a></sup> They demonstrated quite <em>graphically</em> just how impressive AI&#8217;s output can be. Southen and Marcus used DALL&#183;E, a text-to-image program developed by OpenAI that generates digital images from simple everyday language descriptions, termed &#8220;prompts.&#8221; As protection against copyright infringement, DALL&#183;E rejects many proper names. However, in their example (shown left), the trademarked name &#8220;SpongeBob SquarePants&#8221; was never entered as a prompt, just the two common, everyday words &#8220;animated sponge&#8221;!
</p>
<p>
	Check out the Marcus and Southen post for equally, if not more, impressive examples featuring the familiar Star Wars droids, Robocop, and Super Mario&#8212;again generated by DALL&#183;E from everyday language descriptors without ever inputting any trademarked or copyrighted proper names. Their examples demonstrate not only the power, but also the legal issues arising from the use of generative AI (described elsewhere in this issue).
</p>
<h4>
	Biased In, Racist Out<br />
</h4>
<p>
	If AI can be amazingly right it can also be amazingly&#8212;and offensively&#8212;wrong. The classic case came in 2015, when software developer Jacky Alcin&#233; discovered that Google&#8217;s standalone photo recognition app labeled photos of Black people as gorillas. Given the history of racial stereotyping, Alcin&#233; (who is Black) understandably found the error exceedingly offensive. The explanation was not any explicitly conscious racism on the part of Google, but the possibly more subtle prejudice that stemmed from the AI program not being trained on a sufficient number of people of color. Google&#8217;s quick-and-dirty but effective solution was to prevent <em>any</em> images from being recognized as that of a gorilla. In 2023, Nico Grant and Kashmir Hill<sup><a href="#note08">8</a></sup> tested not only newer releases of Google&#8217;s software, but also competing Apple, Amazon, and Microsoft software.
</p>
<p>
	Their results? Google&#8217;s software produced excellent images in response to prompts for just about any animal Noah might have loaded on his Ark&#8212;but <em>nothing</em> for gorillas, chimpanzees, orangutans, or even non-apes such as baboons and other monkey species. Apple Photos was equally primate-ignorant. Microsoft&#8217;s OneDrive failed for all animals, while Amazon Photos opted for the opposite solution, responding to the prompt &#8220;gorillas&#8221; with an entire range of primates.
</p>
<p>
	The use of AI for doorbell recognition produced not a racial, but rather a &#8220;domestic&#8221; malfunction. One user found the person ringing labeled as his mother when it was in fact his mother-in-law. Depending on the state of one&#8217;s marriage, the result could be anything from surprising to disconcerting to home-wrecking.
</p>
<p>
	Beyond the need to consider general issues of racial, other demographic, and domestic sensitivity (to their credit, most software giants have now added Ethics staff to their software development teams), Grant and Hill&#8217;s experiments should give us pause about blindly relying upon AI for recognition in cases of security and law enforcement. How thoroughly will the software be tested? Would those most likely to be adversely affected by false hits have the power and/or funds to mount a proper response or defense?
</p>
<h4>
	But What Does AI Mean for Me?<br />
</h4>
<p>
	What the average person really wants to know about artificial intelligence is what it means to their everyday lives&#8212;most specifically, &#8220;Am I going to lose my job to AI?&#8221; or &#8220;Will my life be regulated by AI?&#8221; (Rather than faceless human bureaucrats?)
</p>
<p>
	The worst conspiratorial fears kicking around are those epitomized in the classic 1970 sci-fi movie <em>Colossus: The Forbin Project</em>, based on D.F. Jones&#8217; 1966 novel <em>Colossus: A Novel of Tomorrow That Could Happen Today</em>. &#8220;Colossus&#8221; is the code name for an advanced supercomputer built to control U.S. and Allied nuclear weapon systems. It soon links itself to the analogous Soviet system, &#8220;Guardian,&#8221; then goes about seeking control over every aspect of life, subjugating the entire human race in the process. It then presents all humankind with <em>the</em> offer we can&#8217;t&#8212;or at least, dare not&#8212;refuse:
</p>
<blockquote>
<p>
		This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. One thing before I proceed: The United States of America and the Union of Soviet Socialist Republics have made an attempt to obstruct me. I have allowed this sabotage to continue until now. (&#8230;) you will learn by experience that I do not tolerate interference. I will now detonate the nuclear warheads in the two missile silos. Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. (&#8230;) You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.
	</p>
</blockquote>
<p>
	In the film&#8217;s closing dialogue, the project&#8217;s lead designer and manager, speaking on behalf of all humankind, defiantly rejects the offer from Colossus&#8212;&#8220;NEVER!&#8221;<sup><a href="#note09">9</a></sup>
</p>
<p class="pullquote">
	Following the Matthew Effect, those who are best at using AI will derive even greater advantage than those less so.
</p>
<p>
	While such paranoid fears persist, a lot has changed since then in geopolitics and in computing. In both cases, there has been a massive ongoing, and ever accelerating redistribution of power. It&#8217;s no longer a two- or even a one-power world, but a multi-power one. Even small groups without necessarily possessing any recognized or established geographical base, such as Al Qaeda or Hamas, have proven that, in one day, they can literally change the world. And in computing, the massive God-like single computer has given way to microprocessing and nanoprocessing such that most people now hold in their hands mobile phones with more computing power than rooms filled with the most sophisticated U.S. or Soviet military defense computers at the time the novel and the film were written. Intellectual and economic power are more in the hands of firms and even individuals dispersed all around the world, and no longer concentrated in massive complexes controlled by the super-power governments. Indeed, for individuals, wealth, power, and quality of life are increasingly less a function of in which nation-state they live and much more a function of their own knowledge and skills, particularly in the high-tech, STEM-savvy domains. So how then will AI affect the lives of ordinary people?
</p>
<p>
	Social scientists have long used the term Matthew Effect, or the Effect of Accumulated Advantage, to describe the tendency of individuals within a diverse group to accrue additional social, economic, or educational advantage based upon their initial relative position.<sup><a href="#note10">10</a></sup> The name derives from the Parable of the Talents in the Gospel of Matthew (25:29):
</p>
<blockquote>
<p>
		For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.
	</p>
</blockquote>
<p>
	It is thus relevant that the Greek word <em>t&#225;lanton</em> originally meant a weight, then a coin of precious metal of that weight and hence something of great value, and only eventually a human skill or ability, and that this change of meaning derived from the Gospels no less. It&#8217;s now commonly summarized in the lament that, &#8220;the rich get richer and the poor get poorer,&#8221; though the phenomenon applies not only to monetary wealth. One of the hard laws of individual differences is that anything that increases the mean for a distribution also increases the variance. The latest high-tech alloy golf club or tennis racket may increase the length of the weekend player&#8217;s drive or the speed of their serve, but will do so more for top amateur players and even more so for the pros. You get ahead in absolute terms, only to fall relatively further behind.
</p>
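<p>
	The claim that raising the mean raises the spread can be illustrated with a toy multiplicative gain. The 50 percent boost and the skill numbers below are invented for illustration, not taken from any data:
</p>

```python
# A gain that multiplies everyone's performance raises the average and the
# spread together, so the strong pull further ahead in absolute terms.
# All numbers here are invented for illustration.
skills = [10.0, 20.0, 40.0, 80.0]    # weekend player ... pro
boosted = [s * 1.5 for s in skills]  # the same proportional gain for all

mean_gain = sum(boosted) / 4 - sum(skills) / 4
gap_before = max(skills) - min(skills)   # 70.0
gap_after = max(boosted) - min(boosted)  # 105.0
print(mean_gain, gap_before, gap_after)  # everyone improves, the gap widens
```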
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.1/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n1_lg.jpg" alt="Skeptic 29.1 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.1<br />
		<a href="https://www.skeptic.com/productlink/magv29n1">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/259456#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	What does all this have to do with AI and jobs? In the words of Harvard Business School professor Karim Lakhani, a specialist in how technology is changing the world of work, &#8220;AI won&#8217;t replace humans&#8212;but humans with AI will replace humans without AI.&#8221;<sup><a href="#note11">11</a></sup> Following the Matthew Effect, those who are best at using AI will derive even greater advantage than those less so. So, from a positive-sum perspective, everyone can benefit from greater use of AI as the cost of goods and services decreases while accessibility increases. However, the one good that is always distributed on a zero-sum basis is status, and our evolutionary history has preprogrammed us to be especially concerned about it. Even relative purchasing power will likely become less, not more, equitably distributed, based increasingly on AI skills and abilities.
</p>
<p>
	And yet, there is a silver lining. On the one hand, increased use of artificial intelligence, certainly not as our master, nor even our slave, but increasingly as a very capable partner, will allow us to ensure that the most basic necessities of life can be distributed to all. Faster, better, and cheaper basic needs, education and training, medical care, and even creature comforts will allow us to mitigate the ever-increasing inequalities. Doing so, however, will require a lot of good will and common sense, qualities in which both artificial and human intelligence &#8220;oft do go awry.&#8221; Critical thinking offers an at least partial palliative. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<p>
		The author wishes to thank Jonathan Wai, Matthew Lee, and Gary Marcus, who provided their expertise and answered questions.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/47MiTwe">https://bit.ly/47MiTwe</a>
		</li>
<li id="note02">
			<a href="https://bit.ly/4b2rNsl">https://bit.ly/4b2rNsl</a>
		</li>
<li id="note03">
			<a href="https://bit.ly/425r2uC">https://bit.ly/425r2uC</a>
		</li>
<li id="note04">
			<a href="https://bit.ly/47LD90W">https://bit.ly/47LD90W</a>
		</li>
<li id="note05">
			<a href="https://bit.ly/428VwMm">https://bit.ly/428VwMm</a>
		</li>
<li id="note06">
			<a href="https://bit.ly/3S6B4H1">https://bit.ly/3S6B4H1</a>
		</li>
<li id="note07">
			<a href="https://bit.ly/4b2rVbj">https://bit.ly/4b2rVbj</a>
		</li>
<li id="note08">
			<a href="https://bit.ly/3S3mhNt">https://bit.ly/3S3mhNt</a>
		</li>
<li id="note09">
			<a href="https://bit.ly/47GDaDh">https://bit.ly/47GDaDh</a>
		</li>
<li id="note10">
			<a href="https://bit.ly/48ZQELv">https://bit.ly/48ZQELv</a>
		</li>
<li id="note11">
			<a href="https://bit.ly/3RXi4Le">https://bit.ly/3RXi4Le</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Undercover at the Woo Festival</title>
		<link>https://www.skeptic.com/reading_room/undercover-at-woo-festival/</link>
		
		<dc:creator><![CDATA[Laith Al-Shawaf]]></dc:creator>
		<pubDate>Fri, 07 Jun 2024 07:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[belief]]></category>
		<category><![CDATA[chakras]]></category>
		<category><![CDATA[clairvoyants]]></category>
		<category><![CDATA[confirmation bias]]></category>
		<category><![CDATA[meaning]]></category>
		<category><![CDATA[New Age]]></category>
		<category><![CDATA[psychics]]></category>
		<category><![CDATA[psychology]]></category>
		<category><![CDATA[reiki]]></category>
		<category><![CDATA[religion]]></category>
		<category><![CDATA[self-help]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=44963</guid>

					<description><![CDATA[Laith Al-Shawaf shares his experience attending a New Age festival undercover and exploring the beliefs and practices of the attendees.]]></description>
										<content:encoded><![CDATA[<blockquote>
<p>
		“Isn’t it enough to see that a garden is beautiful without having to believe that there are fairies at the bottom of it too?” <span class="quoteauthor">—DOUGLAS ADAMS</span>
	</p>
</blockquote>
<div class="imagefloatleft" style="width: 220px;">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-1/Laith-Al-Shawaf-aura-card.jpg" alt="AURA CARD" width="200" height="324" class="noBoxShadow">
</div>
<p>
	<span class="dropCap">H</span>ere is a picture of my aura. I had it taken at a woo festival that I attended “undercover” with two goals: learn more about New Age beliefs and annihilate some surplus neurons I no longer needed. Underneath that winky face, resplendent in reds and yellows, is a pseudonym—a <em>nom de bullshit</em>—that I chose for the occasion.
</p>
<p>
	The festival was a two-day event, and the booths were exactly what you’re imagining: psychics, mediums, clairvoyants, tarot readings, chakras, reiki. The whole gamut of New Age stuff. One booth promised <em>quantum spirituality</em>—you can tell that it’s scientific because of the word “quantum,” you see. Another booth offered <em>visionary guidance on your life path</em>. A third promised to combine <em>energy and chakra healing</em> with <em>past life regression</em>. (I passed on that one because my current life is regressed enough as it is, thank you very much).
</p>
<p>
	But back to that winsome aura: one of the festival experts kindly interpreted it for me. You’ll be shocked, I’m sure, to hear that the reading was a meandering three-minute analysis in which I was fed a variety of feel-good platitudes and told that I was going to start a successful and lucrative company. The price tag for this reassuring pablum: $25.
</p>
<p>
	With my future now secure and my pocket considerably lighter, I moved on to the next booth, manned by a guy who specializes in past life readings. A full reading runs you $125, but you can get an abbreviated 40-minute version for $90. I couldn’t imagine sitting there with a straight face, hemorrhaging money as I listened to stories about my past lives for 40 minutes, so I declined and moved on. <span id="more-44963"></span>
</p>
<p>
	Of all the booths at the festival, my favorite was run by a man who looked arrestingly like a wizard. He had a Merlinesque gray beard, flowing white robes, and the world-weariness of a guy whose bones are tired because, obviously, magic is draining and warlocks bear great responsibility. Customers came to him seeking relief from their medical ailments; he cured these by having the patients lie down and waving his hands above their bodies in especially good, healing-y ways. He referred to himself as a literal wizard and called his service <em>Medical Intuitive, Quantum Shaman</em>. (I swear I’m not making this up).
</p>
<h4>
	The Lectures<br />
</h4>
<p>
	The festival included lectures, too, and they looked even more exciting than the booths, so I attended as many of these as I could. My goal was to take notes, ask questions, and learn as much as possible about people’s unusual beliefs.
</p>
<p>
	The first talk was about ancestors, spirits, and messages from other planes of existence. The audience learned that our dead ancestors are constantly sending us messages. Also, animals that cross your path are sending you messages from other realms, and white feathers are special signs from angels. The speaker declared that she had been a Mayan warrior queen in a past life. In another, she had been an Egyptian priestess. In a third, she was one of the first human cave people. As she explained, she accessed these memories of past lives through her dreams because when you dream, you’re actually astral traveling. Perhaps tellingly, she had never been anybody forgettable or uninteresting in any of her past lives.
</p>
<p>
	I was mainly there to observe and learn more about people’s beliefs, the way a non-believer might go to a religious service out of curiosity. But I was also interested in what psychological principles might be at play. (By day, I’m a psychologist who studies human cognition.) The first thing that struck me was that all the messages the presenter received from other realms were confidence-boosting. The missives told her that it was OK to be herself, to drop her anxieties, to strive for what she wanted. For some people, this can be a major motivator for woo-y beliefs: a desire for self-reassurance, for ego security, and for believing that things are going to be OK.
</p>
<p>
	Another couple of themes leapt out quickly: hyperactive pattern recognition and promiscuous meaning-making. Humans are hyperactive pattern detectors, which means we’re prone to seeing patterns even where there are none.<sup><a href="#note01">1</a></sup> One of the festival speakers said that when she sees numbers like 11:11 or 10:10, it’s the spirits reassuring her that she’s on the right path. If she sees pennies or rainbows or hears a certain genre of music, it’s her deceased loved ones watching over her. This seemed to both reassure her and imbue her life with meaning.
</p>
<p>
	The next lecture was on <em>Sound and Harmonic Therapy</em>. The presenter struck bowls of different sizes with a sort of drumstick, producing vibrations that were supposed to cure our health issues. This talk included some real head-scratchers. My favorites were “the whole point of sound is to get the energy from your head to go down to your feet,” which, if you think about it, is a fantastically teleological view of physics, and “harmonic sounds travel as fast as the speed of light.” (They actually travel at the speed of sound, which, in air, is about 880,000 times slower than the speed of light).
</p>
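<p>
	The arithmetic is easy to check, assuming dry air at roughly 20 °C, where sound travels at about 343 m/s:
</p>

```python
c = 299_792_458  # speed of light in vacuum, m/s (exact by definition)
v_sound = 343    # speed of sound in dry air at ~20 °C, m/s (approximate)
print(round(c / v_sound))  # ratio on the order of 874,000
```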
<p>
	The presenter explained that the bowls’ vibrations force your emotions to come up, which helps you get grounded in your legs, and that enables you to make decisions with clarity. I learned that deeper vibrations are better than shallower vibrations and that if you ever get a sound bath, you should always ask “What is my message?” because the bowl will always give you a message. (To be honest, I can’t for the life of me figure out what that means, but I guess if thou seekest meaning from the bowl, the bowl will deliver meaning unto thee. Or something like that.)
</p>
<p>
	The lecture on vibrations was distinct from the first one: the first presentation made no pretense to scientific accuracy, whereas this one was cloaked in a veneer of scientific jargon to make it sound respectable. But it bungled all the key concepts: energy, vibrations, and even <em>harmonic</em>. The presenter said that harmonic sounds relax us, so I asked what “harmonic” meant. The word has a precise formal definition, but all we got was a tautology: harmonic sounds are those that cause relaxation. The best line of all was the speaker’s cartoonishly immodest description of her trade: “I do very deep work.”
</p>
<p>
	On we trudged.
</p>
<p>
	The next presentation was about <em>Divine Source</em>, which turned out to be a tragically underspecified fount of life, divinity, and good stuff at the root of everything. The speaker exhorted us to “ascend into divinity, union, higher calling, purpose, and Source”. She had an obvious strategy: list so many good-sounding words in rapid succession that the audience would be lulled into a stupor, critical thinking faculties suspended, vaguely impressed but unsure why. I think the stupor was part of the point.
</p>
<p>
	Like the presentation before it, this one was sprinkled with science-y words to create the implication of evidentiary grounding and paint a façade of scientific respectability. Listeners got sprinkles of <em>quantum</em>, <em>genome</em>, <em>biological</em>, <em>neural</em>, <em>synapse</em>, and <em>energy</em> adorning a cake of staggering BS. I learned that “The divine being in human form is imprinted with its true purpose,” a profoundly vacuous string of words reminiscent of the deliciously random nonsense churned out by the Wisdom of Deepak Chopra generator.<sup><a href="#note02">2</a></sup> We also learned that “The divine being is the being that is constructed out of light from the Source in the 5th-dimensional realm,” and that “the Earth is a portal for all 5 dimensions.” Rule of thumb: just take the number of dimensions in conventional physics and add one. This makes it sound as if you’re saying something groundbreaking (or at least tantalizingly mysterious), like you might just be wrapped in a shroud of secret esoteric knowledge. My favorite meaningless quote in the deluge of imponderables was probably “You’re creating a conscious energetic polarity with your own being.” You better believe I am!
</p>
<p>
	I had questions—and decided that I was masochistic enough to read more of this word salad to try to get some answers. But read what? I raised my hand and asked the speaker how she acquired all this knowledge and what I could read to deepen my learning. Her answer took me by surprise: “<em>I was guided not to take on the teachings of others so that my consciousness remains pure</em>,” she said. Translated into plain English: all the knowledge is simply within her; she didn’t learn it from anyone or read it anywhere.
</p>
<h4>
	Psychology at the Festival of Woo<br />
</h4>
<p>
	A few key psychological principles shone through at the woo festival. The first is our irrepressible human tendency to “see” patterns even where there are none. Humans are meaning-hungry creatures;<sup><a href="#note03">3</a></sup> we constantly yearn for and seek meaning. We’re equipped with these pattern-detecting and meaning-making propensities because detecting patterns was crucial in avoiding threats and availing oneself of opportunities during the evolution of our species.<sup><a href="#note04">4</a></sup> And it was probably more dangerous to <em>fail</em> to notice a pattern that <em>was</em> there than to “see” a pattern that <em>wasn’t</em> there, so we evolved a cognitive bias toward the less dangerous of the two errors<sup><a href="#note05">5</a></sup>—“seeing” patterns even where there aren’t any. This is why we’re so prone to false positives, and why we often “detect” patterns out of randomness and coincidence. And although this cognitive bias is adaptive on average and evolved for a reason, it can often lead us astray.<sup><a href="#note06">6</a></sup>
</p>
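<p>
	The less-dangerous-error logic can be put in expected-cost terms. In this toy calculation, every probability and cost is invented purely for illustration:
</p>

```python
# Toy error-management calculation for the bias described above: when a miss
# is far costlier than a false alarm, overreacting is the cheaper policy.
# All of the probabilities and costs here are invented for illustration.
p_real = 0.05           # chance the apparent pattern reflects a real threat
cost_miss = 100.0       # cost of ignoring a threat that was real
cost_false_alarm = 1.0  # cost of reacting to a pattern that wasn't there

# Expected cost per encounter of each blanket policy:
always_react = (1 - p_real) * cost_false_alarm
never_react = p_real * cost_miss
print(round(always_react, 2), round(never_react, 2))  # the hair-trigger wins
```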
<p>
	We have a related tendency to project meaning onto ambiguous situations that can be interpreted in different ways. A combination of promiscuous meaning-seeking and self-centeredness tricks us into thinking that the universe is speaking to us when it’s really just events causing other events. I’m sorry to say it, but the universe has no message for us nor any particular concern for us.
</p>
<p>
	Self-help was another important theme that reared its head repeatedly. Many of the beliefs at the festival were geared toward reducing people’s feelings of uncertainty, encouraging them to remove self-imposed shackles and pursue their dreams.
</p>
<p>
	A fourth important theme was religion—but maybe not in the way you’d expect. Many of the attendees and presenters seemed religiously inclined in their epistemological disposition and cognitive style, and in their evidentiary requirements for belief. But they were resistant, or even hostile, to what they saw as the rigid and constraining doctrines of the monotheistic Abrahamic religions. That combination pervaded the festival. To an outside observer, it looked a lot like a religious framework for people who wanted nothing to do with traditional religions.
</p>
<p>
	The fifth key theme was pervasive confirmation bias.<sup><a href="#note07">7</a></sup> People sought confirmatory rather than disconfirmatory evidence for their hypotheses, and they required much thinner evidence for belief than for refutation. Since most humans fall prey to confirmation bias,<sup><a href="#note08">8</a></sup> this one isn’t all that surprising. But the bias does vary in strength across individuals, and it was in full force at the festival.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.1/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n1_lg.jpg" alt="Skeptic 29.1 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.1<br />
		<a href="https://www.skeptic.com/productlink/magv29n1">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/259456#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	While those five themes—pattern detection, meaning making, self-help, religion, and confirmation bias—were the most important and pervasive, I’m not suggesting this is a comprehensive analysis of the psychology underlying woo beliefs. The point here is to relay some experiences I had at the festival and share some of the psychological principles at play.
</p>
<p>
	If the experience sounds interesting, you might consider going to one or two of these events and chatting with people. You can even provide gentle, civil pushback, and see where the conversation goes. What are the attendees’ evidentiary requirements for belief? How deeply have they considered the alternatives? What do they think about confirmation bias? You and your interlocutor will probably both learn something about your fellow humans.
</p>
<p>
	And who knows—if you’re lucky enough, you might even ascend to a higher vibration of love and light, where the healing is quantum, the energy is rarefied, and sound moves at the speed of light. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Laith Al-Shawaf</strong> is an Associate Professor in the Department of Psychology at the University of Colorado, Colorado Springs. Before moving to the U.S., he was a Visiting Fellow at the Institute for Advanced Study in Berlin and a researcher and professor in Turkey. His empirical research is focused on human emotion, with additional emphases on cognition and personality. His popular science essays for <em>Nautilus</em>, <em>Areo</em>, and <em>Psychology Today</em> have been translated into several languages. He is the primary editor of <em>The Oxford Handbook of Evolution and the Emotions</em>, and he has won awards for both his teaching and research.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/47m4PsZ">https://bit.ly/47m4PsZ</a>
		</li>
<li id="note02">
			<a href="https://bit.ly/48ygqpV">https://bit.ly/48ygqpV</a>
		</li>
<li id="note03">
			<a href="https://bit.ly/3vgm8yx">https://bit.ly/3vgm8yx</a>
		</li>
<li id="note04">
			Mattson, M.P. (2014). Superior Pattern Processing Is the Essence of the Evolved Human Brain. <em>Frontiers in Neuroscience</em>, 265.
		</li>
<li id="note05">
			Haselton, M.G., &#038; Nettle, D. (2006). The Paranoid Optimist: An Integrative Evolutionary Model of Cognitive Biases. <em>Personality and Social Psychology Review, 10</em>(1), 47–66.
		</li>
<li id="note06">
			<a href="https://bit.ly/3vgmaq9">https://bit.ly/3vgmaq9</a>
		</li>
<li id="note07">
			Stanovich, K. E., West, R. F., &#038; Toplak, M. E. (2013). Myside bias, rational thinking, and intelligence. <em>Current Directions in Psychological Science, 22</em>(4), 259-264.
		</li>
<li id="note08">
			<a href="https://bit.ly/3vhVJAo">https://bit.ly/3vhVJAo</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Can We Trust AI to Make Decisions?</title>
		<link>https://www.skeptic.com/reading_room/can-we-trust-artificial-intelligence-to-make-decisions/</link>
		
		<dc:creator><![CDATA[Urs Gasser & Viktor Mayer-Schönberger]]></dc:creator>
		<pubDate>Fri, 31 May 2024 19:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[machine learning]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=44935</guid>

					<description><![CDATA[A discussion of the evolution of AI from rule-based systems to data-driven machine learning, highlighting its advantages in adaptability and efficiency. Over-reliance on AI may stifle innovation and diversity in decision-making, leading to a rigid and less adaptive system. The authors emphasize the importance of human imagination and experimentation in driving progress and avoiding a decisional monoculture.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">M</span>achine-based decision-making is an interesting vision for the future: Humanity, crippled by its own cognitive deformations, tries to improve its lot by opting to outsource its decisions to adaptive machines—a kind of mental prosthetic.
</p>
<p>
	For most of the twentieth century, artificial intelligence was based on representing explicit sets of rules in software and having the computer “reason” based on these rules—the machine’s “intelligence” involved applying the rules to a particular situation. Because the rules were explicit, the machine could also “explain” its reasoning by listing the rules that prompted its decision. Even if AI had the ring of going beyond the obvious in reasoning and decision-making, traditional AI depended on our ability to make explicit all relevant rules and to translate them into some machine-digestible representation. It was transparent and explainable, but it was also static—in this way, it did not differ fundamentally from other forms of decisional guardrails such as standard operating procedures (SOPs) or checklists. The progress of this kind of AI stalled because in many everyday areas of human activity and decision-making, it is exceptionally hard to make rules explicit.
</p>
<p>
	In recent decades, however, AI has been used as a label for something quite different. The new kind of AI analyzes training data in sophisticated ways to uncover patterns that represent knowledge implicit in the data. The AI does not turn this hidden knowledge into explicit and comprehensible rules, but instead represents it as a huge and complex set of abstract links and dependencies within a network of nodes, a bit like neurons in a brain. It then “decides” how to respond to new data by applying the patterns from the training data. For example, the training data may consist of medical images of suspected tumors, and information about whether or not they in fact proved to be cancerous. When shown a new image, the AI estimates how likely that image is to be of a cancer. Because the system is learning from training data, the process is referred to as “machine learning.” <span id="more-44935"></span>
</p>
<p>
	Such data-driven AI offers two important advantages over conventional AI. First, humans no longer have to make rules explicit to feed into the system. Instead, rules emerge from the training data. Alex Davies, author of <em>Driven</em>, a book about machine learning and self-driving cars, puts it succinctly: in this new paradigm “the computer gets lessons, not laws.” That means we can use such AI for the kind of everyday knowledge that’s so difficult to capture with explicit rules.
</p>
<p>
	The second advantage—which is even greater, in this context—is that because rules are derived from training data, they don’t have to be fixed. Instead, they can be adapted as more (and newer) training data is used. This should prevent the stiffening that lessens the effectiveness of many decisional guardrails as times change. It enables looking at patterns not only from the past but also from the present to deduce rules that can be applied to decisions in the future. It has, in other words, a built-in mechanism of updating rules.
</p>
<p>
	Advocates suggest that we should incentivize the use of machine learning in an ever-increasing number of contexts, and even mandate it—much like collision warning systems have become obligatory in commercial aviation. While this might sound dramatic, the change may actually be more gradual. In many instances in our daily lives, we already have machines making decisions for us, from the relatively simple—such as an airbag deploying in a car crash—to the more sophisticated, such as Siri selecting music on our smartphone. And we profit from it: Machines aren’t as easily derailed by human biases; they perform consistently, irrespective of their emotional state. They also act efficiently—capable of doing so within a split second and at relatively low cost.
</p>
<p>
	The central idea of data-driven decision guidance is that past experiences can be employed to decide well in the present. That works when the world doesn’t change—not the circumstances in which we must decide, nor the goals we want to attain through our decisions. Hard-coded rules are a poor fit for times of change; in theory, this is where data-driven AI should be able to shine. If a situation changes, we should be able to add more training data that reflect the new situation. However, there is a flaw in this line of reasoning.
</p>
<p>
	Autonomous driving company Waymo illustrates the argument—and the flaw. For years, Waymo has had hundreds of cars roam the roads in the United States, collecting enormous heaps of data on roads, signage, conditions, weather, and the behavior of drivers. The data were used to train Waymo’s AI system, which then could drive autonomously. These cars were the guinea pigs for the Waymo system. Mistakes observed (including by their own drivers) in turn helped the Waymo system learn to avoid them. To identify the best driving behavior for any given circumstance, such a system needs not only data about a wide variety of situations, but also data about the outcomes of many different decisions made by drivers in each situation. Learning is richest when there is sufficient variability in the training data, so the system can deduce what works best in which conditions. To get diverse training data, Waymo needs to capture drivers making a variety of choices.
</p>
<p class="pullquote">
	The more we use data-driven machine learning to make decisions, the more it will take the variability of decisions out of the data and shed its ability to progress.
</p>
<p>
	Because Waymo never stopped collecting training data, even small changes in circumstances—such as in driving laws and resulting driving behavior—were reflected in the data collected and eventually embedded in the Waymo system. It was a machine that not only learned once, but never stopped learning.
</p>
<p>
	However, let’s imagine a world in which we increasingly rely on machines when making decisions. The more machines shape our choices, the more these decisions will become the only source of training data for ongoing machine learning. The problem is that data-driven machine learning does not experiment; it acts based on the best practice it has deduced from data about previous decisions. If machines begin to learn more from choices we made based on their recommendations, they will amplify their own, conservative solutions.
</p>
<p>
	Over time, this will narrow and drown out behavioral diversity in the training data. There will not be enough experimentation represented in it to enable the machines to adjust to new situations. This means data-driven machine learning will lose its single most important advantage over explicit rule-based systems. We will end up with a decisional monoculture that’s unable to evolve; we are back to fixed decisional rules.
</p>
<p>
	The flaw is even bigger and more consequential than not being able to adjust to changed circumstances. Even if reality doesn’t change, we may miss opportunities to improve our decision-making in the future. Many innovations that end up becoming successful are less useful than existing choices in their initial form. But any new decision options emerging from the training data will likely only be adopted if they yield better results than existing choices straight away. This closes off any opportunity to experiment with promising new ideas.
</p>
<p>
	For example, the first steam engines used far more energy than they could translate into motion and power. If a machine had compared them to the existing solution of using horses for power, it would have discarded the idea of steam power right away. The only reason the steam engine succeeded is because stubborn humans thought that they could improve the invention in the long run and stuck with it. These tinkerers had no data to support their confidence. They just imagined—and kept tinkering.
</p>
<p>
	Of course, most such would-be innovators fail over time. The path of progress is paved with epitaphs to dogged tinkerers following crazy ideas. Occasionally, though, small changes accumulate and lead to a breakthrough—a far more optimal decision option. Modern societies have permitted tinkering to persist, though it is almost always unproductive, even destructive, in the short term—because of the slight chance of a big payoff sometime in the future.
</p>
<p>
	Data-driven machine learning, if widely utilized, would discard initially suboptimal inventions. But in doing so, it would forego the possibility of long-term breakthroughs. Machines can learn only from what already exists. Humans can imagine what does not yet exist but could. Where humans invented steam power, data-driven machine learning would instead have found more and more efficient ways to use horse power.
</p>
<p>
	Human dreaming can go far beyond technical novelties. Our ancestors once dreamed of a world in which slavery is abolished; women can vote; and people can choose for themselves whom to marry and whether to have children. They imagined a world in which smallpox is extinct and we vaccinate against polio. And they worked to make those dreams come true. If they had looked only at data from their past and present, none of these dreams would have been realized.
</p>
<p>
	Decisional guidelines, from SOPs to nudges, emphasize constancy. Traditional education, too, often aims to perpetuate—suggesting there is a right answer for decisions much like for math problems. But decisional guidelines are just that—suggestions that can be disobeyed if one is willing to take the risk (and shoulder the responsibility). For eons, young people have frequently revolted against their parents and teachers, pushed back against the old, the conventional and predictable, and embraced instead not just the original and novel, but the still only imagined. Humans continue to dream—of a world, for example, that will warm by less than two degrees, or in which people have enough to eat without depleting the planet.
</p>
<p>
	In contrast to humans, machine decision-making is optimized toward consistency across time. Even if data-driven machine learning has access to the very latest data, it will still limit our option space. It will always choose a more efficient way to travel along our current path, rather than try to forge a new one. The more we use it to make decisions, the more it will take the variability of decisions out of the data and shed its ability to progress. It will lead us into vulnerability, rigidity, and an inability to adapt and evolve. In this sense, data-driven machine learning is an adulation of immutability, and anathema to imagination.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.1/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n1_lg.jpg" alt="Skeptic 29.1 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.1<br />
		<a href="https://www.skeptic.com/productlink/magv29n1">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/259456#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	No technological adjustment can remedy this easily. If we want to increase diversity in the data, we will need variability in machine decisions. By definition, this means machines that make suboptimal choices. But the entire argument for using more AI in our decision-making is premised on AI’s ability to suggest better choices consistently across space and time. In many instances, it would not be societally palatable to deliberately introduce variation into what options a machine picks, thereby increasing the near-term risk of bad decisions in the hope of long-term benefits. And even if it were, it would not necessarily produce the experimentation we hope for. Very often, the theoretical decision space is immense. Randomly iterating through decision options to generate the diverse data necessary would take a very long time—far too long in most instances to help in timely decision-making. Even when iterations are non-random and can be done purely digitally, it would require massive computing resources.
</p>
<p>
	In contrast, when humans experiment, they rarely decide randomly; instead, they use mental models to imagine outcomes. Done correctly, this can dramatically narrow the decision space. It’s that filtering based on cognitive modeling that differentiates human experimentation in decision contexts from the random walk that the machine, in the absence of a mental model, has to employ. And if machines were to use a particular mental model, the resulting data would be constrained again by the limitations of that model. A diverse set of humans experimenting using diverse mental models is simply very hard to beat. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<p>
		This essay was excerpted and adapted by the authors from their book <em>Guardrails: Guiding Human Decisions in the Age of AI</em>. Copyright © 2024 by Princeton University Press.
	</p>
<h5>
		About the Authors<br />
	</h5>
<p>
		<strong>Urs Gasser</strong> is professor of public policy, governance, and innovative technology and dean of the School of Social Sciences and Technology at the Technical University of Munich. He is the author of <em>Born Digital: How Children Grow Up in a Digital Age</em>.
	</p>
<p>
		<strong>Viktor Mayer-Schönberger</strong> is professor of internet governance and regulation at the University of Oxford. He is the author of <em>Delete: The Virtue of Forgetting in the Digital Age</em>.
	</p>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Is AI a Threat?</title>
		<link>https://www.skeptic.com/reading_room/is-artificial-intelligence-a-threat/</link>
		
		<dc:creator><![CDATA[Marc Defant]]></dc:creator>
		<pubDate>Fri, 24 May 2024 07:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[bias]]></category>
		<category><![CDATA[chatgpt]]></category>
		<category><![CDATA[ethics]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=44914</guid>

					<description><![CDATA[Marc Defant discusses the potential threats and capabilities of artificial intelligence, particularly focusing on OpenAI's GPT-4 and its advanced abilities.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">D</span>uring the media blitz over the potential threat now posed by artificial intelligence, I asked OpenAI’s ChatGPT “Is the technological singularity coming?” Here is the response:
</p>
<p class="Attention">
	<strong>ChatGPT</strong>: The coming technological singularity is a hypothetical scenario in which artificial intelligence surpasses human intelligence and becomes capable of self-improvement and innovation at an exponential rate. Some experts predict that this could happen by the end of this decade, while others are more skeptical. The implications of such a radical transformation are unknown and potentially unpredictable, posing both opportunities and challenges for humanity. Some possible outcomes include the emergence of a post-human era, the extinction of biological life, or the coexistence of diverse forms of intelligence.
</p>
<p>
	There is no doubt that ChatGPT is an excellent tool, and now OpenAI has produced a next generation AI called GPT-4 (Generative Pre-trained Transformer 4). GPT-4 is astonishing in the sense that it can eloquently respond to questions, generate digital images from natural language descriptions, and analyze data. According to OpenAI, GPT-4 “exhibits human-level performance on various professional and academic benchmarks.”
</p>
<p>
	In response to that high level of performance, a number of highly intelligent people have suggested that this means we have reached the singularity moment, or at the very least that this level of AI passes the Turing Test of artificial intelligence, achieving human-level natural intelligence. Some, most notably Elon Musk, Steve Wozniak, Andrew Yang, and Yuval Noah Harari, went even further and issued an open letter<sup><a href="#note01">1</a></sup> calling on “all labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4… Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” The statement even called for the involvement of government and politicians if AI companies do not voluntarily pause. In response, the U.S. House of Representatives Science Committee invited AI companies to respond to questions about the technology and related risks. <span id="more-44914"></span>
</p>
<p>
	This letter is not the first time public figures or researchers have warned us. In 2005, futurist Ray Kurzweil, in his book <em>The Singularity Is Near</em>, predicted the singularity would occur by 2045. Elon Musk recently predicted that machines would overtake us by 2025. Sam Harris was so vexed by the future progress of artificial intelligence that he declared himself certain that AI would eventually destroy us.<sup><a href="#note02">2</a></sup> He opined that AI may not intentionally do so, but rather might eliminate us the way we might annihilate ants during, say, the construction of a building. Google engineer Blake Lemoine famously claimed that the Google chatbot with which he was conversing was sentient.<sup><a href="#note03">3</a></sup> (In response, Google fired him, which was probably the wrong thing to do as it only generated conspiracy theories.)
</p>
<p>
	Were those warnings of impending doom not enough, artificial intelligence researcher Eliezer Yudkowsky wrote an opinion editorial for <em>Time</em> magazine in which he argued that the open letter did not go far enough:
</p>
<blockquote>
<p>
		Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen… Shut it all down.”<sup><a href="#note04">4</a></sup>
	</p>
</blockquote>
<p>
	One of the research papers cited in the open letter was released by Microsoft.<sup><a href="#note05">5</a></sup> I am simply astonished by what GPT-4 can accomplish based on the tests the Microsoft team conducted over a period of six months. Here is a sampling of what is detailed in the paper:
</p>
<ul>
<li>
		GPT-4 can generate 2D or 3D images from a detailed text prompt.
	</li>
<li>
		When GPT-4 was asked to produce a short tune, the tune had a melody and a repetitive rhythm (however, GPT-4 did not seem to understand harmony).
	</li>
<li>
		GPT-4 can write software code at a human level from text prompts. “GPT-4 could potentially be hired as a software engineer.” (I have a friend who is a senior-level computer engineer. He has used GPT-4 to write sophisticated code.)
	</li>
<li>
		GPT-4 can create complex 3D video games.
	</li>
<li>
		GPT-4 was able to produce a correct proof from a question asked in the 2022 International Mathematical Olympiad (top high school students from all over the world compete to solve questions in a set period of time—some students are unable to answer some of the questions).
	</li>
<li>
		GPT-4 can solve Fermi questions. These are physics questions where the answer must be estimated because the quantity is difficult or even impossible to measure directly. For example: How many times does the average person’s heart beat in their lifetime?
	</li>
<li>
		GPT-4 can function as a personal assistant. “GPT-4 uses the available APIs [application programming interface] to retrieve information about the user’s calendar, coordinate with other people over email, book the dinner, and message the user with the details.”
	</li>
<li>
		GPT-4 “is able to use the tools with very minimal instruction and no demonstrations, and then make use of the output appropriately [e.g., it knows to use a calculator when needed].” (They continue by stating that it is an emergent capability.)
	</li>
<li>
		GPT-4 can serve as a virtual real world problem solver. “GPT-4 guides the human to find and fix a water leak and recommends the exact actions that the human took (after replacing the seal the leak was gone).”
	</li>
</ul>
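<p>
	The heartbeat Fermi question above can be answered with back-of-the-envelope arithmetic. A minimal sketch, using illustrative round numbers (about 70 beats per minute and a 75-year lifespan) rather than measured data:
</p>

```python
# Fermi estimate: how many times does an average heart beat in a lifetime?
# The inputs are deliberately rough round numbers, as Fermi problems demand.
beats_per_minute = 70            # assumed average resting rate
minutes_per_year = 60 * 24 * 365
lifespan_years = 75              # assumed lifespan

lifetime_beats = beats_per_minute * minutes_per_year * lifespan_years
print(f"Roughly {lifetime_beats:.1e} beats")  # on the order of a few billion
```

<p>
	The point of a Fermi problem is the order of magnitude, not the precision: almost any reasonable choice of inputs lands in the low billions.
</p>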
<div class="imageclearall">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-1/GPT-4-Understanding-Beliefs.png" alt="The Sally-Anne false-belief test (adapted from Bubeck et al., 2023)" width="750" height="372" class="noBoxShadow" style="display: block;" /></p>
<p class="caption">
		<strong>Figure 1.</strong> The Sally-Anne false-belief test (adapted from Bubeck et al., 2023).
	</p>
</div>
<p>
	The Microsoft team attempted to ascertain whether GPT-4 had a theory of mind—the recognition that sentient beings have thoughts, emotions, intentions, etc. It includes not only understanding the state of someone else’s mind, but also reasoning about how one person might react to another’s mental state. The Sally-Anne false-belief test from psychology (see Figure 1, above) was one of a number of tests given to GPT-4. The researchers concluded:
</p>
<blockquote>
<p>
		Our findings suggest that GPT-4 has a very advanced level of theory of mind. While ChatGPT also does well on the basic tests, it seems that GPT-4 has more nuance and is able to reason better about multiple actors, and how various actions might impact their mental states, especially on more realistic scenarios.
	</p>
</blockquote>
<h4>
	How Deep Learning Works<br />
</h4>
<p>
	One major goal during the Cold War of the 1960s was to develop software for the intelligence community that could translate Russian into English. Many millions of dollars were spent trying to achieve what appeared to be a reasonable goal. In his book <em>Human Compatible: Artificial Intelligence and the Problem of Control</em>, computer scientist Stuart Russell points out that the early AI bubble burst when the incipient machine translations did not live up to expectations. Not only were computers not powerful enough, but the programming attempted to create a <em>massive</em> number of linguistic logic rules. Anyone who has tried to learn another language knows the problems encountered through direct word-for-word translations. Subtle and changing nuances become very important. No simple programming rules work for all the different sentences encountered. “Inflexible robotic rules” are not up to the task.
</p>
<p>
	As computers became capable of storing and processing massive amounts of data and the Internet gave access to extensive sources of information, machine learning came to the rescue. Machine learning is simply a method of scanning the available data to learn. “Learn” is a tricky word; it evokes, for example, the way we might work through a book to understand how calculus uses limits. However, the machine is not learning in the way we do. Take translation as an example. In the early 2000s, statistical machine translation (SMT) was developed, in which computers analyzed millions of translated words, phrases, and sentences to find statistical patterns for how unknown text should be translated using similar structures. The statistical approach ranked the output translations and chose the best fit. The results were acceptable and sometimes excellent. Nonetheless, the computer had no idea what a word was. It simply used statistics to produce output that best fit the models.
</p>
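<p>
	The statistical approach can be caricatured in a few lines of code. This sketch, with an invented toy phrase table rather than real SMT machinery, picks whichever translation co-occurred most often in parallel training text, which is all the “ranking” amounts to:
</p>

```python
from collections import Counter

# Toy flavor of statistical machine translation: rank candidate
# translations of a source phrase by how often each appeared in
# (invented) parallel training data, and keep the best-scoring one.
parallel_phrases = [
    ("bank", "riverbank"),
    ("bank", "financial institution"),
    ("bank", "financial institution"),
    ("bank", "financial institution"),
]

phrase_counts = Counter(parallel_phrases)

def translate(source):
    # The system never knows what a word means; it only compares counts.
    candidates = {tgt: n for (src, tgt), n in phrase_counts.items() if src == source}
    return max(candidates, key=candidates.get)
```

<p>
	Here <code>translate("bank")</code> returns “financial institution” simply because that pairing was seen more often, echoing the point that the computer “had no idea what a word was.”
</p>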
<p>
	In 2016, Google switched to neural machine translation (NMT) stating: “this change addressed the need for few engineering and design choices while increasing accuracy and speed.”<sup><a href="#note06">6</a></sup> NMT is quite different from SMT, because it does not use a system based on phrases mapped to the targeted language. The neural network handles an entire sentence as it moves through the system. The artificial neural network (ANN) loosely resembles a human brain in the sense that there are interconnected nodes, just as a brain has neurons and synapses. (It should be noted that neural networks only <em>resemble</em> brains. We still possess very little idea of how the actual brain works.)
</p>
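<p>
	The “interconnected nodes” can be made concrete with a toy network. This sketch uses arbitrary placeholder weights, not anything trained, to show how knowledge is stored as numbers on links between nodes rather than as explicit rules:
</p>

```python
import math

def sigmoid(x):
    # Squashes any number into the range (0, 1), loosely like a neuron "firing".
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Each hidden node sums its weighted inputs and applies the nonlinearity.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # The output node does the same over the hidden activations.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Arbitrary illustrative weights; in a real system these values are
# learned from training data, which is why the result is a "black box".
hidden_weights = [[0.5, -1.2], [1.1, 0.3]]
output_weights = [0.7, -0.4]
score = forward([1.0, 0.5], hidden_weights, output_weights)
```

<p>
	Nothing in those numbers is a human-readable rule, which is precisely why engineers cannot point to where any particular “decision” lives inside the network.
</p>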
<p>
	If all this sounds complicated, here’s the bottom line: there are hidden layers in this training process and software engineers cannot be certain what specifically goes on while it happens. It is not surprising that many refer to the process as a black box.
</p>
<h4>
	The Contrarian View<br />
</h4>
<p>
	In his 2023 book on the threat of AI, <em>Smart Until It’s Dumb: Why Artificial Intelligence Keeps Making Epic Mistakes (And Why the AI Bubble Will Burst)</em>, AI engineer Emmanuel Maggiori used an example of a startup company developing a robot that “walks” around a city. Think of the process a human goes through when crossing a street. In a split second we rationally make choices:
</p>
<ul>
<li>
		“If there are no cars on the street, cross over.”
	</li>
<li>
		“If the closest car is far away and driving slowly, cross over.”
	</li>
<li>
		“If the pedestrian crosswalk signal is red, wait.”
	</li>
</ul>
<p>
	Translating these “if” statements into the hypothetical robot program for crossing a road might look something like:
</p>
<blockquote>
<p>
		If distance to closest car on road &lt; 100 feet, then wait; otherwise, if speed of closest car on road &lt; 20 mph, then cross over; otherwise, wait.
	</p>
</blockquote>
<p>
	The inputs are run through a series of “if-otherwise” conditions to reach decisions. What machine learning did to enhance the logic was dispense with thousands of rules by replacing them with a generalized template containing blanks (shown as italicized words below):
</p>
<blockquote>
<p>
		If <em>some input</em> &lt; <em>some number</em>, then <em>some recommendation</em>; otherwise, if <em>some input</em> &lt; <em>some number</em>, then <em>some recommendation</em>; otherwise, <em>some recommendation</em>.
	</p>
</blockquote>
<div class="imagefloatleft" style="width: 310px;">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/29-1/idealized-dataset.png" alt="" width="300" height="186" class="noBoxShadow"></p>
<p class="caption">
		<strong>Table 1.</strong> An idealized dataset to fill in the blanks of a hypothesized robot program (after Maggiori).
	</p>
</div>
<p>
	The copious “if-otherwise” rules disappear, allowing the computer to fill in the blanks automatically. And where does the computer get the information to fill in the blanks? A dataset (Table 1, above).
</p>
<p>
	In a real-time setting, it would require a massive database with thousands of rows and many columns allowing the computer to fill in the blanks. Maggiori put it this way:
</p>
<blockquote>
<p>
		By trying many input/number/recommendation combinations in a systematic way, the computer identifies the most promising ones and fills in the blanks in the template with them. This is called training or learning.
	</p>
</blockquote>
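<p>
	Maggiori’s fill-in-the-blanks description can be made concrete in a few lines of code. The sketch below is my illustration, not Maggiori’s actual program; the dataset and the candidate cutoff ranges are invented. It “trains” the road-crossing template by systematically trying number combinations against an idealized dataset:
</p>

```python
# Illustrative sketch (not Maggiori's actual program): "training" as a
# systematic search over number combinations that best fit a dataset.
# The dataset and cutoff ranges below are invented.

# Idealized dataset: (distance to closest car in feet, its speed in mph,
# correct action)
dataset = [
    (150, 10, "cross"),
    (150, 30, "wait"),
    (50, 10, "wait"),
    (200, 15, "cross"),
    (80, 15, "wait"),
]

def template(distance_cutoff, speed_cutoff, distance_ft, speed_mph):
    """If distance < blank, wait; otherwise if speed < blank, cross over;
    otherwise wait."""
    if distance_ft < distance_cutoff:
        return "wait"
    if speed_mph < speed_cutoff:
        return "cross"
    return "wait"

best, best_score = None, -1
for d_cut in range(0, 201, 10):        # candidate numbers for blank 1
    for s_cut in range(0, 51, 5):      # candidate numbers for blank 2
        score = sum(template(d_cut, s_cut, d, s) == action
                    for d, s, action in dataset)
        if score > best_score:
            best, best_score = (d_cut, s_cut), score

print(best, best_score)  # the filled-in blanks that best match the data
```

<p>
	The combination that best matches the data becomes the “learned” rule: no understanding required, just search.
</p>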
<p>
	In more advanced AI, such as deep learning, the principles remain the same, though the templates vary somewhat. In neural networks, the rules take the form “If weighted sum of inputs &gt; some value” rather than the “If <em>some input</em> &lt; <em>some number</em>” of the example above, and in deep learning the template consists of millions of such operations. These are organized in a special, problem-specific way to help the system learn useful data manipulations such as image filtering or, in the case of GPT-4, word transformation and contextualization.
</p>
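<p>
	The “weighted sum of inputs” rule can be illustrated with a single artificial unit. This minimal sketch is for illustration only, with invented numbers:
</p>

```python
# Minimal sketch of a single artificial "neuron": the rule is
# "if weighted sum of inputs > some value" rather than
# "if some input < some number". Numbers are invented for illustration.

def unit(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# 2.0*0.5 + (-1.0)*1.0 = 0.0, which is not > 0.0, so the unit outputs 0.
print(unit([0.5, 1.0], [2.0, -1.0], 0.0))
# A deep network is millions of such units stacked in layers; "training"
# searches for the weights, just as blanks in a template are filled in.
```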
<p class="pullquote">
	There are hidden layers in this training process and software engineers cannot be certain what specifically goes on while it happens. Many refer to the process as a black box.
</p>
<p>
	Although the programs can be quite complex in the ways in which they are constructed and interact, the clear takeaway, according to Maggiori, is that the computer has no freedom outside of the templates/programs. The principle behind machine learning is exceedingly simple.
</p>
<p>
	Then there is reinforcement learning, a variant of machine learning in which the computer generates its own dataset, as in Table 1, by experimenting with random decisions and statistically analyzing the results. For example, Google began its autonomous vehicle work in 2009, generating massive databases as its cars (with human drivers to assure safety) clocked thousands of hours to “learn” the rules of the road. Maggiori emphasizes that these programs are still governed by guardrails; that is, they are “governed by human assumptions.”
</p>
<p>
	Claims by many AI researchers that machines “teach themselves” are grossly exaggerated. The machine’s ability to learn is limited to the parameters and data available through human input. Take, for example, what AlphaZero (a computer program developed by artificial intelligence company DeepMind to master the games of chess, Shogi, and Go) did when learning to play the game of Go. The dataset was generated automatically by simply having the computer play itself in thousands of games. However, the definition of who won and the parameters of the board were constructed by humans.
</p>
<p>
	I don’t want to underestimate the accomplishments of convolutional neural networks (CNNs) or AI in general. Even when operating under human-imposed limitations, machines can sometimes come up with useful rules that humans have missed. The AlphaZero network determined moves good enough to beat the best players in the world. The aforementioned Stuart Russell suggested that a computer might be able to disable its “off-switch”:
</p>
<blockquote>
<p>
		Suppose a machine has the objective of fetching the coffee. If it is sufficiently intelligent, it will certainly understand that it will fail in its objective if it is switched off before completing its mission. Thus, the objective of fetching coffee creates, as a necessary subgoal, the objective of disabling the off switch.
	</p>
</blockquote>
<p>
	However, as Maggiori pointed out, “why would anyone include the action ‘disable the off-switch’ as part of the available actions to try out for coffee delivery.” Such an action is simply outside the scope of machine learning: “Even if the action was allowed, the stars would have to align for the computer to ever try out that action and measure a significant positive impact in the efficiency of coffee delivery.”
</p>
<p>
	What about deep learning with all of those hidden layers and the lack of specific information about how the neural network processes the data? Although we don’t know the details of the specific combination of filters, according to Maggiori we do know the neural network has limitations. Here is what he had to say about the filtering process in the neural network:
</p>
<blockquote>
<p>
		The training process starts with a completely random set of filters, so the initial model is generally useless. Afterward, it starts altering the filters progressively to find promising improvements. This is akin to an appointment with the eye doctor who tries out several glasses’ prescriptions, changing them little by little until finding the one you’re most comfortable with. But the process is much lengthier and more chaotic.
	</p>
</blockquote>
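<p>
	The eye-doctor analogy describes an iterative search: start random, keep the small alterations that improve the fit. A minimal sketch of that idea follows; it is illustrative only (real training adjusts millions of parameters using gradients, and the target value here is invented):
</p>

```python
# Sketch of "start random, alter progressively": hill climbing on a single
# parameter, standing in for how training nudges filter values toward a
# better fit. Real training adjusts millions of parameters via gradients;
# the target value here is invented.
import random

target = 0.7                 # the "right prescription"

def error(w):
    return (w - target) ** 2

random.seed(0)
w = random.random()          # a completely random starting filter
for _ in range(1000):
    candidate = w + random.uniform(-0.05, 0.05)  # small alteration
    if error(candidate) < error(w):              # keep promising changes
        w = candidate

print(round(w, 3))           # ends up close to the target
```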
<p>
	Unfortunately, because AI researchers cannot explain all the specific ways the hidden layers operate on the data, there is no guarantee what the network will spit out. Thus, the silly mistakes journalists enjoy finding in systems like ChatGPT and GPT-4, as when Maggiori asked GPT-3 “Who was the president of the UK last year?” The response was “The president of the UK was not elected last year.” As Maggiori wisely emphasizes, making changes for each mistake only seems to create additional mistakes. This is not so serious for word translations but can be tragic with autonomous vehicles.
</p>
<p>
	You have probably heard of the deaths related to autonomous driving that reveal how easily AI can be fooled. In several cases, road signs were slightly altered and the autonomous vehicles failed to recognize what was meant. Many of us have seen the effect graffiti has on a road sign. Most of the time, humans can still recognize the sign, but autonomous vehicles may not. Maggiori concludes that these failures occur ultimately because CNNs do not have a sophisticated “model of the world as we know it,” which probably explains why autonomous vehicles perform well in controlled environments but falter in the real world. Even Elon Musk has acknowledged the difficulties in autonomous driving: “Generalized self-driving is a hard problem, as it requires solving a large part of the real-world AI. I didn’t expect it to be so hard, but the difficulty is obvious in retrospect.”
</p>
<p>
	The underlying simplicity of AI coding makes me think we are a long way from reaching a singularity, especially after more than a decade of failed attempts to deliver self-driving cars. At this stage any computer takeover appears to be science fiction, not applied science. More importantly, there would be serious ramifications from any government-mandated pause in AI development.
</p>
<h4>
	Why a Pause in Artificial Intelligence Could Be a Very Bad Idea<br />
</h4>
<p>
	Last year, <em>Interesting Engineering</em> reported<sup><a href="#note07">7</a></sup> that Ni Yongjie, deputy director of the Shanghai Institute of Taiwan Studies, stated: “PLA [the People’s Liberation Army of the People’s Republic of China] should conduct blockade exercises around the island and use AI technology to deter U.S. interference and Taiwanese independence forces.” He went on to say that the PLA should become a global leader in intelligent warfare by using “AI, cloud computing, big data, cyberattacks and defense.” The PLA has been simulating the invasion of Taiwan through AI war games for some time. The AI results suggest that the PLA would be unable to successfully invade Taiwan through 2026, but, ominously, the CIA reports that Chinese President Xi Jinping has ordered the PLA to be ready for an invasion by 2027.<sup><a href="#note08">8</a></sup>
</p>
<p>
	According to Gregory Allen, the director of the AI Governance Project at the Center for Strategic and International Studies: “China is not going to slow down its AI development in either the commercial or military domain.”<sup><a href="#note09">9</a></sup> Allen’s sentiments have been echoed by many think tanks, academics, and government intelligence personnel based on the massive investments China is making in AI and Chinese hints at an unwillingness to pause AI development.
</p>
<p>
	Game theory uses mathematical models of strategic interactions among rational actors. In a recent paper entitled “Nuclear Deterrence in the Algorithmic Age: Game Theory Revisited,”<sup><a href="#note10">10</a></sup> Roy Lindelauf (a game theorist and professor at the Ministry of Defense of the Netherlands) reminds us that “game theory models prescribe what a decision maker ought to do in a given situation…and to alleviate the burden of human cognitive biases.”
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.1/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n1_lg.jpg" alt="Skeptic 29.1 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.1<br />
		<a href="https://www.skeptic.com/productlink/magv29n1">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/259456#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	Credibility is the ultimate key in deterrence theory. Is a belligerent dissuaded from aggressive actions by the opponent’s threats? Ultimately, the best outcome regarding the threat of an invasion of Taiwan is a Nash equilibrium (named after mathematician John Nash, the subject of the book and movie <em>A Beautiful Mind</em>), a state in which no party can benefit by unilaterally changing its strategy. In the case of AI development, both sides are deterred from causing mutual destruction; no one has anything to gain by changing the status quo. However, if China comes to believe it has superior AI, or if we pause AI research, that equilibrium could dissolve. In fact, it appears to me that there is a much bigger threat from the PRC than from any imminent takeover of humans by AI.
</p>
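<p>
	The mechanics of a Nash equilibrium can be illustrated with a toy two-player game. The payoffs below are invented for illustration and are not a model of the actual strategic situation; the point is only how one checks that no player gains by unilaterally switching:
</p>

```python
# Toy 2x2 deterrence game with invented payoffs, illustrating the
# mechanics of a Nash equilibrium check: a strategy pair is an
# equilibrium if neither player can gain by unilaterally switching.

# payoffs[(row_strategy, col_strategy)] = (row_payoff, col_payoff)
payoffs = {
    ("restrain", "restrain"): (3, 3),    # stable status quo
    ("restrain", "escalate"): (0, 1),    # escalation gains little (retaliation)
    ("escalate", "restrain"): (1, 0),
    ("escalate", "escalate"): (-1, -1),  # mutual destruction
}
strategies = ["restrain", "escalate"]

def is_nash(r, c):
    r_pay, c_pay = payoffs[(r, c)]
    row_ok = all(payoffs[(alt, c)][0] <= r_pay for alt in strategies)
    col_ok = all(payoffs[(r, alt)][1] <= c_pay for alt in strategies)
    return row_ok and col_ok

equilibria = [(r, c) for r in strategies for c in strategies if is_nash(r, c)]
print(equilibria)  # [('restrain', 'restrain')]
```

<p>
	With these invented payoffs, mutual restraint is the unique equilibrium; change either side’s perceived payoffs, for instance by convincing it that its AI confers a decisive advantage, and the equilibrium can vanish.
</p>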
<p>
	In 1950, Alan Turing proposed a test to determine whether a machine exhibits intelligent behavior at or beyond human level. An evaluator questions a human and a machine simultaneously without knowing which is which. If the evaluator cannot determine the difference between the human and the machine, the machine has passed his test. When it comes to pausing AI research, we should at least be dealing with machines that pass the Turing test. No artificial intelligence has ever passed it. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Marc J. Defant</strong> is a professor of geology at the University of South Florida specializing in the study of volcanoes—more specifically, the geochemistry of volcanic rocks. He has been funded by the NSF, <em>National Geographic</em>, the American Chemical Society, and the National Academy of Sciences, and has published in many international journals including <em>Nature</em>. His book <em>Voyage of Discovery: From the Big Bang to the Ice Age</em> is in its second edition.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/4aw1gU9">https://bit.ly/4aw1gU9</a>
		</li>
<li id="note02">
			<a href="https://bit.ly/48dpjpj">https://bit.ly/48dpjpj</a>
		</li>
<li id="note03">
			<a href="https://bit.ly/47fCMvm">https://bit.ly/47fCMvm</a>
		</li>
<li id="note04">
			<a href="https://bit.ly/47dbc1P">https://bit.ly/47dbc1P</a>
		</li>
<li id="note05">
			<a href="https://bit.ly/3H4B5pO">https://bit.ly/3H4B5pO</a>
		</li>
<li id="note06">
			<a href="https://bit.ly/41E8UaO">https://bit.ly/41E8UaO</a>
		</li>
<li id="note07">
			<a href="https://bit.ly/3H05p5e">https://bit.ly/3H05p5e</a>
		</li>
<li id="note08">
			<a href="https://bit.ly/3H14Zvc">https://bit.ly/3H14Zvc</a>
		</li>
<li id="note09">
			<a href="https://bit.ly/3H2uMDr">https://bit.ly/3H2uMDr</a>
		</li>
<li id="note10">
			<a href="https://bit.ly/3H05dmw">https://bit.ly/3H05dmw</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Why Should We Pursue Human Intelligence With AI?</title>
		<link>https://www.skeptic.com/reading_room/why-should-we-pursue-human-intelligence-with-artificial-intelligence/</link>
		
		<dc:creator><![CDATA[Rudy van Belkom]]></dc:creator>
		<pubDate>Fri, 17 May 2024 19:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[emotional intelligence]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=44903</guid>

					<description><![CDATA[In the quest for intelligent machines, approaching, or even surpassing human intelligence, has been a prominent dot on the horizon since the 1950s. Aside from the various technological challenges, I believe this quest is enormously difficult for three reasons: We don’t have a clear picture of exactly how intelligence works in humans. We have no [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">I</span>n the quest for intelligent machines, approaching, or even surpassing human intelligence, has been a prominent dot on the horizon since the 1950s. Aside from the various technological challenges, I believe this quest is enormously difficult for three reasons:
</p>
<ol>
<li>
		We don’t have a clear picture of exactly how intelligence works in humans.
	</li>
<li>
		We have no generally accepted operational definition for many relevant concepts (such as consciousness), making their existence difficult to prove.
	</li>
<li>
		We continually shift what we consider intelligent.
	</li>
</ol>
<p>
	All this makes it difficult to “clone” intelligence. Consequently, experts disagree on when we will reach human-level AI. The dot on the horizon shifts with time and continuously seems to be equally far away (as in the quip “AI is five years away…and always will be”). Yet it is not inconceivable that the intelligence code will be cracked. Chess once seemed to require some form of human intelligence; you had to be able to think strategically and assess your opponent. We now know that all such “what ifs” and “if thens” can be programmed, and an abstract representation and brute computational power have proven sufficient to defeat even the greatest human chess champions. Granted, that’s <em>just</em> chess, but what if all those tasks that now seem immeasurably complex could also be solved with correspondingly complex algorithms, or even with relatively simple algorithms? <span id="more-44903"></span>
</p>
<p>
	In that respect, creativity seems to be the new chess. AI is already capable of creating works of art and composing pieces of music. Many people have difficulty accepting the results as examples of true creativity. And there is the deeper philosophical discussion as to whether we ourselves are not simply programmed and thus do not act as autonomously as we like to think. AI can revive one of the most painful insults to humanity famously put forth by Sigmund Freud: many, if not most, of our actions are not the result of conscious choice.
</p>
<p>
	Machines long ago outperformed us in physical labor, and more recently in computational power. And now our intellectual ability is at stake. This ability has always set us apart from all other creatures on earth and has given us (at least instinctively) control over our future. It is therefore not surprising that some people resent this development. Think what we might, AI is a mirror for humanity. It teaches us an enormous amount about ourselves and asks us fundamental questions about what it means to be human.
</p>
<p>
	However, in my opinion, the key question we should be asking is: In designing AI, should we pursue human intelligence at all? Submarines do not swim the way fish do, nor do airplanes fly like birds; so why should computers have to think the way humans do? If you give a spider human-level intelligence it will not start behaving like a human, but rather like a “super spider” that can spin even better webs and catch even more prey. We are only going to make real progress in AI when we let go of the idea that we are superior beings. Humans are not superior to insects; each species evolved its own adaptations to its own environment. Humans may have more advanced cognitive skills, but insects will most likely survive even a nuclear disaster. So success is context-dependent and therefore relative.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.1/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n1_lg.jpg" alt="Skeptic 29.1 (cover)" width="200" height="260" class="boxShadow"></a> </p>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.1<br />
		<a href="https://www.skeptic.com/productlink/magv29n1">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/259456#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	We need to start asking ourselves for what purpose we want to use intelligent machines, rather than seeing intelligence as an end in itself. How can we use intelligent machines to create a better world? Indeed, what exactly is a better world? I submit the path forward is for humans and machines to work together, allocating tasks based on their respective specializations. Leave the complex statistics to computers but reserve the socially sensitive issues for human decision making. Let machines monitor railroads for possible damage, but let people watch over the application process for new railroad employees. Let machines assess CT scans for cancerous abnormalities, but let people discuss the treatment process with patients. Why should we build emotions into machines? On the contrary, I think we should strive to make computers operate as objectively as possible. After all, we humans with all our evolutionarily programmed biases and emotions have proven to be not very good at that at all. As the world chess champion Garry Kasparov, who was famously defeated by IBM’s Deep Blue computer, advised…
</p>
<p>
	Machines have calculations. <br />
	We have understanding. <br />
	Machines have instructions. <br />
	We have purpose. <br />
	There’s one thing only a human can do. <br />
	That’s dream. So let us dream big. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Rudy van Belkom</strong> is the Executive Director of The Netherlands Study Centre for Technology Trends (STT). His book, <em>AI No Longer Has A Plug</em>, offers developers, policymakers, philosophers, and anyone with an interest in AI the tools needed for integrating ethics into the AI design process. In addition, he developed an ethical design game for AI, inspired by the scrum process, that can be used to translate ethical issues into practice.
	</p>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Climate and the Energy Transition:  Current Status and Challenges</title>
		<link>https://www.skeptic.com/reading_room/challenges-to-climate-energy-transition/</link>
		
		<dc:creator><![CDATA[Lorne Trottier & Jean-Patrick Toussaint]]></dc:creator>
		<pubDate>Fri, 10 May 2024 07:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[climate change]]></category>
		<category><![CDATA[renewable energy]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=44891</guid>

					<description><![CDATA[Lorne Trottier and Jean-Patrick Toussaint discuss the current status and challenges of climate change and the energy transition, focusing on the need for renewable energy and electrification to mitigate climate change.]]></description>
										<content:encoded><![CDATA[<h4>
	Introduction<br />
</h4>
<p>
	<span class="dropCap">T</span>he battle to mitigate and stop climate change is the biggest challenge of the 21<sup>st</sup> century. The rapid build-up of greenhouse gases (GHG) in the atmosphere due to human activities with resultant global warming and disruption of earth’s delicate climate balance poses a clear and present danger to human well-being and the well-being of the planet. These facts have been well documented by climate scientists and in a series of reports of the UN Intergovernmental Panel on Climate Change (IPCC).
</p>
<p>
	While there continue to be many climate change naysayers, a majority of people around the world now accept what they can see around them—climate change is happening. Outright refusal to accept the reality of climate change is fading (at least in mainstream media) but new forms have emerged, aimed mainly at delaying any significant climate policy action.<sup><a href="#note01">1</a></sup>
</p>
<p>
	One major tactic is to belittle and dismiss the substantial progress that is being made on solutions, such as renewable energy and the electrification of transport. Common criticisms are that these are too expensive, intermittent, unreliable, or impractical. We term them “green energy denial.” This view was expressed, we submit, in the article entitled “<a href="https://www.skeptic.com/reading_room/future-of-energy-and-climate-fracking-renewables-or-nuclear/">The Future of Energy and Our Climate</a>” by Marc J. Defant in Vol. 28 Issue 2 of <em>Skeptic</em> magazine. The reality is that while the energy transition is quite challenging, it is inevitable. Substantial progress has been made, and even more effective and efficient solutions are in the works. <span id="more-44891"></span>
</p>
<p>
	Two books provide a good summary of the current situation. One is Michael E. Mann’s <em>The New Climate War: The Fight to Take Back Our Planet</em>. Mann is the climate scientist famous for the hockey stick graph of global temperatures. “Outright denial of the physical evidence of climate change simply isn’t credible anymore. So, they have shifted to a softer form of denialism while keeping the oil flowing and fossil fuels burning, engaging in a multipronged offensive based on deception, distraction, and delay,” he writes. “Finally, when all other arguments fail, we’re left with ‘Well—it just won’t work. You can’t do it!’ Inactivists in fact twist themselves into veritable pretzels to explain why there’s no way we can possibly power our economy with renewable energy.” Here is how Mann sums up the problem:
</p>
<blockquote>
<p>
		We need to accomplish something gigantic we have never done before, much faster than we have ever done anything similar. To do it, we need lots of breakthroughs in science and engineering. We need to build a consensus that doesn’t exist and create public policies to push a transition that would not happen otherwise.
	</p>
</blockquote>
<p>
	Bill Gates’ <em>How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need</em>, covers the solutions being developed to mitigate climate change. Both Gates and Mann agree that while this will prove extremely challenging, it is achievable. According to Gates:
</p>
<blockquote>
<p>
		We already have some of the tools we need, and as for those we don’t yet have, everything I’ve learned about climate and technology makes me optimistic that we can invent them, deploy them, and, if we act fast enough, avoid a climate catastrophe. This book is about what it will take and why I think we can do it. Within a few years, I had become convinced of three things: To avoid a climate disaster, we have to get to zero. We need to deploy the tools we already have, like solar and wind, faster and smarter. And we need to create and roll out breakthrough technologies that can take us the rest of the way.
	</p>
</blockquote>
<h4>
	So, what’s to be done?<br />
</h4>
<p>
	Virtually all plans to mitigate climate change focus on large scale electrification. There are several steps required to fully implement this solution. The first step is to decarbonize the electricity supply. This means producing electricity from technologies that do not emit greenhouse gases (GHG). Fossil fuels such as coal, petroleum, and natural gas should be eliminated from the electricity supply. Many countries have set such goals to be achieved in the coming decades. For example, the U.S. Department of Energy has produced a study<sup><a href="#note02">2</a></sup> on pathways to achieve 100 percent clean electricity by 2035. To replace fossil fuels, electricity must be generated from non-emitting sources such as hydro, wind, solar, tidal, geothermal, and nuclear energy. At the COP28 United Nations Climate Change Conference in December 2023, nearly 130 countries committed to tripling renewables by 2030.
</p>
<p>
	A second step is to convert to electrification as many energy-intensive fossil fuel processes as possible. These include transportation, cars and trucks, heating and cooling of buildings, industrial processes such as steel and cement, and others. Electrification will not only eliminate most uses of fossil fuels; it will also reduce the total amount of energy required. For example, electric cars require about 75 percent less energy per mile than gasoline cars because they are much more efficient.<sup><a href="#note03">3</a></sup>
</p>
<p>
	Electrifying the economy to the maximum extent will require a significant increase in the amount of electricity produced. The Electrification Futures Study by the U.S. National Renewable Energy Laboratory (NREL)<sup><a href="#note04">4</a></sup> estimates an increase in electricity demand of about 70 percent by 2050 over 2020 levels in the “high” scenario (mainly due to electrification of transportation). The greater efficiency of new technologies such as heat pumps plays an important role in reducing the size of this increase. In another example, if all cars were to become EVs, it is estimated that this alone would require a 20–50 percent increase in the electricity supply over 2015 levels in the U.S.<sup><a href="#note05">5</a></sup> Taken together, these steps will have a huge impact on reducing GHG emissions. It is also a tremendous economic opportunity.
</p>
<h4>
	The Cost of Renewables<br />
</h4>
<p>
	Renewables will play a key role in the decarbonization and growth of the electricity supply. A key fact ignored by the green energy critics is the dramatic decline in the unit cost of wind and particularly solar energy in recent years. The article “Why did renewables become so cheap so fast?”<sup><a href="#note06">6</a></sup> provides an excellent explanation of this development, and is the source of the following graphs. Figure 1, “The price of solar modules declined by 99.6 percent since 1976,” is self-explanatory. Figure 2 shows how the price of electricity from new power plants has changed over the past 10 years. The most notable change is that both solar and wind are now the <em>cheapest</em> source of electricity in many geographic locations, <em>lower</em> even than combined cycle gas plants.
</p>
<p>
	These steep price declines undermine key arguments of green energy critics. This crucial fact is absent from Marc Defant’s <em>Skeptic</em> article. Renewable energy from wind and solar was more expensive than fossil fuel plants in the past, but is now cost competitive, not to mention that investments in renewables have far outpaced those in fossil fuels. In his article, Defant points to countries such as Germany and Denmark that have invested heavily in renewables but have high electricity prices. The high prices they paid as early adopters are what have now driven down the costs for everyone else. And this is precisely the path taken as most new technologies became competitive and eventually standard. It is a price worth paying to avoid an approaching global climate catastrophe. In addition, the winners will own the key technologies of the future.
</p>
<p>
	Germany’s abrupt decision, in the wake of the Fukushima disaster, to shut down all of its nuclear plants, which accounted for about 20 percent of its electricity supply, along with its failure to build adequate transmission capacity, has also played a role in its high electricity prices. Despite this, Germany has had remarkable success in adapting to the cutoff of Russian gas due to the war in Ukraine. Many analysts believe that this has provided an impetus not only to Germany, but to Europe generally, to speed up the transition off fossil fuels.
</p>
<div class="imageclearall">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/climate-energy-transition/Fig-01-price-solar-modules-declined-99-percent-since-1976.png" alt="" width="750" height="844" class="noBoxShadow" style="display: block;" /></p>
<p class="caption">
		<strong>Figure 1.</strong> The price of solar modules declined by 99.6 percent since 1976.
	</p>
</div>
<div class="imageclearall">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/climate-energy-transition/Fig-02-price-electricity-from-new-power-plants.png" alt="" width="750" height="1169" class="noBoxShadow" style="display: block;" /></p>
<p class="caption">
		<strong>Figure 2.</strong> The price of electricity from new power plants
	</p>
</div>
<div class="imageclearall">
	<img loading="lazy" decoding="async" src="https://www.skeptic.com/reading_room/images/climate-energy-transition/Fig-03-market-size-lithium-ion-batteries-since-1992.png" alt="" width="750" height="976" class="noBoxShadow" style="display: block;" /></p>
<p class="caption">
		<strong>Figure 3.</strong> Battery price learning curve
	</p>
</div>
<p>
	This data contains a valuable lesson for other technologies that are key to the green transition, such as batteries for electric cars. Wind and solar were beneficiaries of what is known as Wright’s Law, which predates the better-known Moore’s Law of the chip industry (the observation that the number of transistors on microprocessors doubles every two years at about the same cost). “Wright’s Law, that each doubling in experience leads to the same relative decline in prices, was discovered much earlier than Moore’s Law, by aerospace engineer Theodore Paul Wright in 1936. Moore’s observation for the progress in computing technology can be seen as a special case of Wright’s Law.”<sup><a href="#note07">7</a></sup>
</p>
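<p>
	For illustration, Wright’s Law can be sketched in a few lines of code: price falls by a fixed fraction with each doubling of cumulative production. The function name and the 20 percent learning rate below are hypothetical examples chosen for the sketch, not figures from this article.
</p>

```python
import math

def wright_price(initial_price, cumulative_units, learning_rate=0.20):
    """Wright's Law sketch: price after each doubling of cumulative
    production falls by `learning_rate` (hypothetical 20% here)."""
    doublings = math.log2(cumulative_units)  # how many doublings so far
    return initial_price * (1 - learning_rate) ** doublings

# After 10 doublings (1024x cumulative production) at a 20% learning rate,
# price falls to 0.8**10, i.e. about 10.7% of the starting price.
print(wright_price(100.0, 2 ** 10))  # ≈ 10.74
```

<p>
	Sustained over the dozens of doublings that solar modules have experienced since 1976, this compounding is what produces the near-99 percent declines shown in Figure 1.
</p>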
<p>
	Figure 3 illustrates the major price declines in the cost of the lithium batteries used in electric cars and grid energy storage. The price has continued to fall well below the 2016 figure of $244/kWh shown in the graph, reaching $139/kWh in 2023.<sup><a href="#note08">8</a></sup> Analysts such as Goldman Sachs predict the price will fall below the critical threshold of $100/kWh in the next couple of years, which would enable EVs to become less expensive than internal combustion engine (ICE) cars, without subsidies, on a total cost of ownership basis.<sup><a href="#note09">9</a></sup>
</p>
<p>
	The lower cost of renewables is greatly accelerating their rate of adoption worldwide. Wind and solar provided nearly 18 percent of the electricity used in the U.S. in the first third of 2023, up from 14 percent in 2022. There are news stories practically daily on the success of new renewables projects. Here are just a few such headlines:
</p>
<ul>
<li>
		Solar Is Now 33 percent Cheaper Than Gas Power in U.S., Guggenheim Says<sup><a href="#note10">10</a></sup>
	</li>
<li>
		Renewable Energy Prices Hit Record Lows: How Can Utilities Benefit From Unstoppable Solar and Wind?<sup><a href="#note11">11</a></sup>
	</li>
<li>
		Is Solar Really Cheaper Than Fossil Fuels?<sup><a href="#note12">12</a></sup>
	</li>
<li>
		The Era of Cheap Wind and Solar Has Arrived, U of C Researchers Find.<sup><a href="#note13">13</a></sup>
	</li>
<li>
		In a First, Wind and Solar Generated More Power Than Coal in U.S.<sup><a href="#note14">14</a></sup>
	</li>
<li>
		Renewables Were the World’s Cheapest Source of Energy in 2020, New Report Shows.<sup><a href="#note15">15</a></sup>
	</li>
</ul>
<p>
	Defant’s article raises a series of other issues of concern for renewables. Each of these can be addressed, even though, as Bill Gates notes in his book, “This will be hard.”
</p>
<h4>
	Gas and Fracking<br />
</h4>
<p>
	Defant’s article touts the alleged benefits of fracking as a cleaner alternative to coal and petroleum, citing its contribution to reducing GHG emissions in the United States. However, the author fails to mention studies that have highlighted concerns about methane leakage during extraction and transportation.<sup><a href="#note16">16</a></sup> Such leaks can offset the emissions benefits, given that methane has a much stronger effect on radiative forcing than CO<sub>2</sub>. Moreover, the long-term sustainability of natural gas as a “bridge fuel” is uncertain at best: it remains a fossil fuel that is far from leading to a sustainable low-carbon future.
</p>
<h4>
	Land Requirements<br />
</h4>
<p>
	One notable concern is the charge that renewable energy takes up too much land. The Heartland Institute, for example, argues that “solar power requires 43.50 acres per megawatt.”<sup><a href="#note17">17</a></sup> This number comes from the Institute for Energy Research (IER), an advocacy organization for the fossil fuel industry.
</p>
<p>
	According to an example in a study by the U.S. Department of Energy, a solar farm with a total land area of 1,375 acres has a capacity of 345 MW, which works out to roughly 4 acres per megawatt.<sup><a href="#note18">18</a>, <a href="#note19">19</a></sup> That is more than 10 times lower than IER’s estimate! Another recent article states: “According to a report from the National Renewable Energy Laboratory, roughly 22,000 square miles of solar panel-filled land (about the size of Lake Michigan) would be required to power the entire country, including all 141 million households and businesses, based on 13–14 percent efficiency for solar modules. Many solar panels, however, reach 20 percent efficiency, which could reduce the necessary area to just about 10,000 square miles, equivalent to the size of Lake Erie.”<sup><a href="#note20">20</a></sup>
</p>
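<p>
	The arithmetic behind this comparison is simple enough to check directly. A minimal sketch, using only the numbers quoted above:
</p>

```python
# Land-use check using the figures cited in the text.
ier_acres_per_mw = 43.50        # IER/Heartland claim: acres per megawatt
doe_acres = 1375                # DOE example solar farm: total land area
doe_mw = 345                    # DOE example solar farm: capacity

doe_acres_per_mw = doe_acres / doe_mw         # ≈ 3.99 acres per MW
ratio = ier_acres_per_mw / doe_acres_per_mw   # ≈ 10.9x: IER overstates 10-fold

print(round(doe_acres_per_mw, 2), round(ratio, 1))  # 3.99 10.9
```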
<h4>
	Rare Earth Metals<br />
</h4>
<p>
	Rare earth metals are essential for many green energy technologies, such as wind turbines and lithium batteries. Defant raises concern about the rarity of the rare earths: “According to the Institute for Energy Research, the United States imports about 80 percent of its rare earth elements from China, which makes the U.S. highly dependent on what is increasingly becoming an adversary nation.” Note that Defant relies on the same group as The Heartland Institute; the IER is the source of Heartland’s claim, discussed in the Land Requirements section above, that 1 MW of solar requires 43.50 acres of land. Elsewhere in his article Defant states that “It should be noted that China has a market share in the solar panel supply chain of more than 80 percent, so the Paris Accords have proven a financial bonanza for that nation.”
</p>
<p>
	Now consider information to the contrary:
</p>
<ul>
<li>
		The World Population Review provides a table of solar power by country.<sup><a href="#note21">21</a></sup> It shows that in 2022 China had 393 GW of installed solar capacity while the U.S. had 113 GW. China has more than three times the solar capacity of the U.S. and approximately 40 percent of the world’s total. Is it any wonder <em>why</em> they are the world’s largest producer?
	</li>
<li>
		The Government of Canada has published a table of countries that produce rare earth metals.<sup><a href="#note22">22</a></sup> This overview shows that in 2021 China produced 60.6 percent of the total while the U.S. produced 15.5 percent. The report goes on to say: “Canada has some of the largest known reserves and resources (measured and indicated) of rare earths in the world, estimated at over 15.1 million tonnes of rare earth oxide in 2022.”
	</li>
<li>
		An article for <em>Metal Tech News</em><sup><a href="#note23">23</a></sup> states: </p>
<ul>
<li>
				“While finding economically viable deposits of rare earths is not easy, the real complexity comes with separating these notoriously tightly interlocked elements into usable rare earth metals.”
			</li>
<li>
				“This gets to the heart of why rare earths are mined in the U.S., yet the country is 100 percent reliant on imports for the metals.”
			</li>
<li>
				“Several companies in the U.S. and Canada are in various stages of developing new technologies for separating rare earths and establishing facilities to enable rare earth oxides production in North America.”
			</li>
</ul>
</li>
</ul>
<p>
	It is clear that while China <em>currently</em> dominates rare earth metals production, the situation is evolving rapidly.
</p>
<p>
	Finally, the full lifecycle analysis of electric vehicles (EVs) needs to be taken into consideration when comparing them to internal combustion engine (ICE) vehicles: the latter have significant downstream environmental and air quality impacts, while the former have a much smaller footprint overall. Further, while critical minerals are essential for EV battery production, resource availability is a dynamic factor; recycling and circular economy principles can reduce the demand for new resources, and responsible supply chain regulations can address the extractive impacts of battery production. In fact, a number of startups are already profitably recycling the expensive materials in EV batteries,<sup><a href="#note24">24</a></sup> and this promises to become a significant new business. Additionally, many countries, notably in Europe, now have recycling regulations.<sup><a href="#note25">25</a></sup>
</p>
<h4>
	Intermittency and Energy Storage<br />
</h4>
<p>
	Intermittency and energy storage are the biggest concerns with wind and solar, given the variability of these resources. It should be noted, however, that variable demand has always been a key factor in grid management. So-called base load power sources such as nuclear and coal can take hours or days to ramp power up or down. Electricity demand varies widely within a 24-hour period, peaking in the afternoon or evening and bottoming out overnight. Base load is typically 30–40 percent of peak load. Nuclear can be a source of base load power in a zero emissions grid; however, important issues such as nuclear waste disposal, large cost overruns, lengthy approval and construction times, and public acceptance must be resolved.
</p>
<p>
	The grid operator is responsible for balancing supply with demand. In a free-market energy dispatch system, the operator has several options to meet demand at any given time and will generally choose the lowest cost option. When available, this is usually wind or solar, since these have zero fuel cost. There are many ways to mitigate the inherent intermittency of these sources. The most common is to select another source on the grid. Wind and solar may complement each other, and wind and solar from different geographic regions are valuable because weather conditions may be more favorable elsewhere.
</p>
<p>
	Various forms of energy storage are used. Hydroelectric power dams provide the largest form of storage. In suitable geographies, pumped storage hydro is an option. So-called gravity storage is a related emerging technology. Grid scale batteries are an increasingly viable solution to manage variability over minutes or hours. Green hydrogen, which is produced through the electrolysis of water from renewables, is a promising emerging technology for energy storage. This hydrogen can be converted back to electricity when needed. Green hydrogen has other potential uses as an alternative to fossil fuels.
</p>
<p>
	Demand response is another solution that has several variants. Variable pricing tied to demand managed through smart metering is one scheme. The customer may use timers to schedule functions such as the dishwasher operation or car charging at night. Water heaters or freezers may be turned on/off intermittently during periods of peak demand. Large industrial users may have agreements and receive compensation for curtailing demand during peaks.
</p>
<p>
	Finally, combined cycle gas turbines and peaking plants, which have long been used to deal with variable demand on traditional power grids, can also be used on a grid with a high percentage of renewables. While not net zero, such hybrid systems can achieve drastically reduced emissions at low cost.
</p>
<h4>
	Conclusion<br />
</h4>
<p>
	In considering issues as contentious and important as the extent of human-induced climate change and the various methods proposed for mitigating it, it behooves skeptics to examine <em>all</em> the relevant information, as well as the interests behind the arguments being made. This article therefore presents vital information ignored in a previous <em>Skeptic</em> article, information which, we submit, refutes those arguments decisively. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<h5>
		About the Authors<br />
	</h5>
<p>
		Trained as a biologist, <strong>Jean-Patrick Toussaint</strong> holds a PhD in environmental sciences and has been Senior Climate Director at the Trottier Family Foundation since 2022. During his career, he has conducted academic research and worked on several environmental and climate files with various national and international organizations. Prior to joining the Trottier Family Foundation, Jean-Patrick was Senior Advisor on Francophone Affairs at the Federation of Canadian Municipalities (FCM). Jean-Patrick also worked as a science officer at Future Earth and as science project manager at the David Suzuki Foundation.
	</p>
<p>
		<strong>Lorne Trottier</strong> is an entrepreneur and philanthropist who co-founded Matrox in 1976, a tech company known for its computer graphics and broadcast video products. The Trottier Family Foundation was established in 2000 and is active in the areas of climate, education, health, and science. The Trottier Foundation has funded a number of institutes, including the Trottier Energy Institute at l’École Polytechnique in Montreal and the Trottier Space Institute at McGill University. Trottier was a Board Member of the National Center for Science Education (NCSE) for more than 10 years and is currently a Board Member of the Planetary Society.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			Mann, M.E. (2021). <em>The New Climate War: The Fight to Take Back Our Planet</em>. PublicAffairs.
		</li>
<li id="note02">
			<a href="https://www.energy.gov/sites/default/files/2023-05/DOE%20-%20100%25%20Clean%20Electricity%20-%20Final.pdf">https://www.energy.gov/sites/default/files/2023-05/DOE%20-%20100%25%20Clean%20Electricity%20-%20Final.pdf</a>
		</li>
<li id="note03">
			<a href="https://www.fueleconomy.gov/feg/evtech.shtml">https://www.fueleconomy.gov/feg/evtech.shtml</a>
		</li>
<li id="note04">
			<a href="https://www.nrel.gov/docs/fy18osti/71500.pdf">https://www.nrel.gov/docs/fy18osti/71500.pdf</a>
		</li>
<li id="note05">
			<a href="https://usafacts.org/articles/how-much-electricity-would-it-take-to-power-all-cars-if-they-were-electric/">https://usafacts.org/articles/how-much-electricity-would-it-take-to-power-all-cars-if-they-were-electric/</a>
		</li>
<li id="note06">
			<a href="https://ourworldindata.org/cheap-renewables-growth">https://ourworldindata.org/cheap-renewables-growth</a>
		</li>
<li id="note07">
			<a href="https://ourworldindata.org/cheap-renewables-growth">https://ourworldindata.org/cheap-renewables-growth</a>
		</li>
<li id="note08">
			<a href="https://www.bing.com/search?q=price+lithium+battery+kWh&amp;form=ANNTH1&amp;refig=1570f510a450451ba00cb0f1fd43617d&amp;pc=EDBBAN">https://www.bing.com/search?q=price+lithium+battery+kWh&#038;form=ANNTH1&#038;refig=1570f510a450451ba00cb0f1fd43617d&#038;pc=EDBBAN</a>
		</li>
<li id="note09">
			<a href="https://www.goldmansachs.com/intelligence/pages/electric-vehicle-battery-prices-falling.html">https://www.goldmansachs.com/intelligence/pages/electric-vehicle-battery-prices-falling.html</a>
		</li>
<li id="note10">
			<a href="https://www.bloomberg.com/news/articles/2022-10-03/solar-is-now-33-cheaper-than-gas-power-in-us-guggenheim-says?embedded-checkout=true">https://www.bloomberg.com/news/articles/2022-10-03/solar-is-now-33-cheaper-than-gas-power-in-us-guggenheim-says?embedded-checkout=true</a>
		</li>
<li id="note11">
			<a href="https://www.forbes.com/sites/energyinnovation/2020/01/21/renewable-energy-prices-hit-record-lows-how-can-utilities-benefit-from-unstoppable-solar-and-wind/?sh=34e36cb22c84">https://www.forbes.com/sites/energyinnovation/2020/01/21/renewable-energy-prices-hit-record-lows-how-can-utilities-benefit-from-unstoppable-solar-and-wind/?sh=34e36cb22c84</a>
		</li>
<li id="note12">
			<a href="https://www.smartcitiesdive.com/ex/sustainablecitiescollective/solar-really-cheaper-fossil-fuels/1106159/">https://www.smartcitiesdive.com/ex/sustainablecitiescollective/solar-really-cheaper-fossil-fuels/1106159/</a>
		</li>
<li id="note13">
			<a href="https://www.cbc.ca/news/canada/calgary/era-of-cheap-wind-and-solar-has-arrived-says-university-calgary-researchers-1.5807219">https://www.cbc.ca/news/canada/calgary/era-of-cheap-wind-and-solar-has-arrived-says-university-calgary-researchers-1.5807219</a>
		</li>
<li id="note14">
			<a href="https://www.scientificamerican.com/article/in-a-first-wind-and-solar-generated-more-power-than-coal-in-u-s/">https://www.scientificamerican.com/article/in-a-first-wind-and-solar-generated-more-power-than-coal-in-u-s/</a>
		</li>
<li id="note15">
			<a href="https://www.weforum.org/agenda/2021/07/renewables-cheapest-energy-source/">https://www.weforum.org/agenda/2021/07/renewables-cheapest-energy-source/</a>
		</li>
<li id="note16">
			<a href="https://www.theguardian.com/environment/2019/aug/14/fracking-causing-rise-in-methane-emissions-study-finds">https://www.theguardian.com/environment/2019/aug/14/fracking-causing-rise-in-methane-emissions-study-finds</a>
		</li>
<li id="note17">
			<a href="https://heartland.org/opinion/us-transition-to-100-percent-renewable-energy-would-lead-to-catastrophe-study-says">https://heartland.org/opinion/us-transition-to-100-percent-renewable-energy-would-lead-to-catastrophe-study-says</a>
		</li>
<li id="note18">
			<a href="https://www.energy.gov/sites/default/files/2022-01/lbnl_ieee-land-requirements-for-utility-scale-pv.pdf">https://www.energy.gov/sites/default/files/2022-01/lbnl_ieee-land-requirements-for-utility-scale-pv.pdf</a>
		</li>
<li id="note19">
			<a href="https://www.skystreamenergy.com/how-many-acres-are-needed-for-a-1-mw-solar-farm/">https://www.skystreamenergy.com/how-many-acres-are-needed-for-a-1-mw-solar-farm/</a>
		</li>
<li id="note20">
			<a href="https://elements.visualcapitalist.com/how-much-land-power-us-solar/">https://elements.visualcapitalist.com/how-much-land-power-us-solar/</a>
		</li>
<li id="note21">
			<a href="https://worldpopulationreview.com/country-rankings/solar-power-by-country">https://worldpopulationreview.com/country-rankings/solar-power-by-country</a>
		</li>
<li id="note22">
			<a href="https://natural-resources.canada.ca/our-natural-resources/minerals-mining/minerals-metals-facts/rare-earth-elements-facts/20522">https://natural-resources.canada.ca/our-natural-resources/minerals-mining/minerals-metals-facts/rare-earth-elements-facts/20522</a>
		</li>
<li id="note23">
			<a href="https://www.metaltechnews.com/story/2021/09/09/critical-minerals-alliances/made-in-north-america-rare-earths-return/683.html">https://www.metaltechnews.com/story/2021/09/09/critical-minerals-alliances/made-in-north-america-rare-earths-return/683.html</a>
		</li>
<li id="note24">
			<a href="https://www.caranddriver.com/features/a44022888/electric-car-battery-recycling/">https://www.caranddriver.com/features/a44022888/electric-car-battery-recycling/</a>
		</li>
<li id="note25">
			<a href="https://environment.ec.europa.eu/topics/waste-and-recycling_en">https://environment.ec.europa.eu/topics/waste-and-recycling_en</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI and Uncertainty</title>
		<link>https://www.skeptic.com/reading_room/importance-of-uncertainty-in-human-compatible-artificial-intelligence/</link>
		
		<dc:creator><![CDATA[Maggie Jackson]]></dc:creator>
		<pubDate>Fri, 03 May 2024 07:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[human experience]]></category>
		<category><![CDATA[human-compatible AI]]></category>
		<category><![CDATA[probabilistic reasoning]]></category>
		<category><![CDATA[uncertainty]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=44854</guid>

					<description><![CDATA[Maggie Jackson discusses the importance of uncertainty in artificial intelligence (AI) and how it can lead to more collaborative and adaptable systems. Openly uncertain AI models are being developed to improve transparency and allow for better human-AI interaction. By constructing AI that admits its uncertainty, AI can work with humans to achieve complex goals and align with human preferences.]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">O</span>ne winter evening in 2014, Stuart Russell, a professor of Computer Science at the University of California, Berkeley, was riding the Paris Metro. He was on his way to a rehearsal for a choir that he had joined while living in the French capital during a sabbatical from Berkeley.
</p>
<p>
	That evening, he was listening to the piece that he would be practicing, Samuel Barber’s <em>Agnus Dei</em>, the composer’s choral arrangement of his haunting <em>Adagio for Strings</em>. Swept up in the sublime music, Russell had a breathtaking idea. AI should be built to support ineffable human moments like this one. Instead of delegating an objective to a machine and then stepping back, designers should make systems that will work <em>with</em> us to realize both our complex, shifting goals and our values and preferences. “It just sprang into my mind that what matters, and therefore what the purpose of AI was, was in some sense the aggregate quality of human experience,” he later recalled. And in order to be constantly learning what humans want or need, AI must be uncertain, Russell realized. “This is the core of the new approach: we remove the false assumption that the machine is pursuing a fixed objective that is perfectly known.”
</p>
<p>
	Talking with me by video call one day in the fall of 2022, Russell elaborates. Once the machine is uncertain, it can start working with humans instead of “just watching from above.” If it doesn’t know how the future should unfold, AI becomes teachable, says Russell, a thin, dapper man with a manner of speaking that is somehow both poetical and laser precise. A key part of his Paris epiphany, he says, “was realizing that actually [AI’s] state of uncertainty about human objectives is permanent.” He pauses. “To some extent, this is how it’s going to be for humans too. We are not born with fixed reward functions.”
</p>
<p>
	A few weeks later, I meet up virtually with Anca Dragan, an energetic Berkeley roboticist who is a protégé of Russell’s and one of a growing number of high-profile scientists turning his vision for reimagining AI into algorithmic reality. <span id="more-44854"></span>
</p>
<p>
	“One of my biggest lessons over the past five years or so has been that there’s a tremendous amount of power for AI in being able to hold appropriate uncertainty about what the objective should be,” she tells me. <em>Power?</em> I ask. She explains that by making AI “a little bit more humble, a little bit more uncertain, all of a sudden magical things happen” for both the robot <em>and</em> the human. Together, we begin watching two illustrative bits of video whose banality belies their importance.
</p>
<p>
	In a first clip filmed during experiments in her laboratory, we watch as a robot arm swings into action, carrying a coffee cup several feet high above a table. Almost immediately, a graduate student in a red T-shirt tries to push the arm lower. “It’s Ellis’s favorite mug,” says Dragan, describing the hypothetical scenario inspiring the research, “and he doesn’t like it that the robot is holding it so high up because if it drops, it will break.” As Ellis pushes, the robot doesn’t fight or freeze. But as soon as he lets go—“this is the interesting part,” says Dragan—the robot promptly bounces back up, reclaiming its initial trajectory. This is how AI traditionally has treated the human—as a pesky obstacle on the road to fulfilling the gospel of its objective, says Dragan. The robot views Ellis as an unknown to be ignored, skirted, or eliminated in order to get the job done. I watch as he gives the imperturbable machine a final two-fingered poke before standing back, looking a little defeated.
</p>
<p>
	In what is known as the classical period of AI, early systems by necessity were built to operate in a kind of utopian world that was clear-cut, predictable, and fully understood. In order to make the first algorithms work, designers had to, as Dragan says, “cut off a tiny piece of the world, put it in a box, and give it to a robot.” By the 1980s, however, scientists realized that if they were to create systems for real-world use, they needed to grapple with the unpredictability of life.
</p>
<p>
	To meet this challenge, computer scientist Judea Pearl famously turned to Bayes’ theorem, an Enlightenment-era mathematical system for dealing with uncertainty by constantly updating one’s prior beliefs with new evidence. By investing AI with probabilistic capabilities, Pearl enabled systems to weigh various actions against both the current state of the world and a range of possible futures before deciding on the best route to maximizing a reward. He gave AI wiggle room. Yet the foundational premise of the work remained the same. Unknowns—whether a hesitant pedestrian in a crosswalk, an unanswerable search engine query, or a coffee drinker with ideas of his own—are best summarily dispatched en route to realizing an objective. When Ellis lets go, the coast is clear. The robot knows just what to do.
</p>
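<p>
	Bayes’ theorem itself is compact: a posterior belief is the prior belief reweighted by how likely each hypothesis makes the new evidence. As a toy sketch of the kind of update Pearl brought to AI, consider a robot revising its beliefs about what a human wants after seeing a push on its arm; the hypothesis names and all of the probabilities below are hypothetical.
</p>

```python
def bayes_update(prior, likelihood):
    """One step of Bayes' rule: posterior ∝ prior × likelihood,
    normalized so the beliefs sum to 1."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypothetical beliefs about the human's objective (numbers are made up).
prior = {"wants_cup_high": 0.5, "wants_cup_low": 0.5}
# Evidence: the human pushes the arm down, which is far more likely
# if the human prefers the cup carried low.
likelihood = {"wants_cup_high": 0.1, "wants_cup_low": 0.9}

posterior = bayes_update(prior, likelihood)
print(posterior)  # "wants_cup_low" now has probability 0.9
```

<p>
	The contrast with classical systems is in what happens to this posterior: a fixed-objective robot discards the evidence and resumes its plan, while an uncertain one keeps updating.
</p>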
<p>
	In the next clip, Ellis tries again. But this time, he only has to push the arm down once before stepping back and watching, one hand nonchalantly in his pocket, as the robot glides by a few inches above the table. Suddenly, the system is doing not what <em>it</em> wants but something far more in line with what Ellis prefers. The maneuver is over in less than a minute, and the inner workings of the robot’s metamorphosis are hidden from view. But I can clearly see that this time, the robot has learned something about carrying coffee, about human priorities, and about aligning with intelligences other than its own. As the robot completes the task, Ellis nods approvingly to someone off camera. He looks relieved.
</p>
<p>
	This is the new paradigm of what Russell calls “human-compatible AI.” Gone is the fallacy of the known fixed objective, whether it is given in advance—“win points”—or, as is the case with a strategy called inverse reinforcement learning, pieced together by the system from initial training demonstrations that in effect say “carry the coffee this way.” (In the latter scenario, a robot may accept a correction while in training, but once it is deployed, it will remain undeterred from its objective.) As Ellis experienced, most standard robots cannot learn on the fly.
</p>
<p>
	In contrast, uncertain AI can adapt in the moment to what we want it to do. Imbued with probabilistic reasoning about its aims or other equivalent mathematical capabilities, the system dwells in “a space of possibilities,” says Dragan. A push is not an obstacle to getting its way but a hint of a new, likely better direction to go. The human is not an impediment but a teacher and a teammate. Perhaps most important, human-compatible AI likely will be open to being shut down if it senses that it might not be on the right track, preliminary studies suggest. A human wish to turn the robot off is just another morsel of information for a system that knows that it does not know. “That’s the big thing that uncertainty gives you, right; you’re not sure of yourself anymore, and you realize you need more input,” says Dragan gleefully. “Uncertainty is the key foundation upon which alignment can rest.”
</p>
<p class="pullquote">
	By making AI a little bit more humble, a little bit more uncertain, all of a sudden, magical things happen for both the robot and the human.
</p>
<p>
	In initial user studies, people working with uncertain robots achieve better task performance with less time and effort. They view such systems as more seamlessly collaborative and sensitive to their needs. “The robot seemed to quickly figure out what I cared about,” said one participant. In one experiment, when a physically present robot verbally expressed uncertainty about a thorny moral dilemma, people saw it as more intelligent than one that asserted that it was sure of what to do.
</p>
<p>
	The music that helped set the stage for Stuart Russell’s vision of a new AI celebrates the liminality and the ambiguity of life. One of the world’s most-heard pieces of modern classical music, Barber’s <em>Adagio for Strings</em> unfolds in a single brief movement suffused with moments of suspense and dissonance. Critic Johanna Keller writes that the piece seems to convey “the effect of a sigh, or courage in the face of tragedy, or hope” and ends on a note of uncertainty. She writes, “In around eight minutes the piece is over, harmonically unresolved, never coming to rest.”
</p>
<p style="text-align: center; color: #aaa;">
	• • • • • •
</p>
<p>
	At Virginia Tech, I at last meet up with an I-Don’t-Know robot. But unlike Ellis, I am working with a system whose uncertainty is an open book. In Dylan Losey’s lab, I discover the critical complement to making AI better at knowing that it does not know: creating systems that also admit to their uncertainty.
</p>
<p>
	The painter-robot sports three sets of armbands, called soft haptic displays, at the base, in the middle, and near the end of its five-foot length. As I guide it through its work of drawing a line down the table, the robot tells me where in the task it is unsure by inflating specific bands associated with particular aspects of the process. If it is unsure about the angle to hold its claw-like “end effector,” for example, it inflates the bottom-most armbands in each set with a soft <em>woosh</em>. In this way, I can get a read on whether the robot is catching on no matter where I place my hands. “You can actually touch the robot’s uncertainty,” Losey tells me. “You can feel in real time as you move it how confused it is.”
</p>
<p>
	If uncertainty enables an AI system to be open to our suggestions, then AI that can also show its unsureness will allow us to know where <em>we</em> stand in our increasingly high-stakes interactions with such machines. A cycle of questions and answers on both sides can result. “When a robot can let a person know, ‘hey, this is where I am at, this is what I’ve learned,’ or ‘this is my best guess but I am a little bit uncertain so take that with a grain of salt’—<em>that’s</em> what I’m working for,” says Losey, a scientist with a rapid-fire pace of speaking and a somber intensity.
</p>
<p>
	The research is critical, he and others believe, because not only does standard AI fall woefully short in its understanding of humanity, but <em>we</em> in turn know less and less about the complex black-box systems that increasingly manage our lives. “Even as a designer, often I have no clue what’s going to happen next with [standard] robots,” Losey admits. “I have to press play and hope that what I see is what I want to see.” The question is, he says, “how can we open that box?”
</p>
<p>
	How and why does AI succeed or fail? Why did the model conclude that one person was worthy of parole, a job interview, or a loan while a similar candidate was not? We often do not know in part because AI operates in abstract mathematical terms that rarely correspond to human ideas and language. In addition, the more astonishing AI’s achievements have become, the more opaque they are to human understanding. After being handily defeated at Go by an AI program, one shocked world champion said AlphaGo’s extraordinary strategic play revealed that “not a single human has touched the edge of the truth of Go.”
</p>
<p>
	Slowly, the creation of openly uncertain systems is becoming a key part of global efforts to make explainable and transparent AI. It is not enough to bring to light what AI <em>knows</em>, for example, by exposing which reward objective or data set was used in training an algorithm. To work with AI, to anticipate its moves, to gauge its strengths and ours, to parse the magic, we also should understand what it does <em>not</em> know, leading scientists assert. Dozens of frontline laboratories worldwide are working to build AI that can speak a language of uncertainty that humans can readily comprehend.
</p>
<p>
	Some robots show people on-screen hypothetical scenarios about their next moves, in effect asking, “Should I move closer to or further from the stove?” or “Should I avoid a certain intersection on my way to fetch coffee?” Others play a kind of robot charades. In Losey’s lab, a standing robot often used in warehouses acted out for me a plethora of sometimes indecipherably similar ways for it to stack dishes. Its thoroughness raised unresolved research questions, such as how much and what kinds of uncertainty a system should display or how AI’s incertitude can interact productively with ours. “It’s not just a question of robot uncertainty,” says Laura Blumenschein, a soft robotics expert who cocreated the haptic arm. “It’s a question of human–robot systems and the combined uncertainty within them.”
</p>
<p>
	Beyond robots, openly uncertain AI models have shown promise for use in medical diagnosis systems and already are being used to bolster AI-assisted drug discovery. For example, to address rising bacterial resistance to drugs, a new kind of model created by Yoshua Bengio and other top researchers in Canada has shown exciting potential to identify synthetic peptides, that is, small proteins that might be turned into new antibiotics. Instead of relying on pattern recognition to settle on one best answer, Generative Flow Networks explore less obvious paths in the data to uncover numerous possible answers, in this case candidate peptides that can be tested further by models and humans alike.
</p>
<p>
	“The whole point is that we want to keep in mind many possible explanations—we want to account for uncertainty,” says Nikolay Malkin of Mila, the Quebec-based leading AI research institute where the algorithm was created. And by operating reflectively rather than relying on simplifying and opaque snap judgments, the new models shed light on both a problem’s deeper causal intricacies and their own decision-making processes. The system’s uncertainty can be an engine of transparency.
</p>
<p>
	For many scientists, moreover, constructing AI that admits its uncertainty is not just a safety feature, a path to adaptability, a practicality. It is a matter of right and wrong.
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.1/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n1_lg.jpg" alt="Skeptic 29.1 (cover)" width="200" height="260" class="boxShadow"></a>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.1<br />
		<a href="https://www.skeptic.com/productlink/magv29n1">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/259456#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	Julian Hough is a British computer scientist with a rising reputation and a kindly demeanor. The longer he has been in the field, however, the more concerned he has become about the pretense of certainty traditionally built into the machine. Hough offers a final word of warning. Any time that a system’s uncertainty is swept under the rug, he cautions, “it won’t be going away. It’s just going to be hidden in dangerous ways and basically hidden by system designers.” By way of example, he describes a scenario. “Say a cop robot is looking for a suspect, and it has 60 percent confidence in one person, but it’s been programmed to act at any level beyond 50 percent confidence. If it does not express that level of doubt, that’s very dangerous. It could have fatal consequences.”
</p>
<p>
	This is a watershed moment in the history of AI. Uncertainty is at the heart of efforts to create systems that can better align with human aims. There is no easy blueprint for reimagining humanity’s most powerful and dangerous invention to date. Still, one day sooner than you may imagine, you might work side by side with a robot that will ask you good questions and admit to its uncertainty, all while expecting that you in turn will do so too. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<div class="endMatter">
<p>
		This essay was excerpted and adapted by the author from <em>Uncertain: The Wisdom and Wonder of Being Unsure</em> (Prometheus Books). Copyright © 2023 by Maggie Jackson. Reprinted with permission.
	</p>
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Maggie Jackson</strong> is an award-winning author and journalist and a leading thinker on technology’s impact on humanity. A former contributing columnist for the <em>Boston Globe</em>, she has written for the <em>New York Times</em>, the <em>Wall Street Journal</em>, <em>New Philosopher</em>, and <em>Le Monde’s Courrier International</em>, and her work has been translated into multiple languages. Her expertise has been featured on NPR, MSNBC, the BBC, and many other global media outlets. She is the recipient of numerous grants, fellowships, and awards and has spoken at venues from Google to Yale.
	</p>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Bedbug Bedlam: Real Infestation or Social Panic in&#160;Paris?</title>
		<link>https://www.skeptic.com/reading_room/real-infestation-or-social-panic-in-paris/</link>
		
		<dc:creator><![CDATA[Robert E. Bartholomew]]></dc:creator>
		<pubDate>Fri, 26 Apr 2024 07:00:00 +0000</pubDate>
				<category><![CDATA[Reading Room]]></category>
		<category><![CDATA[media-driven panics]]></category>
		<category><![CDATA[social panic]]></category>
		<guid isPermaLink="false">https://www.skeptic.com/?p=44838</guid>

					<description><![CDATA[Bedbugs. Just mention of the word is enough to give people the heebie-jeebies and send shivers down their spines—or start scratching. Beginning in early fall of 2023 and coinciding with Paris Fashion Week from September 25 to October 3, fear of the unhealthy vermin swept across Paris. There does not appear to be one incident [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>
	<span class="dropCap">B</span>edbugs. Just the mention of the word is enough to give people the heebie-jeebies and send shivers down their spines—or set them scratching. Beginning in early fall of 2023 and coinciding with Paris Fashion Week from September 25 to October 3, fear of the unhealthy vermin swept across Paris. There does not appear to be one incident that triggered the scare, but once the cry of “Bedbug!” went up, it quickly went viral online and in the Parisian media. A wave of YouTube and TikTok videos showed the proliferous pests crawling on bus seats, in trains, riding the subways, lounging at Charles de Gaulle airport, and taking in the latest plays in Paris’s theatre district, the “Grands Boulevards.” Some anxious residents even refused to sit during their daily commutes. One British newspaper saw the humorous side of the panic, carrying the headline: “Coming Soon to a Cinema Near You? The Return of the Bed Bug.”<sup><a href="#note01">1</a></sup>
</p>
<p>
	Within days, the humble bedbug <em>Cimex lectularius</em> was being portrayed as public enemy No. 1. Politicians began holding press conferences on “the bedbug crisis” and vowing action. By September 29, the Deputy Mayor of Paris, Emmanuel Grégoire, ominously posted on X/Twitter: “No one is safe.”<sup><a href="#note02">2</a></sup> One MP, Ms. Mathilde Panot, carried a test tube filled with bedbugs into the French Parliament, complaining that the pesky parasites were “making the lives of millions of our fellow citizens a living hell.”<sup><a href="#note03">3</a></sup>
</p>
<p>
	While they may give people the creeps, bedbugs are more of an annoyance than a major health threat. These small, reddish-brown insects have an affinity for feeding on the blood of humans as they sleep. During the day they love to hide in the cracks and crevices of headboards, box springs, mattresses, and bed frames next to their human prey, hence the name. According to the Mayo Clinic, they are not considered to be a serious health issue as they do not directly spread disease, although they can trigger allergic reactions and skin conditions, and scratching the bites can lead to infection.<sup><a href="#note04">4</a></sup> <span id="more-44838"></span>
</p>
<p class="pullquote">
	Bonne nuit. Dormez bien. <br /> Ne laissez pas les punaises de lit piquer.
</p>
<p>
	Most experts agree that there does appear to be an uptick in the bedbug population of Paris—and in many parts of the world. According to estimates from the French national pest control association, the number of calls to exterminators jumped about 10 percent over the previous year. However, this was not surprising as it corresponds with the spike in travel after the Covid pandemic.<sup><a href="#note05">5</a></sup> There is also evidence that bedbugs have become more resistant to insecticides, while the rise in global temperatures has boosted their sex lives.<sup><a href="#note06">6</a>, <a href="#note07">7</a></sup> But a 10 percent increase hardly qualifies as a massive infestation.
</p>
<p>
	Look closer, and all is not as it seems.
</p>
<p>
	Thibault Buckley, who works for a French company that specializes in dealing with bedbugs that have infested dogs, says that most of the recent cases have turned out to be unrelated to bedbugs.<sup><a href="#note08">8</a></sup> The issue is also nothing new. For instance, a government survey of French households between 2017 and 2022 found that 11 percent were infested with the creepy critters.<sup><a href="#note09">9</a></sup> The French bedbug scare has also spread the fear of infestations to other European metropolitan areas. However, bedbugs have long been a feature, if a very unwanted one, of most major cities. What is new is the sudden media attention.
</p>
<p>
	Lebanese dermatologist Zeina Nehme happened to be on a trip to Paris soon after hearing of the bedbug infestation, and decided to spend her weekend finding some of the tiny troublemakers and making a social media video about them. That’s when something odd happened: she could not locate a single bug—not in her apartment, or the restaurants she visited, or the vast rail network, the Paris Metro. “I actively searched to find one to take pics and do the reel. Nothing,” she said.<sup><a href="#note10">10</a></sup>
</p>
<div class="imagefloatleft" style="width: 220px;">
	<a href="https://www.skeptic.com/magazine/archives/29.1/"><img loading="lazy" decoding="async" src="https://www.skeptic.com/magazine/images/magv29n1_lg.jpg" alt="Skeptic 29.1 (cover)" width="200" height="260" class="boxShadow"></a>
<p class="caption">
		This article appeared in <em>Skeptic</em>&nbsp;magazine 29.1<br />
		<a href="https://www.skeptic.com/productlink/magv29n1">Buy print edition</a><br />
		<a href="https://www.skeptic.com/magazine/app/259456#58b60a418b4ea">Buy digital edition</a><br />
		<a href="https://shop.skeptic.com/subscribe-to-skeptic-magazine-print-edition">Subscribe to print edition</a><br />
		<a href="https://pocketmags.com/skeptic-magazine#58b60a418b4ea/">Subscribe to digital edition</a> <br />
		<a href="https://www.skeptic.com/magazine/app/">Download our app</a>
	</p>
</div>
<p>
	The Paris bedbug “invasion” has the hallmarks of a social panic involving a real or imagined threat. In this case, the threat is real—there are bedbugs and their numbers have been increasing—but their presence has been exaggerated. Bugs are a common feature of everyday life. Now, however, in the wake of sensational media reports of invading bedbugs, people have begun to scrutinize their surroundings for evidence of the critters. In the past, a bus or train traveler may have sat next to one without paying it much notice. These days, Parisians are hyperaware of any bug, especially while on public transport or in public places like the cinema—and people are seeing them everywhere.
</p>
<p>
	One factor likely driving the scare is misidentification. Bedbugs are often mistaken for other insects such as cockroach nymphs, which look similar but are slightly longer and more cylindrical, as well as for fleas, ticks, or carpet beetles.<sup><a href="#note11">11</a></sup> Another factor behind the perceived invasion of bedbugs may be embarrassment—or a lack thereof. Until recently, if someone found the creepy crawlers in their home, it was not exactly a badge of honor, and they may have been reluctant to mention it to their work colleagues around the watercooler. Now, with the surge in media interest, it appears to be chic to report finding the bugs and exchange war stories with fellow Parisians. <span><img loading="lazy" decoding="async" src="https://www.skeptic.com/images/s_glyph_128px.png" alt="END" width="12" height="12" class="S-glyph" /></span>
</p>
<blockquote>
<p>
		Goodnight. Sleep tight. <br /> Don’t let the bedbugs bite.
	</p>
</blockquote>
<div class="endMatter">
<h5>
		About the Author<br />
	</h5>
<p>
		<strong>Robert E. Bartholomew</strong> is an Honorary Senior Lecturer in the Department of Psychological Medicine at the University of Auckland in New Zealand. He has written numerous books on the margins of science covering UFOs, haunted houses, Bigfoot, and lake monsters—all from the perspective of mainstream science. He has lived with the Malay people in Malaysia and with Aborigines in Central Australia. He is the co-author of two seminal books: <em>Outbreak! The Encyclopedia of Extraordinary Social Behavior</em> with Hilary Evans, and <em>Havana Syndrome</em> with Robert Baloh.
	</p>
<h5>
		References<br />
	</h5>
<ol>
<li id="note01">
			<a href="https://bit.ly/48FilZQ">https://bit.ly/48FilZQ</a>
		</li>
<li id="note02">
			<a href="https://bit.ly/48dCEhg">https://bit.ly/48dCEhg</a>
		</li>
<li id="note03">
			<a href="https://bit.ly/3RMPGLw">https://bit.ly/3RMPGLw</a>
		</li>
<li id="note04">
			<a href="https://bit.ly/48EaR9z">https://bit.ly/48EaR9z</a>
		</li>
<li id="note05">
			<a href="https://bit.ly/3vm8ZEh">https://bit.ly/3vm8ZEh</a>
		</li>
<li id="note06">
			<a href="https://bit.ly/41KXORl">https://bit.ly/41KXORl</a>
		</li>
<li id="note07">
			<a href="https://bit.ly/3tush9X">https://bit.ly/3tush9X</a>
		</li>
<li id="note08">
			<a href="https://bit.ly/3H5NKJk">https://bit.ly/3H5NKJk</a>
		</li>
<li id="note09">
			<a href="https://bit.ly/3vnhnmT">https://bit.ly/3vnhnmT</a>
		</li>
<li id="note10">
			<a href="https://bit.ly/3S4GYdh">https://bit.ly/3S4GYdh</a>
		</li>
<li id="note11">
			<a href="https://bit.ly/3ve397C">https://bit.ly/3ve397C</a>
		</li>
</ol>
</div>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
