<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Culture Digitally</title>
	<atom:link href="https://culturedigitally.org/feed/" rel="self" type="application/rss+xml" />
	<link>https://culturedigitally.org</link>
	<description>Examining Contemporary Cultural Production</description>
	<lastBuildDate>Mon, 28 Dec 2020 23:53:42 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.5.3</generator>

<image>
	<url>https://culturedigitally.org/wp-content/uploads/2015/01/CultDigAvatar-32x32.jpeg</url>
	<title>Culture Digitally</title>
	<link>https://culturedigitally.org</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Announcement</title>
		<link>https://culturedigitally.org/2020/12/announcement/</link>
		
		<dc:creator><![CDATA[Hector Postigo]]></dc:creator>
		<pubDate>Mon, 28 Dec 2020 23:49:50 +0000</pubDate>
				<category><![CDATA[announcement]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://culturedigitally.org/?p=9171</guid>

					<description><![CDATA[Scott Brennen and Dan Kreiss&#8217;s post Digitalization and Digitization has been translated into Japanese by Akiko Uchida. For our readers who might want to read it in Japanese, see IT Lexicon.]]></description>
										<content:encoded><![CDATA[
<p>Scott Brennen and Dan Kreiss&#8217;s post <a href="https://culturedigitally.org/2014/09/digitalization-and-digitization/">Digitalization and Digitization</a> has been translated into Japanese by Akiko Uchida. For our readers who might want to read it in Japanese, see <a href="https://ittechlexicon.blogspot.com/2020/09/1.html">IT Lexicon</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Are Surveillance Capitalists Behaviorists? No. Does It Matter? Maybe.</title>
		<link>https://culturedigitally.org/2020/12/are-surveillance-capitalists-behaviorists-no-does-it-matter-maybe/</link>
		
		<dc:creator><![CDATA[Shreeharsh Kelkar]]></dc:creator>
		<pubDate>Mon, 07 Dec 2020 12:10:01 +0000</pubDate>
				<category><![CDATA[algorithms]]></category>
		<category><![CDATA[Platforms]]></category>
		<category><![CDATA[behaviorism]]></category>
		<category><![CDATA[cognitive science]]></category>
		<category><![CDATA[surveillance capitalism]]></category>
		<guid isPermaLink="false">https://culturedigitally.org/?p=9141</guid>

					<description><![CDATA[In this post I want to pose and answer two questions: are surveillance capitalists behaviorists? And does it matter? Short answer: surveillance capitalists are not behaviorists, but behavioralists. Behavioralists are okay with guiding individual level behavior as long as it leads to higher-order system behavior that they think is useful; in other words, they have a different theory of freedom than behaviorists. Painting Silicon Valley engineers as behaviorists is no doubt politically useful (on which more below) but will it be persuasive when push comes to shove in the battle to regulate the digital economy? I try to untangle some of these contradictions below.]]></description>
										<content:encoded><![CDATA[
<p>The release of the documentary <em>The Social Dilemma</em> has understandably irritated scholars who study the social dimensions of science and technology. Lisa Messeri’s <a rel="noreferrer noopener" href="https://twitter.com/lmesseri/status/1306962471685705730?s=20" target="_blank">Twitter thread</a> has an excellent summary of all that’s wrong with the documentary (all of which I agree with).</p>



<p>But the documentary’s starting point — that the technical mechanisms these companies have created to <em>cognitively</em> direct user attention (a.k.a. algorithms that make you doom-scroll) have deleterious consequences and are our biggest problem today — is something that at least some scholars agree with. I want to highlight one problem with this account (that keeps recurring in technology criticism broadly): the presumption that surveillance capitalists and Silicon Valley engineers are behaviorists — intellectual descendants of <a rel="noreferrer noopener" href="https://en.wikipedia.org/wiki/B._F._Skinner" target="_blank">B. F. Skinner</a> — whose guiding principle is to build elaborate technologies of stimulus designed to provoke particular responses in their users.</p>



<p>In this post I want to pose and answer two questions: is that correct? And does it matter? Short answer: surveillance capitalists are not behaviorists, but <em>behavioralists</em>. Behavioralists are okay with guiding individual level behavior as long as it leads to higher-order system behavior that they think is useful; in other words, they have a different <em>theory of freedom</em> than behaviorists. Painting Silicon Valley engineers as behaviorists is no doubt politically useful (on which more below) but will it be persuasive when push comes to shove in the battle to regulate the digital economy? I try to untangle some of these contradictions below.</p>



<p>The argument that Silicon Valley engineers are behaviorists is made most explicitly in Chapter 12 of Shoshana Zuboff’s <em>The Age of Surveillance Capitalism</em>. (It also occurs, in some form, in other works of tech criticism that I love: Audrey Watters’ writing on the <a rel="noreferrer noopener" href="http://hackeducation.com/2017/07/08/what-went-wrong" target="_blank">history of ed-tech</a> and Yarden Katz’ <a rel="noreferrer noopener" href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3078224" target="_blank">critique of AI</a>.)</p>



<p>In Zuboff’s telling, the flagship companies of “surveillance
capitalism” — your Googles and Facebooks — accumulate fine-grained data about
their users’ (i.e. our) actions, extract knowledge about their users from this
data, and then bring this knowledge to bear by actively trying to shape their
users’ attention/behaviors in ways that reap more profits. This style of value
extraction, Zuboff argues, can only end in the destruction of basic human
freedom. As more data is collected and more predictive knowledge is extracted
from it, self and society will become increasingly automated, with all of us
users doing the bidding of our new masters, the surveillance capitalists. The
book describes in great detail many of the technical-legal decisions that had
to be taken in order to establish surveillance capitalism.</p>



<p>Zuboff connects this particular impulse to extract value from
data to shape behavior to a strain of social thought that she calls
“instrumentarianism,” a close cousin of totalitarianism. “Totalitarianism
operated through the means of violence,” she argues, “but instrumentarian power
operates through the means of behavioral modification.” According to Zuboff,
instrumentarianism is an approach to human action that does not put much
premium on the <em>insides</em> of human beings; it cares only about what they
do, especially what can be quantified, so that this behavior can then be
modified over and over. She argues that the origins of instrumentarianism lie
in behaviorist research with its focus on stimulus-and-response, operant
conditioning, and the like. But if the means of instrumentarianism are
different from totalitarianism, the end is roughly similar: the inhibition of
human freedom and autonomy.</p>



<p>In her emphasis on human freedom, Zuboff has something in common
with Tristan Harris, the ex-Googler and prominent talking head in <em>The
Social Dilemma</em>, who had a crisis of conscience after a decade of working
for Google, and <a href="https://www.wired.com/story/our-minds-have-been-hijacked-by-our-phones-tristan-harris-wants-to-rescue-them/?src=longreads" target="_blank" rel="noreferrer noopener">now argues that</a> “the tech industry[‘s&nbsp;…] design
techniques to keep people hooked to the screen for as long and as frequently as
possible” are “hijacking […] the human mind. [S]ystems […] are better and
better at steering what people are paying attention to, and better and better
at steering what people do with their time than ever before.” If Zuboff sounds
like Max Weber lamenting the iron cage, Harris has a whiff of an old-fashioned
moral crusader who is comfortable in a TED talk; one gets the feeling that in a
different time he would be equally at home fulminating against alcohol or TV
because they kept the human mind captive.</p>



<hr class="wp-block-separator"/>



<p>Who were the behaviorists? Coming to the fore in the early 20th
century, behaviorists were a group of psychologists who argued that human action
was best understood as responses to external stimuli (complicated actions could
arise as a result of chained stimuli: a stimulus that leads to a response that
leads to a different stimulus and so on). In making these claims, behaviorists
were guided by two impulses. Psychology in the late 19th century had chosen
introspection as a way to understand the human mind. Behaviorists found this
method too unscientific, too reliant on the researcher’s subjectivity. In an
era of quantification, they sought to build a psychology that was a science,
open to quantitative measurements and refutation, i.e. around what <a href="https://press.princeton.edu/books/paperback/9781890951795/objectivity" target="_blank" rel="noreferrer noopener">Peter Galison and Lorraine Daston</a> have called “mechanical
objectivity.” Behaviorism’s rise was also predicated on the social changes of its
time: the rise of advertising and bureaucratic organizations raised the
possibility of applying psychological insights to stimulate consumption, work,
and efficiency.</p>



<p>If behaviorism was the dominant school of thought in psychology
in the early 20th century, a backlash began to set in by the end of the 1930s.
It gathered steam in the 1950s and reached its climax in Noam Chomsky’s <a href="https://chomsky.info/1967____/" target="_blank" rel="noreferrer noopener">famous takedown of B. F.
Skinner’s <em>Verbal
Behavior</em></a>. By the 1960s, behaviorism had been replaced by
cognitivism (or if you prefer, cognitive science). If behaviorists famously
restricted themselves to thinking about stimuli and responses, and dismissed
the mind as irrelevant, cognitivists conceived of the human mind as an
information processing system. They argued that, when conceived this way, the
mind could be studied scientifically. Indeed, cognitivists believed that the
computer program — the artifact and the concept — offered a rigorous way of
studying what the mind does: you could build computer programs to simulate the
mind, and indeed, you could also see the mind as a computer program itself, as
an entity that engages in “planning” and execution.</p>



<figure class="wp-block-embed-flickr aligncenter wp-block-embed is-type-photo is-provider-flickr"><div class="wp-block-embed__wrapper">
<a href="https://www.flickr.com/photos/usmcarchives/36716075585/in/photolist-XWtu6X-bdvSHD-bdvSwM-efwAjh-bdvSXx-2jfe6qT-76FcTS-MEQcXs-vv6RFB-bBrojV-vKxPHz-igFnvq-NhMeHf-bdvTy8-LEkrbF-bdvTke-4u7QtL-EnZfZk-9gbrht-CkC3t5-NhMfqY-ivrKhp-294XBP3-YozEyS-XSq5Nq-7nNkoK-ecJnTz-dpUgmG-8ryKqz-6qqyP2-vKynRT-igFewb-ioG3YS-ioG3ah-igFC4n-vMYMUx-igFy1c-igFgF1-ioFZKi-2dSAVts-Lvoirv-r4mx9f-FRjuWs-ecJnJ6-2dSuriy-73xwQ3-WF6mu5-6MHzxi-dYg5Df-LKzD4"><img loading="lazy" src="https://live.staticflickr.com/4363/36716075585_d988e8d0bc_z.jpg" alt="Marine Anti-Aircraft Gun, Tulagi, circa 1942" width="640" height="499" /></a>
</div><figcaption> <br><a rel="noreferrer noopener" href="https://www.flickr.com/photos/usmcarchives/36716075585/in/photolist-XWtu6X-bdvSHD-bdvSwM-efwAjh-bdvSXx-2jfe6qT-76FcTS-MEQcXs-vv6RFB-bBrojV-vKxPHz-igFnvq-NhMeHf-bdvTy8-LEkrbF-bdvTke-4u7QtL-EnZfZk-9gbrht-CkC3t5-NhMfqY-ivrKhp-294XBP3-YozEyS-XSq5Nq-7nNkoK-ecJnTz-dpUgmG-8ryKqz-6qqyP2-vKynRT-igFewb-ioG3YS-ioG3ah-igFC4n-vMYMUx-igFy1c-igFgF1-ioFZKi-2dSAVts-Lvoirv-r4mx9f-FRjuWs-ecJnJ6-2dSuriy-73xwQ3-WF6mu5-6MHzxi-dYg5Df-LKzD4" target="_blank">Anti-aircraft guns circa 1942</a>. <a rel="noreferrer noopener" href="https://creativecommons.org/licenses/by/2.0/" target="_blank">CC-BY-2.0</a>. </figcaption></figure>



<p>Where did this revolt against behaviorism come from? Historians
like <a href="https://jhupbooks.press.jhu.edu/title/between-human-and-machine" target="_blank" rel="noreferrer noopener">David Mindell</a>, <a href="https://mitpress.mit.edu/books/closed-world" target="_blank" rel="noreferrer noopener">Paul Edwards</a>,
<a href="https://press.uchicago.edu/ucp/books/book/chicago/H/bo3769963.html" target="_blank" rel="noreferrer noopener">Katherine Hayles</a>, and <a href="https://press.uchicago.edu/ucp/books/book/chicago/O/bo16998335.html" target="_blank" rel="noreferrer noopener">Jamie Cohen-Cole</a> locate this revolt at the intersection of
two different trends. First, electrical engineers working on difficult
technical problems of servomechanisms, radar, amplifiers, and anti-aircraft
guns had been forced to conceive of these operator-controlled technical
mechanisms (see the figure above) as “systems” which responded to the
“feedback” from their environments by adapting themselves. Once these
human-machine assemblages started being understood as systems that adapted
themselves by exchanging “information” and messages with their environment, it was
only a matter of time before human beings (and human minds), electricity grids,
bureaucracies, organizations, corporations, and even societies were all re-interpreted
as “systems.” The new techniques of linear programming, operations research,
and computer programming became the tools — conceptual and practical — through
which such systems could be managed and manipulated.</p>



<p>Second, and equally important, the behaviorist take on human
nature was simply incompatible with the politics of the cold war in the United
States. The behaviorist argument that people were shaped by their environments
might be applicable to citizens living in totalitarian states but simply could
not do for citizens of a free society like the US. Cognitive scientists argued
that the human mind was always, in potentiality, an “open mind” (unless it had
been corrupted by authoritarian states) that strives to process information in
a non-ideological way. This was both a technical and normative move: it
suggested a way of studying the mind as well as the way a mind <em>should</em>
be. Amusingly, the fight between behaviorism and cognitivism got quite
personal: personality psychologists sought to show, through their psychological
survey instruments, that <em>behaviorist psychologists themselves</em>
exhibited authoritarian tendencies (rather than an open mind).</p>



<hr class="wp-block-separator"/>



<p>It was in this ferment of the cognitive revolution that the world
of Artificial Intelligence (AI), the precursor to today’s surveillance
capitalism, was born. As with other flag-bearers of the cognitive
revolution — linguists, psychologists, anthropologists, neuroscientists,
philosophers, communication engineers — AI researchers too saw themselves as
part of a revolt against behaviorism and were committed to a model of the mind
as an information processor.</p>



<p>But wait, you might say, <a href="http://blog.castac.org/2014/02/whats-the-matter-with-artificial-intelligence/" target="_blank" rel="noreferrer noopener">today’s AI research is very different from the AI research of
its first few decades</a>. That is indeed correct. The early decades of AI
research were premised on the notion that when human beings did putatively
intelligent things, they were enacting a plan that they had worked out in their
heads. The folk notion of “planning” was wedded to a highly specific technical
machinery of state-space searching and utility maximization. Proving a theorem,
playing chess, and diagnosing medical patients, according to AI researchers,
all involved some sort of planning on the part of human beings (and therefore
could be modeled using computer programs).</p>



<p>The kind of AI that is done today is usually referred to as
“machine learning.” Rather than understanding intelligence as an expression of
linguistically-rendered rules or planning, machine learning researchers build
“classifiers” that consist of a function that is computed using “training
data.” Want to know if an image contains a bridge in it? Then come up with a
bunch of images with bridges in them (i.e. training data) and use them to train a
statistical classifier. Once trained, the classifier can (with widely varying
levels of confidence) tell you which images contain bridges. The key here is to
not start out with any particular model of what a bridge is but to leave it to
the training data and the learning algorithm (both human choices, to be noted).</p>
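<p>As a concrete, toy illustration of this train-then-classify pattern, here is a minimal sketch in Python. Everything in it is invented for illustration: real systems derive rich features from pixels, whereas the two-number “features” below merely stand in for them.</p>

```python
# Toy sketch of "train on labeled examples, then classify new ones."
# A nearest-centroid classifier: training computes the mean feature
# vector (centroid) per label; classification picks the nearest centroid.

def train(examples):
    """Compute one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Return the label whose centroid is nearest in Euclidean distance."""
    def distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid)) ** 0.5
    return min(centroids, key=lambda label: distance(centroids[label]))

# Hypothetical "training data": feature vectors labeled by hand.
training_data = [
    ([0.9, 0.8], "bridge"), ([0.8, 0.9], "bridge"),
    ([0.1, 0.2], "no-bridge"), ([0.2, 0.1], "no-bridge"),
]
model = train(training_data)
print(classify(model, [0.85, 0.9]))  # prints "bridge"
```

<p>Note that no explicit model of what a bridge <em>is</em> appears anywhere; as the paragraph above stresses, the labeled data and the learning procedure (both human choices) do all the work.</p>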



<p>Or take the topic with which this article began: the ubiquitous
recommendation algorithm that puts stuff on your screen. How is that
accomplished? Well, to figure out what to put on a user’s timeline,
Twitter engineers try to collect data on which tweets a user reads and how much
time she spends on them, then build a classifier that will come up with a score
for whether that user will read a tweet. They then deploy this
classifier — which is itself massive and sucks in hundreds of inputs to give
its output — in Twitter’s enormous software infrastructure such that all
possible tweets this user might receive (say, from all the accounts that she
follows) are run through it, and only those with a high score are recommended
to the user. Of course, the user’s response to the recommended tweet just
becomes more training data for the algorithm, and on and on.</p>
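<p>The score-then-rank loop described above can be sketched in the same hedged spirit. Every name and the scoring rule here are invented stand-ins; a production classifier would consume hundreds of behavioral inputs rather than a simple topic overlap.</p>

```python
# Hypothetical recommendation loop: score every candidate tweet with a
# (stand-in) trained classifier, then surface only the high scorers.

def score(tweet, user_interests):
    """Stand-in for a classifier's predicted read-probability."""
    overlap = len(set(tweet["topics"]) & set(user_interests))
    return overlap / max(len(tweet["topics"]), 1)

def recommend(candidates, user_interests, threshold=0.5):
    """Run all candidates through the scorer; keep high scores, best first."""
    scored = sorted(candidates,
                    key=lambda t: score(t, user_interests), reverse=True)
    return [t["id"] for t in scored
            if score(t, user_interests) >= threshold]

candidates = [
    {"id": 1, "topics": ["politics", "tech"]},
    {"id": 2, "topics": ["sports"]},
    {"id": 3, "topics": ["tech"]},
]
print(recommend(candidates, user_interests=["tech"]))  # prints [3, 1]
```

<p>The feedback loop the paragraph ends on would, in this sketch, simply append each click or skip to the data used to refit the scorer.</p>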



<p>Well, that sounds pretty behaviorist though, doesn’t it? Isn’t
the algorithm offering up a stimulus, gauging your response, and then switching
the stimulus again, all to get you to act in a certain way? On the surface, it
certainly seems that way. But there are a few complicating factors.</p>



<p>First, the designers of recommendation algorithms seem to be
motivated less by behaviorism proper than by <em>behavioral economics — </em>an
approach to institution design that came out of social psychology and
economics. Thus Nir Eyal, the author of <em>Hooked: How to Build Habit-Forming
Products, </em>begins his book by describing where he found the insights that
he then tried to implement: “I looked for insights from academia: drawing upon
consumer psychology, human-computer interaction, and behavioral economics
research.” What did Eyal find there? He says: “The field of behavioral
economics, as studied by luminaries such as Nobel Prize winner Daniel Kahneman,
exposed exceptions to the rational model of human behavior.”</p>



<p>This notion, that human beings are not the most rational of decision-makers,
and need a robust and well-designed “choice architecture” to help them do the
things they want to do, is not so much a behaviorist tenet as a part of
the cognitive revolution. <a href="https://link.springer.com/chapter/10.1057/9781137013224_6" target="_blank" rel="noreferrer noopener">The
historian Hunter Heyck</a> argues that starting in the middle of the 20th
century, American social scientists (including psychologists) reformatted the
object of their inquiry: rather than the human being who chose from different
options, they started to study the <em>process of choosing at the level of
“systems,”</em> not just at the level of human beings or individuals but for
animals, machines, organizations, and even societies — all <em>systems</em>.
Social scientists could thus elevate decision-making to the highest
democratic good while simultaneously showing that human beings were limited
decision-makers: satisficing agents, according to Herbert Simon, or systematically
irrational, according to Daniel Kahneman, both Nobel Prize winners.</p>



<p>Now, to be fair, Eyal does draw on B. F. Skinner in his concept of “variable rewards,” which he argues well-designed habit-forming apps must give their users, rewarding their sense of “tribe, hunt, and self.” But one can argue that “reward” here is mostly just synonymous with “feedback” (and feedback at multiple system levels) and the analysis is carried out more in the spirit of designing a choice architecture than of building stimuli. Can these habit-forming techniques lead to bad results? Absolutely, says Eyal. But he quotes Thaler and Sunstein to argue that these techniques should be “used to help nudge people to make better choices (as judged by themselves).” Designers, according to Eyal, should “<em>build products to help people do the things they already want to do but, for lack of a solution, don’t do</em>” (my emphasis).</p>



<p>What do actual Silicon Valley engineers think as they go about building their algorithms? The evidence is mixed, but it suggests that many, if not most, engineers see themselves not as behaviorists but as choice architects. In his study of engineers designing algorithmic music recommendation systems, <a rel="noreferrer noopener" href="https://journals.sagepub.com/doi/abs/10.1177/1359183518820366" target="_blank">the anthropologist Nick Seaver</a> finds that engineers do think that their users need to be “hooked,” but hooking is merely the first step on a journey that has a whole lot of paths and destinations. Music recommendation engineers think about their relationship to music listeners in a variety of ways: as guides, as educators, and as service providers. In my own <a rel="noreferrer noopener" href="https://dspace.mit.edu/handle/1721.1/107312?show=full" target="_blank">ethnographic research with algorithm designers working in the world of Massive Open Online Courses (MOOCs)</a>, I saw the same approach: engineers saw themselves as empowering learners by building for them choice architectures (of resources, problem sets, instructional material) through which they could learn better. And Canay Ozden-Schilling, in her <a rel="noreferrer noopener" href="http://blog.castac.org/2015/07/hardwired-hayek/" target="_blank">ethnographic work on electricity grid designers</a>, finds them using similar tropes as they go about their project of turning passive electricity consumers into active users.</p>



<p>On the other hand, <a href="https://press.princeton.edu/books/paperback/9780691160887/addiction-by-design" target="_blank" rel="noreferrer noopener">Natasha Schull’s ethnographic research on Las Vegas casinos</a>
tells a very different story. Casino designers and slot machine engineers do
not seem to have any exalted notions of their relationship to their users
beyond “hooking” them. And the “hook,” as one addiction counselor tells Schull
ominously, is simply “the drive-in to the zone” — the zone being that area of
consciousness in which nothing exists for the habitual gambler but the machine
and the game. No choice architecture here: just the hook and then the zone.
(Some of the “<a href="https://cacm.acm.org/magazines/2020/9/246937-dark-patterns/fulltext" target="_blank" rel="noreferrer noopener">dark patterns</a>” stuff would fall into this category as
well.)</p>



<p>To recapitulate: most, if not all, surveillance capitalists and Silicon Valley engineers do not see what they do as necessarily in conflict with individual autonomy because, on a broader conceptual level, everything in the cognitivist apparatus is modeled as decision-making at different levels of abstraction (machine, individual, organizational, social). In such a scenario, manipulating the informational parameters of individual decision-making to make higher-level “decisions” more optimal is no abridgment of individual autonomy.</p>



<hr class="wp-block-separator"/>



<p>Does any of this matter? It is here that we reach a double-bind.
On the one hand, engineers themselves do not see what they do as inhibiting
individual freedom and autonomy. On the other, it is by drawing on tropes
of individual freedom and autonomy — to show how they are restricted through <em>designed</em>
algorithmic systems — that scholars and activists have often succeeded in
drawing <em>public attention</em> to questions of algorithmic governance. (For
now, I will conveniently ignore the other value that has helped create public
awareness of algorithmic systems: that they need to be <a href="https://harpers.org/archive/2018/01/the-digital-poorhouse/" target="_blank" rel="noreferrer noopener">fair</a> and <a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing" target="_blank" rel="noreferrer noopener">non-discriminatory</a>.)</p>



<p>Take the two biggest controversies around Facebook: its emotional
contagion study and Cambridge Analytica. The ways in which these played out in
public discourse mirror some of the early fights between the cognitivists and
the behaviorists. Cognitivists argued that behaviorism was illiberal (and so
were behaviorists) because it explicitly violated the autonomy of individuals.
Similarly, it is often the illiberality of Facebook — its workings described
through behaviorist tropes — that <em>Surveillance Capitalism</em> and <em>The
Social Dilemma</em> highlight.</p>



<p>Are we forever doomed to arguing about autonomy in a world of
algorithmic systems through the lens of behaviorism? More important, will the
argument that social media algorithms are behaviorist/illiberal actually help
us win the public debate around how social media should be regulated?</p>



<p>The first question is hard to answer. But the second one is even
more important. I hope that invoking the specter of behaviorism helps us win
public support in the battle to regulate social media but I worry on two
counts.</p>



<p>First, arguing that Facebook is addictive because of its recommendation algorithms or because it gives political campaigns the ability to uncannily target persuadable voters <a href="https://boingboing.net/2019/04/18/bork-bork-bork.html">ends up hyping — even if unintentionally so — the personalization algorithms of Facebook and Google and YouTube</a>. Many scholars have argued that the problem with Zuboff and <em>The Social Dilemma</em> is that they end up amplifying the self-serving narratives of these companies about their latest magic trick, be it the <a rel="noreferrer noopener" href="https://thecorrespondent.com/100/the-new-dot-com-bubble-is-here-its-called-online-advertising/13228924500-22d5fd24" target="_blank">persuasive power</a> of their <a rel="noreferrer noopener" href="https://www.newstatesman.com/science-tech/social-media/2020/10/how-cambridge-analytica-scandal-unravelled" target="_blank">political advertising</a> algorithms or the awesomeness of their <a rel="noreferrer noopener" href="https://www.nytimes.com/2019/08/16/technology/ai-humans.html" target="_blank">artificial intelligences</a>. This has been most clearly the <a rel="noreferrer noopener" href="https://www.wired.com/story/the-noisy-fallacies-of-psychographic-targeting/" target="_blank">case with Cambridge Analytica</a>: rather than serving as evidence of Facebook’s sloppiness in dealing with third-party app developers, the controversy turned into a question of whether CA had “manipulated” citizens into voting for Donald Trump — which was exactly <a rel="noreferrer noopener" href="https://www.motherjones.com/politics/2018/03/cloak-and-data-cambridge-analytica-robert-mercer/" target="_blank">CA’s pitch to various campaigns</a>.</p>



<p>Second, surveillance capitalists have the intellectual resources at their disposal to counter the charge that their apps are sites of Pavlovian manipulation of users. In the years since cognitivism ousted behaviorism, cold war researchers and computer programmers created a new ideology of freedom. <a rel="noreferrer noopener" href="https://press.uchicago.edu/ucp/books/book/chicago/F/bo3773600.html" target="_blank">Historian Fred Turner</a> has argued that even as cold war computer labs built technologies that were “large, complex, [and] centralized,” the labs themselves were sites of “flourishing […] non-hierarchical interdisciplinary collaboration” (p18); these cold war labs helped “perpetuate an extraordinarily flexible, entrepreneurial, and, for its participants, often deeply satisfying style of research” (p17). Cultural entrepreneurs like Stewart Brand and Howard Rheingold connected these free-wheeling non-hierarchical practices of collaboration within cold war computer labs to the emerging counter-cultural movements and their desire to free themselves from the military-industrial-bureaucratic system, creating the ideology that might be called “cyberutopianism.” These entrepreneurs imbued the digital computer, otherwise a symbol of the hated system and the government, with the notion of liberation; the digital computer thus came to be seen as the way through which the counterculture could liberate itself from the hateful forms of bureaucracy (corporate or government) that it despised.</p>



<p>As the <a href="https://kelty.org/or/papers/Kelty_2014_Fog_of_Freedom.pdf" target="_blank" rel="noreferrer noopener">anthropologist
Chris Kelty</a> argues, cyberutopians believe that their apps make users more
free, not less. Theirs is a theory of “positive liberty”: for them, there is
no contradiction in shaping human behavior so as to make humans even more
autonomous and free. As Kelty puts it:</p>



<blockquote class="wp-block-quote"><p>If there is something to be concerned about in Silicon Valley’s approach to liberty, it is not that it is overly libertarian, but that it is a kind of positive liberty imposed not through government action, but through the creation and dissemination of technologies [… that have] been designed to liberate (or coerce) the individual into being a freer, and more individual, individual. </p></blockquote>



<p>As an example of cyberutopianism, look no further than Mark
Zuckerberg’s post-election in-the-midst-of-Cambridge-Analytica manifesto from
2017 which is titled “<a href="https://www.facebook.com/notes/mark-zuckerberg/building-global-community/10154544292806634" target="_blank" rel="noreferrer noopener">Building Global Community</a>.” The word “community” appears in
it more than 100 times and Zuckerberg argues that Facebook is essentially a
tool that people the world over draw on to build communities. Facebook’s goal,
however, is to create the social infrastructure that will help people fulfill
their potential: “the most important thing we at Facebook can do is develop the
social infrastructure to give people the power to build a global community that
works for all of us.”</p>



<p>In the fight to regulate algorithmic systems, will people believe Zuckerberg or will they believe Zuboff or Harris? I hope it’s the latter, but if the <a rel="noreferrer noopener" href="https://www.thenation.com/article/politics/prop-22-labor/" target="_blank">fight over California’s Prop 22</a> is any indication, I’m worried it’s the former. And if so, it raises a different question: in the fight over regulating digital platforms, how might we think of countering cyberutopianism? Empirical research (e.g. by <a rel="noreferrer noopener" href="https://morganya.org/charisma.html" target="_blank">Morgan Ames on OLPC</a>, <a rel="noreferrer noopener" href="https://press.princeton.edu/books/hardcover/9780691163987/disruptive-fixation" target="_blank">Christo Sims on digital schools</a>, and <a rel="noreferrer noopener" href="https://escholarship.org/uc/item/3239b1qv" target="_blank">Lilly Irani on design</a>) points to some ways, but that’s a post for another time.</p>



<p>[Cross-posted on <a href="https://medium.com/swlh/are-surveillance-capitalists-behaviorists-does-it-matter-no-and-maybe-a7327265eead">Medium</a> and the <a href="https://www.getrevue.co/profile/TQE/issues/tqe-newsletter-issue-6-290242">TQE newsletter</a>.]</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Talk about gender in the early issues of Byte</title>
		<link>https://culturedigitally.org/2019/05/talk-about-gender-in-the-early-issues-of-byte/</link>
		
		<dc:creator><![CDATA[Kevin Driscoll]]></dc:creator>
		<pubDate>Fri, 24 May 2019 11:00:07 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[computing]]></category>
		<category><![CDATA[gender]]></category>
		<category><![CDATA[history]]></category>
		<guid isPermaLink="false">https://culturedigitally.org/?p=9100</guid>

					<description><![CDATA[In the mid-1970s, before you could buy a computer at the store and no one yet called them &#8220;personal,&#8221; tech enthusiasts in the U.S. gathered around newsletters and magazines to imagine a more computerized future. For the past two years, my research assistant Rahul Zalkikar and I have been systematically exploring the first five years [&#8230;]]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image"><img loading="lazy" width="760" height="1024" src="https://culturedigitally.org/wp-content/uploads/2019/05/byte-cover-1975-12-760x1024.png" alt="" class="wp-image-9101" srcset="https://culturedigitally.org/wp-content/uploads/2019/05/byte-cover-1975-12-760x1024.png 760w, https://culturedigitally.org/wp-content/uploads/2019/05/byte-cover-1975-12-111x150.png 111w, https://culturedigitally.org/wp-content/uploads/2019/05/byte-cover-1975-12-223x300.png 223w, https://culturedigitally.org/wp-content/uploads/2019/05/byte-cover-1975-12-768x1035.png 768w, https://culturedigitally.org/wp-content/uploads/2019/05/byte-cover-1975-12-640x862.png 640w, https://culturedigitally.org/wp-content/uploads/2019/05/byte-cover-1975-12.png 1033w" sizes="(max-width: 760px) 100vw, 760px" /><figcaption>Some readers bristled at this representation of their hobby from <em>Byte </em>in December, 1975.</figcaption></figure>



<p>In the mid-1970s, before you could buy a computer at the store and no one yet called them &#8220;personal,&#8221; tech enthusiasts in the U.S. gathered around newsletters and magazines to imagine a more computerized future. For the past two years, my research assistant <a href="https://github.com/zalkikar">Rahul Zalkikar</a> and I have been systematically exploring the first five years of <a href="https://archive.org/details/byte-magazine"><em>Byte</em></a> magazine, paying special attention to the commentary submitted by readers. Amid countless requests for help with sourcing parts and assembling kits, we found readers negotiating the norms and values of their fledgling <a href="https://mitpress.mit.edu/books/ham-radios-technical-culture">technical culture</a>. What sort of person could become a computer expert? How would non-experts make sense of these machines? And where would such a thing fit into the life of a family?</p>



<p>The preponderance of men in the microcomputing hobby became a recurring topic of discussion in the early issues of <em>Byte. </em>This spring, Rahul undertook a close examination of reader letters concerning gender, families, and computing. The result is <a href="https://zalkikar.github.io/">a concise, provocative exploration</a> of three moments in which <em>Byte</em> readers pushed back on the presumption that microcomputing was to remain the exclusive domain of (white, middle class) men. </p>



<p>From responses to a biased reader survey to a debate about the proper use of pronouns, Rahul identified a small but vocal group of <em>Byte</em> readers who seemed poised to challenge a social force that he terms &#8220;the dominant masculinity in the new world of personal computing.&#8221; By the early 1980s, this dominant masculinity was fixed in <a href="https://mitpress.mit.edu/books/recoding-gender">the stereotype of the &#8220;computer geek&#8221;</a> but the debates in Rahul&#8217;s study highlight a brief moment of opportunity in which the microcomputer seemed to invite a re-imagining of technical expertise among enthusiasts. Perhaps by returning to this period of uncertainty, we might better understand the historical relationship between gender identity and what Joy Lisi Rankin calls &#8220;<a href="http://www.hup.harvard.edu/catalog.php?isbn=9780674970977">the act of computing</a>.&#8221;</p>



<p>Rahul&#8217;s essay, &#8220;<a href="https://zalkikar.github.io/">The Gender Binary of Computing: Challenging Sexism in Technology</a>,&#8221; is available on his GitHub page, along with <a href="https://github.com/zalkikar/zalkikar.github.io">the data and source code underlying his statistical analyses</a>. Rahul will be starting the M.S. in Data Science program at NYU this fall.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The gentrification of the internet</title>
		<link>https://culturedigitally.org/2019/03/the-gentrification-of-the-internet/</link>
					<comments>https://culturedigitally.org/2019/03/the-gentrification-of-the-internet/#comments</comments>
		
		<dc:creator><![CDATA[Jessa Lingel]]></dc:creator>
		<pubDate>Wed, 13 Mar 2019 17:01:59 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[economics]]></category>
		<category><![CDATA[platforms]]></category>
		<guid isPermaLink="false">https://culturedigitally.org/?p=9086</guid>

					<description><![CDATA[This is an essay about technology, power relations and basic dignity.&#160;&#160;It is about the commercialization of online platforms and the difficulties of retaining individual power and autonomy online.&#160;&#160;It is about the gentrification of the internet.&#160;When I call the internet gentrified, I’m describing shifts in power and control that limit what we can do online. I’m [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>This is an essay about technology, power relations and basic dignity.&nbsp;&nbsp;It is about the commercialization of online platforms and the difficulties of retaining individual power and autonomy online.&nbsp;&nbsp;It is about the gentrification of the internet.&nbsp;When I call the internet gentrified, I’m describing shifts in power and control that limit what we can do online. I’m also calling out an economy and industry that prioritize corporate profits over public good, and pointing to the ways that some forms of online behavior have become the “right” way to use the Web, while other forms of behavior get labeled backwards or out of date. In the early days, the Web was driven by experiments in technology, DIY community building and curiosity around connecting with strangers from across the world. The Web we have now is guided by different principles, like business models that rely on a&nbsp;<a href="https://www.nytimes.com/2018/01/30/opinion/strava-privacy.html">constant transfer of data</a> from people to marketers, social norms of&nbsp;<a href="https://www.healthline.com/health-news/social-media-use-increases-depression-and-loneliness">consumption</a> and&nbsp;<a href="https://yalebooks.yale.edu/book/9780300209389/status-update">self-promotion</a>, and&nbsp;<a href="http://www.hup.harvard.edu/catalog.php?isbn=9780674368279">black boxing</a> the algorithms that structure the platforms we use. The internet is increasingly making us more isolated, less democratic, and beholden to major corporations and their shareholders. In other words, the internet is increasingly gentrified.</p>



<p>I’m painting in broad strokes here—of course corporations have always shaped the internet’s look and feel and of course DIY communities are still an important part of online life. But there’s no denying the fact that a small number of high-powered corporations have come to have significant control over what the web looks and feels like. As&nbsp;<a href="http://www.worldcat.org/oclc/1050083621">Siva Vaidhyanathan</a> has pointed out, a single company, Facebook, dominates the market for social media users, shifting a huge amount of economic and political power to one corporation.&nbsp;&nbsp;Meanwhile, Google dominates online searching with&nbsp;<a href="https://www.smartinsights.com/search-engine-marketing/search-engine-statistics/">75% of the global market</a>. The next most popular search engine, Bing, doesn’t even come close. Amazon’s marketplace has redefined what normal online shopping looks like<em>,&nbsp;</em>predicting our interests and changing our expectations about buying and selling<em>.&nbsp;</em>Power is so concentrated that living without the Big Five tech companies isn’t just inconvenient, it’s&nbsp;<a href="https://gizmodo.com/i-cut-the-big-five-tech-giants-from-my-life-it-was-hel-1831304194">almost impossible,</a>  leading&nbsp;some people to call for&nbsp;trust-busting shake-ups of the industry (<a href="https://www.nytimes.com/2017/04/22/opinion/sunday/is-it-time-to-break-up-google.html?_r=0">call</a>, <a href="https://www.theringer.com/tech/2018/11/19/18102162/tim-wu-facebook-antitrust-law-book-curse-of-bigness">call</a>, <a href="https://www.cnbc.com/2019/03/08/elizabeth-warren-pushes-to-break-up-companies-like-amazon-and-facebook.html">call</a>).&nbsp;Corporations with almost unlimited resources have monopolized digital culture, pushing out smaller companies and platforms, and in the process, defining what online interactions are possible. Condensing this much control goes beyond a reduction of consumer choice; it’s a form of technological gentrification.&nbsp;</p>



<p>When people connect gentrification to the internet, it’s usually about the tech industry’s role in&nbsp;<a href="https://www.newsweek.com/san-francisco-tech-industry-gentrification-documentary-378628">reshaping</a> neighborhoods that host their company <a href="https://www.wired.com/2017/02/tech-campuses-hinder-diversity-help-gentrification/">headquarters. </a>These (<a href="https://www.huffingtonpost.com/entry/silicon-valley-gentrification-low-wage-workers_us_56d08998e4b0871f60eb3318">very real</a>,&nbsp;<a href="https://www.citylab.com/equity/2015/09/the-complicated-link-between-gentrification-and-displacement/404161/">very important</a>) problems are more about how the industry has created inequalities in the spaces and communities surrounding their corporate headquarters. I see a connected but separate set of issues in the kinds of online spaces and relationships that are increasingly encouraged or restricted online. By calling the contemporary internet gentrified, my goal is both to diagnose a set of problems and lay out what internet activists can do to carve out more protections and spaces of freedom.</p>



<p>Before I get there, it’s important to be clear about what gentrification is and how it helps describe the modern, mainstream web. The term gentrification is divisive, with some seeing&nbsp;<a href="https://www.fastcompany.com/3026000/why-gentrification-can-be-a-good-thing">opportunities</a>  for&nbsp;<a href="https://www.theatlantic.com/business/archive/2015/06/gentrification-bad-word/396908/">economic development </a>and others a&nbsp;<a href="https://www.theatlantic.com/business/archive/2015/06/gentrification-bad-word/396908/">death</a> <a href="http://nymag.com/news/intelligencer/62675/">knell</a>  for the social and cultural histories of local communities. To make things more complicated, it’s not like there’s One Thing called gentrification – instead there’s a bunch of processes tangled up in competing stakeholders and institutions. As a starting point, in urban studies gentrification is defined as,</p>



<blockquote class="wp-block-quote"><p>an economic and social process whereby private capital (real estate firms, developers) and individual homeowners and renters reinvest in fiscally neglected neighborhoods through housing rehabilitation, loft conversions, and the construction of new housing stock. Unlike urban renewal, gentrification is a gradual process, occurring one building or block at a time, slowly reconfiguring the neighborhood landscape of consumption and residence by displacing poor and working-class residents unable to afford to live in ‘revitalized’ neighborhoods with rising rents, property taxes, and new businesses catering to an upscale clientele. </p><cite><a href="http://www.worldcat.org/oclc/865276680">Perez</a>, 2004, p. 139</cite></blockquote>



<p>People who see gentrification as a good thing tend to emphasize opportunities for new businesses and real estate development. But these benefits&nbsp;<a href="https://www.theatlantic.com/magazine/archive/2014/06/the-case-for-reparations/361631/">aren’t evenly distributed</a> – they usually go to people who already have wealth and resources. Gentrification changes the physical spaces in a neighborhood, bringing in new architectural aesthetics and new kinds of business. Existing houses seem smaller and more dated, and old businesses lose customers as new residents bring demands for cosmopolitan perks. Gentrification also changes the social norms in a neighborhood, with the potential for clashes over&nbsp;<a href="https://psmag.com/social-justice/gentrification-increaes-noise-complaints-in-nyc">noise</a>,&nbsp;<a href="https://shelterforce.org/2016/02/04/gentrification_and_public_schools_its_complicated/">parenting styles</a>, and even&nbsp;<a href="https://www.citylab.com/environment/2017/08/the-politics-of-the-dog-park/536463/">pets</a>.</p>



<p>Across different cities and neighborhoods, gentrification exacerbates inequality and normalizes certain social values while excluding others. With these tensions in mind, what exactly characterizes a gentrified internet? How can we map conditions of urban gentrification onto digital platforms?&nbsp;&nbsp;I see three key characteristics of gentrification in the contemporary web, all of which limit online freedoms for individuals in order to support the interests of major tech companies.&nbsp;</p>



<p><em>Isolation. </em>Gentrification results in pockets of isolation where longtime residents are boxed in by new neighbors with different income levels and (often) social or cultural expectations of neighborhood behavior. Neighbors can wind up deeply&nbsp;<a href="https://www.coloradotrust.org/content/story/thread-ties-segregation-gentrification">segregated</a>, living next door but going to different churches, sending their kids to different schools and shopping at different stores. Compare this to online filter bubbles. Before social media, forming communities online mostly meant meeting new people with a shared interest. No algorithms, no platform-based recommendations of friends or content, just showing up at a message board or in a chatroom and seeing who else was around. (Of course, who showed up was driven largely by who could afford a modem and the time to learn how to use it.) Platforms like reddit and 4chan still operate this way, but most social media platforms use existing IRL personal networks to link users and push content. Over time, it’s become the norm to push content based on likes and personal affinity, resulting in what&nbsp;<a href="http://www.worldcat.org/oclc/847365101">Eli Pariser</a>  has called filter bubbles. Rather than being exposed to diverse people and content, people are&nbsp;<a href="https://www.mediamatters.org/blog/2018/04/13/lack-diversity-core-social-medias-harassment-problem/219932">increasingly segregated</a>. What’s particularly troubling about online isolation is that offline, most&nbsp;<a href="http://www.leonidzhukov.net/hse/2017/networkscience/papers/McPherson_HomophilyInSocialNetworks.pdf">people are already filter-bubbled</a>  in terms of their social networks, meaning we tend to have friends from the same racial and class background. The promise of early online communities was getting ourselves outside those bubbles, a possibility that mainstream social media platforms increasingly deprioritize.</p>



<p><em>Increasing costs.</em> Gentrification is fundamentally about space, but it’s a process that unfolds over time. Gradually, new neighbors raise property values and taxes. Rising costs make previous communities unlivable in the long term for original residents and people of the same demographic. Online, gentrification happens as older platforms struggle to compete with the resources and values of newer platforms. In my research on&nbsp;<a href="https://mitpress.mit.edu/books/digital-countercultures-and-struggle-community">digital countercultures</a>, I found that communities on the margins struggle to make mainstream technologies meet their needs. For communities that had been online a long time, competing with new platforms like Facebook and Instagram was a losing battle because they were outspent and out-coded by the seemingly endless resources of big tech.&nbsp;&nbsp;</p>



<p><em>Uneven commercialization.</em> Gentrification isn’t just about who lives where, it’s about the&nbsp;<a href="http://www.governing.com/columns/assessments/gov-gentrification-local-business-extinction.html">kinds of businesses</a>  that can be sustained by the surrounding community. Gentrification often means the destruction of local businesses that supported existing communities in order to make way for new businesses that appeal to newcomers. In my neighborhood in South Philly, I’ve seen locally owned bodegas, diners and community centers turned into yoga studios, gastropubs and brunch spots. A perverse cruelty of gentrification is a shifting of otherness from newcomers to old timers. Neighborhoods with rich histories and community culture are read against the competing expectations and values of new arrivals. With more resources and influences, it’s often newcomers whose values and interests win out. Crucial changes in social norms also happen on the gentrified internet. For example, I’ve been writing a&nbsp;<a href="https://www.tandfonline.com/doi/abs/10.1080/24701475.2018.1478267">social history of craigslist</a>, and over and over again, I heard people describe the site as out of date. In interviews, I heard phrases like “the poor people’s internet” and “a website for the working class” to describe a platform that was viewed as elite and visionary when it first went online in 1996. But as new sites emerged with more features and a newer aesthetic, craigslist began to seem shabbier and shadier, and even as it continued to provide the same services, most people saw the site as backwards, outdated and on the brink of obsolescence.&nbsp;The risk here is of actively leaving people and platforms behind, not because they’ve stopped working or being useful but because they’ve stopped looking or feeling like the rest of the web.</p>



<h4>What to do?</h4>



<p>Working to combat gentrification doesn’t mean an end game of toppling Facebook or rendering the mainstream web irrelevant.&nbsp;&nbsp;A more immediate goal is simply to diagnose a set of problems and suggest steps for demanding change.&nbsp;&nbsp;Here are a couple ideas to get us started.</p>



<p><em>Be your own algorithm.&nbsp;</em>Rather than passively accepting the networks and content that platforms feed us, we need to take more ownership over what our networks look like so that we can diversify the content that comes our way. Platforms like&nbsp;<a href="https://mashable.com/article/facebook-fake-news-uk/#uuwQiAvXvkqk">Facebook</a>  and&nbsp;<a href="https://www.washingtonpost.com/technology/2019/01/25/youtube-is-changing-its-algorithms-stop-recommending-conspiracies/?utm_term=.5b6009428c39">YouTube</a>  are tweaking their algorithms for recommended videos because they&#8217;ve been pressured over facilitating the spread of viral news and extreme content.&nbsp;&nbsp;But as platforms work on this problem, we can take steps to be our own algorithms and deliberately diversify our networks and the content we see. On a practical level, this means doing a casual audit of the people we follow and asking, how can I diversify these voices and perspectives?&nbsp;&nbsp;This might mean seeking out more POCs, women, queer folk, differently abled people or neuro-atypical people, or it might mean trying to expose ourselves to content from people living in rural areas or other countries.&nbsp;Shaking up our networks can create more awareness about how platforms operate and perhaps reclaim some of the early web hype about learning new perspectives by encountering new people.</p>



<p><em>In the city as online, we need regulation</em>. Just as cities are struggling to find&nbsp;<a href="https://abc7ny.com/realestate/newark-out-to-protect-low-income-residents-from-gentrification/4848609/">workable</a><a href="https://newrepublic.com/article/144260/stop-gentrification"> interventions</a>  from local government, it’s become depressingly clear that politicians in the U.S. are&nbsp;<a href="https://www.cnet.com/news/some-senators-in-congress-capitol-hill-just-dont-get-facebook-and-mark-zuckerberg/">ill-informed</a>  and&nbsp;<a href="https://www.washingtonpost.com/business/technology/lawmakers-agree-social-media-needs-regulation-but-say-prompt-federal-action-is-unlikely/2018/04/11/d3ce71b0-3daf-11e8-8d53-eba0ed2371cc_story.html?noredirect=on&amp;utm_term=.7fe9d958d410">unlikely to intervene</a>  when it comes to big tech. But demanding action from legislators at every level can make a big difference. Just like attending local zoning meetings can help new residents understand neighborhood tensions, learning the basics of web platform policies isn’t hard, it just takes a little time. How many ISPs are there in your neighborhood? Are there small providers or mesh network alternatives? How many of your local representatives accept donations from major internet providers like Comcast?&nbsp;&nbsp;Start with your congressperson or city council rep, both of whom will likely have staffers who answer the phone rather than kicking you to a message machine. Ask about their position on net neutrality, about internet penetration, about local support for digital media literacy. Being informed is a crucial step in understanding the barriers to radical change.</p>



<p>But it isn’t just about learning the politics of the internet’s infrastructure, it’s also about&nbsp;<em>learning the politics of platforms</em>. Platforms&nbsp;<a href="https://www.wired.com/story/how-social-networks-set-the-limits-of-what-we-can-say-online/">love to create documents like “community guidelines”</a>  but these texts are hard to read and can change at will. Moreover, they’re always top down rather than bottom up. Just learning these guidelines is an important step, a parallel to learning how local tax codes shape neighborhood gentrification. Platforms&nbsp;<em>can</em> change their policies if enough users make demands. In 2014, Facebook&nbsp;<a href="https://www.theatlantic.com/technology/archive/2014/10/one-name-to-rule-them-all-facebook-still-insists-on-a-single-identity/381039/">changed its “real” name policy</a>  through concerted efforts of queer, trans and indigenous activism. We can demand change from our platforms, but it takes overcoming a sense of powerlessness, learning the stakes and stakeholders, and being thoughtful about how and with whom we spend our time online.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://culturedigitally.org/2019/03/the-gentrification-of-the-internet/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>New book excerpt! from Documenting Aftermath: Information Infrastructures in the Wake of Disasters, by Megan Finn</title>
		<link>https://culturedigitally.org/2018/11/documenting-aftermath/</link>
					<comments>https://culturedigitally.org/2018/11/documenting-aftermath/#comments</comments>
		
		<dc:creator><![CDATA[Translation-By Hector-Postigo]]></dc:creator>
		<pubDate>Wed, 14 Nov 2018 07:02:03 +0000</pubDate>
				<category><![CDATA[Book Chapter Sneak Peak]]></category>
		<category><![CDATA[infrastructure]]></category>
		<category><![CDATA[public]]></category>
		<guid isPermaLink="false">https://culturedigitally.org/?p=9077</guid>

					<description><![CDATA[Documenting Aftermath (MIT Press) (Amazon) looks at Northern California earthquakes in 1868, 1906, 1989, and today, and asks how information orders shaped post-disaster knowledge. I examine the institutions, infrastructures, and practices that shape how information is produced, circulated, shared, and used as a means of surveillance and control. This excerpt is derived from Chapter 5, which [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><em>Documenting Aftermath</em> (<a href="https://mitpress.mit.edu/books/documenting-aftermath">MIT Press</a>) (<a href="https://www.amazon.com/dp/0262038218/">Amazon</a>) looks at Northern California earthquakes in 1868, 1906, 1989, and today, and asks how information orders shaped post-disaster knowledge. I examine the institutions, infrastructures, and practices that shape how information is produced, circulated, shared, and used as a means of surveillance and control. This excerpt is derived from Chapter 5, which is trying to answer the question: what could the present information order look like after an earthquake in California? &#8211; Meg Finn</p>
<p><em>Thank you, Megan and the MIT Press for sharing! &#8211; Culture Digitally</em></p>
<p><img loading="lazy" class="size-full aligncenter" src="https://www.fabernett.com/pictures/48636_001.jpg?v=1488233013" width="1365" height="1089" /></p>
<p class="p1">If your community had been hit by an earthquake, a strong storm, or other disaster, how would you communicate with your loved ones and your neighbors as well as with the experts charged with responding? Those impacted by a quake would want to assure loved ones of their well-being; some people would attempt to do this with a phone call, and many would notify their loved ones via Facebook or on other social media platforms. People might expect that the Federal Emergency Management Agency (FEMA), a unit of the Department of Homeland Security, along with branches of local government, would help those affected, and that government officials would get on Twitter or television to tell people where to go for aid and how to be safe.</p>
<p class="p1">Contemporary US disaster plans imagine the techniques that the government, as part of the information order, a concept introduced by the late historian C. A. Bayly, should utilize in order to produce information. Yet, prior to earthquakes in 1868 and 1906, discussed in <i>Documenting Aftermath</i>, the government did not plan for disaster response. For example, in the 1868 earthquake along the Hayward Fault, earthquake publics struggled with the state&#8217;s role in disaster response. In 1868, no plan existed for how the government would react to an earthquake, and though there was demand for government intervention, there was very little. The follow-up to the calamitous 1906 Earthquake and Fire saw a larger local and federal government response to the disaster, but again, it was not based on a planned and publicly vetted disaster response process. The 1989 Loma Prieta earthquake included planned informational responses—particularly in the production of public information—and illustrated the <i>bureaucratization of disaster response</i> by the state. Throughout the Cold War, complex organizations had been put in place that were specifically charged with planning for and responding to disasters. The 1989 earthquake revealed that these plans envisioned a singular public, awaiting instructions from the government, transmitted by the media.</p>
<p class="p1">These disaster response plans themselves are a form of information infrastructure in the sense that they both describe and embody bureaucratic technology. In other words, the disaster response plans describe the actions that professional disaster responders should take to produce public information. And the plans themselves are a material representation of disaster information practices. While these bureaucratic technologies in no way constitute the whole of the information order after a disaster, they are worthy of examination because experience tells us that disaster plans critically shape the government&#8217;s actions. In the context of the United States, disaster plans give an idealized picture of government involvement in post-disaster information infrastructure. Plans explain how to both preserve the past and make the future. The plans preserve the past by, in some sense, assuming that the goal is to help people return to a pre-disaster existence, and the plans attempt to make the future by guessing what needs to be done after a disaster.</p>
<p class="p1">In the aftermath of a disaster, today&#8217;s government disaster response plans imagine that the government will assume two roles: as a consumer of public information, via sociotechnical assemblages for situational awareness, and as a producer of public information for citizens, the delivery of which is supported by a number of different public information infrastructures. In the case of disaster response plans, situational awareness involves centralizing records associated with incidents as well as informing disaster response both broadly and specifically in the area of public information. Situational awareness is thought to be a state of understanding the implications and context of a disaster such that one can make decisions about what to do next. The idea that the populace might be a source of situational awareness is a fairly new phenomenon. In the 1989 disaster response plans, earthquake publics were not imagined as a source of understanding for the government – knowledge of a crisis was to come from other disaster response professionals and government officials.</p>
<p class="p1">Situational awareness is a goal that is invoked often in new plans for disaster response, particularly around the practices of information and communication. After Hurricane Katrina in 2005, federal post-disaster analysis blamed a lack of situational awareness for what was widely agreed to be an appalling disaster response: &#8220;The lack of communications and situational awareness had a debilitating effect on the Federal response.&#8221; In the government&#8217;s post-disaster post-mortem reports, situational awareness, along with the information supposedly underpinning it, is a way to call attention to what people understood to be happening at the time of the disaster, and serves as a target of blame for poor decisions made due to limited or incorrect information. It is a technique that serves people in power, who are often at a distance from a disaster, in making decisions about how to respond. It also legitimizes choices around the mobilization and distribution of resources.</p>
<p class="p1">Beyond the role of the government as a consumer of reports generated by earthquake publics under the rubric &#8220;situational awareness,&#8221; the government is a producer of public information. Government disaster response plans imagine a post-disaster space of communication and often contain explicit instructions for how disaster response professionals are to communicate with earthquake publics. Conceptions of (a singular) &#8220;the public&#8221; in contemporary disaster response plans aim to be inclusive in their outreach. The newest disaster response plans attempt to produce &#8220;public information&#8221; such that wide swaths of &#8220;the public&#8221; can understand it. While the state increasingly looks to different disaster publics for situational awareness, the plans still treat the government as the primary informer of citizens. While the government&#8217;s vision of the public attempts to be inclusive, its conception of itself as an information producer is thoroughly hierarchical, both producing and processing information through the Incident Command System and the National Incident Management System (NIMS). The Incident Command System is a program that intentionally implements &#8220;institutional isomorphism.&#8221; That is, the idea is that everyone who shows up to respond to a disaster in a professional capacity understands the terminology. NIMS is a highly standardized and uniform organizational scheme that envisions a singular path for producing authoritative public information through the Joint Information System.</p>
<p class="p1">The instructions for creating public information aim for the government to be the informational authority. Yet, after a disaster, the government must work within an information order that is partially of its own making, through its production of information for earthquake publics, but it must also participate in an information order dominated by social media companies. In the United States, social media companies mediate both how people get news and how they maintain interpersonal relationships, and are influential in shaping contemporary event epistemology. Early Internet proponents imagined that the network might be a platform that would make it possible for all voices to be broadcast; ideally, social media platforms allow for a plurality of voices – unlike the government plans. On the one hand, the public information infrastructure of today is conceived of in terms of the production of documents by the many—the masses of Google, Twitter, and Facebook users, whose voices are broadcast far beyond the streets from where they access these platforms. On the other hand, the government response plans describe hierarchical organizational systems, such as the Incident Command System and NIMS, for producing authoritative information to be distributed to earthquake publics. Though disaster response plans and sociotechnical media platforms are both used to produce information about a disaster, one could characterize bureaucratic technology and information technology as forming a dialectical relationship.</p>
<p class="p1">Yet, over the last decade, the government has been adjusting its practices to communicate with potential earthquake publics on platforms that they already use. As crisis informatics researchers Axel Bruns and Jean Burgess observe, &#8220;Over the past decade, social media have gone through a process of legitimation and official adoption, and they are now becoming embedded as part of the official communications apparatus of many commercial and public sector organizations—in turn providing platforms like Twitter with their own source of legitimacy.&#8221; Social media are a key dimension of the contemporary information order. A series of research projects over the last decade have examined varied &#8220;emergent&#8221; social media practices after US disasters and shown how central social media corporations are to organizing post-disaster information practices and earthquake publics.</p>
<p class="p1">Social media platforms fit in well with the government&#8217;s disaster plans because they can be integrated into the hierarchical and centralized Joint Information System. Platforms like Twitter can also give the government a venue for directly distributing its &#8220;public information.&#8221; The government maintains a presence on crowdsourcing websites such as Twitter and Facebook, and uses these platforms to broadcast its messages. In 1989, disaster plans relied on mainstream media outlets to circulate the government&#8217;s messages, and disaster planners would simply hope that people would turn on their radios or televisions to receive information via the Emergency Broadcast System. In some sense, the government, by using Twitter, now has more control than ever over its communications to citizens.</p>
<p class="p1">The government also uses social media to make sense of a disaster. When FEMA used social media to improve situational awareness after Hurricane Sandy in 2012, social media was important not just for circulating public information but also for helping the government understand, and thus govern, the disaster. Outside the government, researchers and businesses have recognized social media&#8217;s potential value as a source of information about disasters, particularly as a source contributing to perpetual situational awareness. Ordinary people&#8217;s voices are potentially folded into the government&#8217;s situational awareness, which in turn informs the production of government public information. Even though there is more room for the voices of various earthquake publics to be incorporated into the government&#8217;s imagination of what is happening after a disaster, the government is still in the powerful position of deciding (or not) to listen to and legitimize certain voices, and these public information infrastructures—especially the ones including social media—have important limitations built into them.</p>
<p class="p1">The situational awareness that is produced by Twitter frequently relies on a &#8220;messy assemblage&#8221; of other services, thereby reshaping and deforming the world it attempts to bring the analyst closer to. Lucy Suchman describes some of these in her discussion of the messy assemblage that produces situational awareness during war. In military situations, the various media used to create situational awareness enable people who operate drones remotely to believe they understand a situation well enough to decide who to kill. And Suchman makes it clear that the stakes of situational awareness—&#8220;the messy assemblage of socio-technical mediation&#8221;—are high: identifying objects incorrectly can lead to accidentally killing civilians. The stakes for situational awareness in the disaster context are different; in theory, situational awareness allows decision makers and those in charge of resources to decide what to do as well as where to deploy those resources. Theoretically, Twitter data sets could enable a small number of key decision makers to decide to use their resources to save some people, while others, who may not be visible on social media, perish. The distortions that the messy assemblages producing situational awareness introduce are not obvious because many pieces are owned by social media companies, which are not transparent about what data they collect, what they do with it, and what portion of it is available to whom.</p>
<p class="p1">Today, earthquake publics are often what Tarleton Gillespie calls &#8220;calculated publics.&#8221; They are calculated through the design of aspects of public information infrastructures—social media corporations—and the adoption of these sociotechnical practices in ways that reify the limitations of these calculated publics. And it is not just social media platform companies that calculate publics. The government has a calculated public embedded in its imagined post-disaster information practices as well. In disaster response plans, the government envisions inclusive earthquake publics, with different languages and abilities, but these same plans also imagine making sense of disaster impacts by using particular technologies that are not always inclusive. When I examine the role of information practices and technologies in disaster planning, I find that seemingly oppositional forces are intertwined in symbiotic ways. People seek to reach government disaster response organizations using social media, government disaster response organizations use social media to reach the earthquake publics they are trying to help, social media companies make products to account for people after a disaster, and researchers build tools to help government disaster response organizations attempt to use social media information in their response activities. Social media technologies can be, and are being, integrated into the centralizing information practices described in disaster response plans.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://culturedigitally.org/2018/11/documenting-aftermath/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>There&#8217;s a reason that misleading claims of bias in search and social media enjoy such traction.</title>
		<link>https://culturedigitally.org/2018/08/theres-a-reason-that-misleading-claims-of-bias-in-search-and-social-media-enjoy-such-traction/</link>
		
		<dc:creator><![CDATA[Tarleton Gillespie]]></dc:creator>
		<pubDate>Wed, 29 Aug 2018 22:22:57 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[bias]]></category>
		<category><![CDATA[Donald Trump]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[platforms]]></category>
		<category><![CDATA[search]]></category>
		<guid isPermaLink="false">https://culturedigitally.org/?p=9073</guid>

					<description><![CDATA[President Trump’s tweets charging that Google search results are biased, against him and against conservatives, are the loudest and latest version of a growing attack on search engines and social media platforms. It is potent, and it’s almost certainly wrong. But it comes at an unfortunate time, just as a more thoughtful and substantive challenge [&#8230;]]]></description>
										<content:encoded><![CDATA[<p class="graf graf--p">President Trump’s <a class="markup--anchor markup--p-anchor" href="https://twitter.com/realDonaldTrump/status/1034371152204967936" target="_blank" rel="noopener" data-href="https://twitter.com/realDonaldTrump/status/1034371152204967936">tweets</a> charging that Google search results are biased, against him and against conservatives, are the loudest and latest version of a growing attack on search engines and social media platforms. It is potent, and it’s almost certainly wrong. But it comes at an unfortunate time, just as a more thoughtful and substantive challenge to the impact of Silicon Valley tech companies has finally begun to emerge. If someone were truly concerned about free speech, news, and how platforms subtly reshape public participation, they would be engaging these deeper questions. But these simplistic and ill-informed claims of deliberate political bias are the wrong questions, and they risk undermining and crowding out the right ones. Trump’s charges against Google, Twitter, and Facebook reveal a basic misunderstanding of how search and social media work, and they continue to confuse “fake news” with bad news, all in the service of scoring political points. However, even if these companies are not responsible for silencing conservative speech, they may be partly responsible for allowing this charge to gain purchase, by being so secretive for so long about how their algorithms and moderation policies work.</p>
<p class="graf graf--p">So what do search engines actually do when users access them for information or news? Search engines deliver relevant results, nothing more. That judgment of relevance is based on hundreds of factors, including popularity, topic relevance, and timeliness. Results are fluid and personalized. There’s plenty of room in this complex process for overemphasis and oversight, and these are important questions to examine. But serious researchers who actually study this are careful to take into account the effects of personalization, changes over time, and the powerful feedback effects of users. This is a far cry from looking at your own search results and being troubled by what you see.</p>
<p class="graf graf--p">To understand, for instance, the results for “Trump” in Google News, or “Trump news” in Google — different things, by the way — we would need to consider some much more likely explanations than deliberate political manipulation: major outlets like CNN may publish a lot more content a lot more often; more users may click on, read, and forward links from these sources; outspoken right-wing sites like Gateway Pundit may have much less trust outside of their devoted base than they imagine; CNN may be much more congruent with centrist political leanings than Trump and conservative critics admit; well-established news sources may already circulate more widely and successfully on social media platforms like Facebook and Twitter, boosting their rankings on search engines; users may simply be more convinced by these news sources, “voting” for them with their clicks and links in ways that Google picks up on.</p>
<p class="graf graf--p">In truth, there are important questions to be asked about search engines, social media platforms, and the circulation of news online. There are profound concerns about the economic sustainability of journalism itself when it has to compete on social media platforms. There are profound concerns about the subtle effects of how algorithms work. But the noise that right-wing critics are stirring up is not subtle, it is not helpful, it is not well informed — and more than that, it is clearly about scoring political points. Those claiming political bias seem wholly uninterested in acknowledging the inquiries already underway. (Even the author of the report Trump was likely reacting to <a class="markup--anchor markup--p-anchor" href="https://www.washingtonpost.com/opinions/i-wrote-the-article-about-media-bias-in-google-searches-regulation-isnt-the-answer/2018/08/29/15bdaae2-abaa-11e8-8f4b-aee063e14538_story.html" target="_blank" rel="noopener" data-href="https://www.washingtonpost.com/opinions/i-wrote-the-article-about-media-bias-in-google-searches-regulation-isnt-the-answer/2018/08/29/15bdaae2-abaa-11e8-8f4b-aee063e14538_story.html">acknowledges that it was unscientific</a> and disagrees with the suggestion that regulation of search should follow.)</p>
<p class="graf graf--p">Charges of left-leaning bias are not new, of course. They come from a very <a class="markup--anchor markup--p-anchor" href="https://www.washingtonpost.com/news/posteverything/wp/2018/08/01/how-republicans-trick-facebook-and-twitter-with-claims-of-bias/" target="_blank" rel="noopener" data-href="https://www.washingtonpost.com/news/posteverything/wp/2018/08/01/how-republicans-trick-facebook-and-twitter-with-claims-of-bias/">old playbook</a> conservatives have used against newspapers and broadcasters for decades. Unfortunately, Silicon Valley is partly to blame for why it is working so well today. Search engines and social media platforms have been too secretive about how their algorithms work, and too secretive about how content moderation works. In the absence of substantive explanations, users have been left to wonder why search results look the way they do, or why some posts get removed and others don’t. This uncertainty breeds suspicion, and that suspicion goes looking for other explanations. This leaves room for trolls, conspiracy mongers, and demagogues to suggest that the platforms are silencing them for their political speech — conveniently overlooking the fact that they have been suspended for making hateful threats, or can’t reach the first page of search results because readers trust other sources. And Silicon Valley has bruised its users’ trust for so long that even its genuine explanations sound suspect.</p>
<p class="graf graf--p">Some of the press coverage, when it’s not careful, can inadvertently make the very same easy assumptions that these critics do. Search results, trending lists, and content moderation are not the same thing, they are not managed by the same people, and they are not handled in the same way. Too often, a critic will take ill-informed charges against search, one outdated incident regarding trending, and continued uncertainty about moderation practices, and lace them together into a blanket charge of bias. But they are simply different things.</p>
<p class="graf graf--p">It is unnerving to feel like an apologist for these tech companies. There are real and concerning questions about how search and social media work. I ask some of these questions in my own <a class="markup--anchor markup--p-anchor" href="https://www.amazon.com/dp/030017313X" target="_blank" rel="noopener" data-href="https://www.amazon.com/dp/030017313X">research</a>, and my field has been thinking about them for years. The ways these companies have addressed, or often failed to address, the public ramifications of search algorithms and moderation policies have been deeply problematic. But these questions of bias distract us from the deeper problems.</p>
<p class="graf graf--p">It is also disconcerting, just as the public is finally grasping the subtle ways in which search and social media platforms matter, that we are ready to fall back on so simplistic a charge as deliberate political bias. I feel a bit like critics of mainstream news media, who for years have tried to highlight the way contemporary US news organizations are subtly centrist, structurally cautious, founded on commercial imperatives, and underattentive to marginalized voices — and who now have to bracket those critiques and come to the defense of CNN when the President dismisses them as “fake news.” Those of us who ask hard questions about search and social media should do so, but we must also steadfastly refuse to lump these real concerns in with facile, politically motivated charges of bias that miss the deeper point.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Trials of Media Research</title>
		<link>https://culturedigitally.org/2018/07/the-trials-of-media-research/</link>
		
		<dc:creator><![CDATA[Jeff Pooley]]></dc:creator>
		<pubDate>Tue, 31 Jul 2018 11:40:44 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://culturedigitally.org/?p=9059</guid>

					<description><![CDATA[Media research is a vexing enterprise. Trapped in the borderlands between social science and the humanities, the study of media and communication bears the liabilities of both. We have to contend with all the challenges that sociologists and literary scholars face: the subjective baggage of the analyst, her struggle to interpret unstable meanings, the strange [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Media research is a vexing enterprise. Trapped in the borderlands between social science and the humanities, the study of media and communication bears the liabilities of both. We have to contend with all the challenges that sociologists and literary scholars face: the subjective baggage of the analyst, her struggle to interpret unstable meanings, the strange fact that her descriptions double back on the reality she purports to merely describe. Those are our challenges too, but we face them with special ferocity. The stuff that we study—internet memes, for example, or self-learning algorithms—are characterized by ceaseless churn. Even the categories we use, like “audience” or “content” or “producer,” get washed away by the pace of change. There is nothing fixed or frozen to linger on; everything we study is <em>on the move</em>, looping, dynamic, and messy. The word itself, “media,” gets at this fundamental instability: a medium is something <em>in between</em>, the airy space in the interstices of solider things.</p>
<p>So we can’t pretend to be a “science,” not in the confident sense at least. We are better off, as media scholars, submitting to our inadequacy. This means humility as a disposition of principle. Everything we say is tentative and revisable—good enough, at best, for the moment. But not for the next.</p>
<h2>1. The Problem of Researcher Subjectivity</h2>
<p>Scholars of all stripes have values, beliefs, and prejudices, just like all human beings. Try as they might, researchers can never bracket—not all the way—their situated, partial humanity that they bring to bear on their objects of study. A measure of this subjectivity isn’t even conscious, embedded as it is in language and taken-for-granted assumptions. Even the choice of <em>what</em> to study is irredeemably value-laden: why this, and not that? All of which is to say that there is no window on the world, no “god’s-eye” view, no “objectivity” in academic inquiry. This is as true of physics as it is of economics, but the consequences for social research are far more hobbling. A social researcher’s mix of beliefs and assumptions, what we might call his <em>worldview</em>, is of the same, human kind as the people and communities that he studies. He brings subjectivity to bear on other subjectivities. For the historian or anthropologist, this state of affairs poses a special challenge, that of overcoming <em>distance</em>: a gap in time, or in cultural difference.</p>
<p>For the media scholar the problem is reversed: Our subjects of study very often share our world. We use smartphone apps, for example, to record our subjects’ obsession with smartphone apps. Or we may study <em>Game of Thrones</em> fan fiction through the prism of our own consumption. Our problem, in other words, is the blindness of proximity. The task we face is not, like the historian or anthropologist, to make the strange familiar, but the opposite tack: to make the familiar strange.</p>
<h2>2. The Problem of Social Change</h2>
<p>Natural scientists tend to study more-or-less stable things—like rocks for a geologist—with an eye to finding general patterns or even “laws”. Gravity behaves the same way everywhere in the universe. A pathogen that caused pneumonia a hundred years ago will, barring medical intervention, do the same today.</p>
<p>Social researchers, in contrast, have nothing stable to cling to. The shared practices of any given human community may or may not resemble another. The only guarantee is that both will change over time, and in reaction to mutual contact. Scholars of the social, as a result, study a moving target. A 1950s book on European marriage, for example, may have reflected the norms and practices prevailing at the time; but Europeans today are far less likely to marry at all. It’s not that the 1950s book was inaccurate; it’s just outdated.</p>
<p>Media scholars face this problem of change at the pace of Silicon Valley. The interaction of markets, people, and technologies means that change, for us, is more like a rolling boil. A networked babel of human meanings feeds responsive algorithms that, in turn, circulate new meanings, all of it mediated by relentlessly updated software and hardware. If the plodding pace of academic publishing hardly seems up to the task, that’s because we can never pause long enough to take stock—or if we do, it’s already too late.</p>
<h2>3. The Problem of Interactive Kinds</h2>
<p>Unlike rocks and quasars, human beings can (and do) respond to the way they are described. A quasar, after all, does not change its self-definition after an astrophysicist labels it. Rocks and quasars are <em>indifferent</em> to the words we use to understand them. But humans live in a world of meanings, and these meanings include the ones that social researchers circulate. Take the label “homosexual,” which gained academic currency in the late 19th century to describe same-sex behavior. The term was adopted, in the West, as a clinical diagnosis—as a medicalized, treatable pathology. Many LGBT persons adopted the label, and their self-concepts changed as a result. Over time, and with special force in the first decades of the gay rights movement, the term itself was rejected for its pathologizing residues. Here is an example of what the philosopher Ian Hacking calls the “looping effects of human kinds”: an academic/clinical label was adopted and transformed by the labeled, requiring scholars and clinicians to adapt.</p>
<p>Social researchers, because they study self-interpreting animals (i.e., humans), must make sense of a social world that reacts to their documenting efforts. Social research is an interactive endeavor, an unstable loop of researchers and the researched. There are, as you might guess, ethical implications. The act of studying, prodding, labeling, and measuring is always part of the story, since the observed are thinking, reacting agents themselves. In some sense scholars <em>enact</em> the world they claim to merely depict.</p>
<p>For media scholars the challenge—in a now-familiar pattern—is more acute. The material that we write about, and thereby characterize, celebrate or condemn, is earth-bound and ordinary, the stuff of everyday life. We can’t even pretend to be detached observers, since our language and research tools are bound up in the popular media culture that we aim to understand.</p>
<h2>4. The <em>Verstehen</em> Problem</h2>
<p>Social researchers, as you know, can’t merely describe behavior and institutions like a biologist would an ant colony. The observable patterns aren’t enough for human scientists, since there’s a whole world of meaning and interpretation that stands behind the way people interact. Following the great German sociologist Max Weber, we can use <em>verstehen</em> to refer to the scholarly effort to reconstruct the meanings people make and circulate. A famous American anthropologist, Clifford Geertz, explained the point:</p>
<blockquote><p>Believing, with Max Weber, that man is an animal suspended in webs of significance he himself has spun, I take culture to be those webs, and the analysis of it to be therefore not an experimental science in search of law but an interpretative one in search of meaning.</p></blockquote>
<p>There is an obvious difficulty in all this: how does one gain access to these meanings, and aren’t the scholar’s summaries, anyway, mere interpretations (of interpretations)? Even if anthropologists or others manage to reconstruct these “webs of significance” with more or less fidelity, aren’t these meanings so particular and fleeting as to be worthless?</p>
<p>If pinning down patterned meanings is a problem for all social researchers, the effort is especially taxing for media scholars. What is the meaning of a GIF that spreads from an Irish teenager to a Chinese septuagenarian in five minutes? Since we investigate the in-between—since the detritus of a viral media culture is unstable by definition—the effort to locate stability looks like a fool’s errand. We study the interstitial, the evanescent, the networked: meaning-in-motion.</p>
<h2>5. The Problem of the Unobserved</h2>
<p>The social world that human scientists study is made up of more than meanings. There is, too, the harder stuff of <em>structure</em>, much of which operates outside our everyday experience. Consider a band t-shirt that you bought at a concert. The shirt has meanings that a scholar might tease out in terms, say, of your performance of identity, or your taste profile. But what about the global supply chain that brought the shirt to the concert in the first place? The South Asian factory, the Danish shipping company, the New York-based trademark clearance operation, the record company’s bulk order—none of that registered with you, yet your purchase set it all in motion. Wide swaths of social life have this “behind people’s backs” character. The researcher’s task is to describe these hidden structures, to render them legible to policy-makers and, in theory, to the democratic public.</p>
<p>For the 20th-century media scholar, this task resembled the sociologist’s or the economist’s. A media industry researcher might study the ownership structure of major media conglomerates, and write about the implications. That kind of work wasn’t too different from a political scientist studying global diplomacy, or an economist studying the labor market: the media scholar, like the others, is representing a complex structure in charts and words. But what about our 21st-century media culture, deeply entangled as it is with <em>algorithms</em>—the complex, self-adjusting software code that governs so much of our online life? Because of their sheer complexity, and because they can “learn” from their human inputs, algorithms are at least partly inscrutable. Even the engineers who maintain Google’s search algorithms claim not to comprehend the ranking system they initially authored. The system is so complex, and constantly evolving “on its own,” that they too struggle to guess what’s in the black box. So it’s not just that Google and Facebook guard their algorithms like state secrets; it may be impossible, even in principle, for media scholars to explain these unseen motors of popular culture.</p>
<p><div id="attachment_9061" style="width: 521px" class="wp-caption aligncenter"><a href="https://culturedigitally.org/wp-content/uploads/2018/07/JPEG-image.jpeg"><img aria-describedby="caption-attachment-9061" loading="lazy" class="size-full wp-image-9061" src="https://culturedigitally.org/wp-content/uploads/2018/07/JPEG-image.jpeg" alt="Sisyphus, After Tiziano by Vik Muniz" width="511" height="648" srcset="https://culturedigitally.org/wp-content/uploads/2018/07/JPEG-image.jpeg 511w, https://culturedigitally.org/wp-content/uploads/2018/07/JPEG-image-118x150.jpeg 118w, https://culturedigitally.org/wp-content/uploads/2018/07/JPEG-image-237x300.jpeg 237w" sizes="(max-width: 511px) 100vw, 511px" /></a><p id="caption-attachment-9061" class="wp-caption-text">Sisyphus, After Tiziano by Vik Muniz</p></div></p>
<h2>A Humble Enterprise</h2>
<p>After this catalog of limitations, why go on with media research at all? It’s a fair question. In the Greek myth of Sisyphus, the gods punished the wayward king by fating him to push a boulder up a hill, only to watch it roll back—over and over, through eternity. There’s something Sisyphean about media research.</p>
<p>But the endeavor is still worth something. There are, first, the intellectual pleasures of question-asking themselves. Maybe because it’s changing so fast, the media landscape is endlessly fascinating. Only the truly uncurious would find its study boring.</p>
<p>It’s also true that the stakes are simply high. Nearly every human on earth lives in and around media, to an extent that makes human cultures a hundred years in the past unrecognizable to us. The process-noun that media scholars use to capture this extraordinary bundle of changes is “mediatization.” The centrality of mediatized worlds in every facet of our social life—politics, medicine, war, immigration, the workplace—is both inescapable and <em>recent</em>. The topic is too big to walk away from.</p>
<p>So we need to keep pushing the boulder up the hill, but with all the humility we can muster. Our findings, we should announce unblushingly, are always and already inadequate. We might even make a habit of repeating why this is the case, just to keep academic hubris at bay. By foregrounding these limits, and by sustaining a culture of peer criticism, we can go about our humble work.</p>
<p><em>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-</em></p>
<h5><em>Note: This brief essay was originally written for students in my advanced undergraduate methods course. I am posting it here with the hope that it might prove useful in similar or other contexts.</em></h5>
<p>&nbsp;</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>read an excerpt from Mike Ananny&#8217;s new book, Networked Press Freedom</title>
		<link>https://culturedigitally.org/2018/07/networked-press-freedom/</link>
		
		<dc:creator><![CDATA[Mike Ananny]]></dc:creator>
		<pubDate>Mon, 09 Jul 2018 18:25:24 +0000</pubDate>
				<category><![CDATA[Book Chapter Sneak Peak]]></category>
		<category><![CDATA[history]]></category>
		<category><![CDATA[industry]]></category>
		<category><![CDATA[journalism]]></category>
		<category><![CDATA[social media]]></category>
		<guid isPermaLink="false">https://culturedigitally.org/?p=9054</guid>

					<description><![CDATA[In my new book Networked Press Freedom: Creating Infrastructures for a Public Right to Hear [MIT Press &#124; Amazon] I critically examine what press freedom means today.  I argue that, as news production, circulation, and interpretation are increasingly distributed across a new and unstable set of humans and nonhumans—from journalists and algorithms to platform designers [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>In my new book <em>Networked Press Freedom: Creating Infrastructures for a Public Right to Hear</em> [<a href="https://mitpress.mit.edu/books/networked-press-freedom">MIT Press</a> | <a href="https://www.amazon.com/Networked-Press-Freedom-Creating-Infrastructures/dp/0262037742">Amazon</a>] I critically examine what press freedom means today.  I argue that, as news production, circulation, and interpretation are increasingly distributed across a new and unstable set of humans and nonhumans—from journalists and algorithms to platform designers and bots—it is increasingly difficult to say exactly what press freedom means.  What is the press trying to be free from?  To what ends and for which versions of the public?  How do we recognize a free versus an unfree press?</p>
<p>I define networked press freedom as a system of separations and dependencies among humans and nonhumans that helps to ensure not only journalists’ right to speak but publics’ rights to hear.  Engaging with a wide range of literature and analyzing a 7-year corpus of digital news examples, I argue that the networked press earns its freedom to the extent that it creates defensible publics.  Instead of only seeing press freedom as journalists’ right to pursue their visions of the public free from governments, markets, and technologies, the book tells a nuanced and historically grounded story that helps readers ask: what kind of public, what kind of freedom, and what kind of press?  Below is an excerpt. (This excerpt was first posted at the <a href="http://www.niemanlab.org/2018/06/freedom-from-what-its-time-to-broaden-the-definition-of-a-free-press/">Nieman Lab</a>.)</p>
<hr />
<p>&nbsp;</p>
<p>What, exactly, is press freedom, and why does it matter? In the popular discourse of the United States, we do not ask this question very often or very deeply. The answers are obvious and almost cliché: the public has a right to know, journalists are the people’s watchdogs, they afflict the comfortable and comfort the afflicted, democracy dies in darkness, and voters need objective information to be good citizens. Popular histories of modern U.S. journalism celebrate heroes who spoke truth to power and brought down institutions—Ida B. Wells, Nellie Bly, Ida Tarbell, Edward R. Murrow, I. F. Stone, Bob Woodward, Carl Bernstein, Walter Cronkite. They often are remembered as most effective when they were left alone to pursue their visions of what they thought the public needed. These virtuous, creative, public-spirited, hard-working storytellers occupy powerful positions within the modern mythology of press freedom. If we just get out of the way of good journalists and let them tell truth to power, they will produce the information that vibrant democracies need.</p>
<p>This myth is somewhat true, and these heroes were indeed expert storytellers who challenged each era’s norms. But when we think about press freedom only or even mostly as the freedom of journalists from constraints, it becomes a narrow and almost magical phenomenon that depends on individuals and heroism. It says that journalists already know what the public needs, and just need freedom from the state, marketplaces, and audiences to pursue self-evident things like truth and the public interest. These brave journalists and publishers show their commitment to the public and the power of their independence by going to court and sometimes jail to protect sources and fight censorship. If journalists and publishers can get truth to the public, then individual readers and viewers will be able to make informed decisions about how to think and vote. Ultimately, the press wants to be left alone so that <em>you</em> can be left alone. The kind of democracy that dominates this common image of press freedom relies on a lot of independences—a lot of <em>freedoms from</em>.</p>
<p>This book tries to challenge this mythology. I want to complicate the idea of press freedom and show that it emerges not from individual heroes but from social, technological, institutional, and normative forces that vie for power, imagine publics, and implicitly fight for visions of democracy. I see press freedom as a concept to think with—a generative and constructive tool for looking at any given era of the press and public life and asking, “Is <em>this</em> version of press freedom giving us the kind of publics we need? If not, how do we revise the institutional arrangements underpinning press freedom and make a different thing that we agree to call ‘the press’?” Alternatively, how do we adjust our normative expectations about what publics should be, creating a different image of freedom that we then might demand from institutions that make up the press? If we see press freedom not as heroic isolations—journalists breaking free to tell truths to the publics they imagine—but as a subtler system of separations and dependencies that <em>make</em> publics, then we might see each era’s types of press freedom as bellwethers for particular visions of the public. Ideas of press freedom become evidence of thinking about publics. Rethinking press freedom can be a way to see how press power flows, a prompt to ask which flows produce which publics, and a challenge: what types of news, publics, or presses are we <em>not</em> seeing because our vision of press freedom is so narrow?</p>
<p>If you think press freedom is a particular thing, you will likely look for that thing when you want to see whether a democracy is healthy or whether journalists are doing their jobs. Assumptions about press freedom can shut down conversations about the press and democracy: “We have a free press, so the election result is what it should be” or “We have a free press, and corruption is still rampant!” or “If we had a free press, then we’d have a different government” or “A free marketplace is a free press because truth comes from competing viewpoints.” Statements like these—coming from journalists, audiences, politicians, advertisers, publishers—assume that we already know what we mean by a free press and that our problem is just implementing it.</p>
<p>But if we can liberate the idea of press freedom from these assumptions and assumptions that equate it with whatever journalists say publics need, then press freedom becomes a generative and expansive tool—a way to think about publics, self-governance, and democracy. Because, as Edwin Baker puts it, <a href="https://www.amazon.com/Markets-Democracy-Communication-Society-Politics/dp/0521009774">different democracies need different media</a>, we can complicate democracy by thinking more creatively about press freedom.</p>
<p>Given this moment, when media systems are in fundamental flux, this book offers a way to think about press freedom as sociotechnical forces with separations and dependencies that help to make publics. I aim to engage with and use this moment of fundamental change to show what press freedom could mean. Contrary to the dominant historical myth in the United States, I argue that press freedom should not be seen simply as journalists’ freedom to write and publish. Rather, press freedom is a normative and institutional product of any given era: it is what people <em>think</em> press freedom should mean and how people have arranged people and power to achieve that vision.</p>
<p>Most simply, press freedom is the right and responsibility to create separations and dependencies that enable democratic self-governance. It is the power and obligation to know and defend the publics that its separations and dependencies create. Today these separations and dependencies live in distributed, technological infrastructures with new actors and often invisible forces, so for the networked press to claim its autonomy, it needs to show how and why it arranges people and machines in particular ways. It needs to understand how its humans and nonhumans align or clash to create some publics but not others. It needs to be able to defend why it creates such meetings, and when necessary for a particular image of the public, it needs to develop new types of sociotechnical power that let it make new types of publics.</p>
<p>Rather than abandoning or collapsing the idea of press freedom—seeing it as naive or anachronistic—my aim is to revive and redeploy it. I trace the idea of press freedom through theories of democratic self-governance, situate it within the press’s institutional history, argue that each era of sociotechnical change creates a particular meaning of press freedom, and ask how the contemporary, networked press might claim its freedom and make new publics. Instead of being seen as a holdover from a time that no longer exists, press freedom could be viewed as a powerful framework for arguing why and how the networked press could change.</p>
<p>Interspersed with this tour of institutional forces, I try to deploy my framework and use this new notion of press freedom to argue for a particular normative value—a public right to hear. I claim that the dominant, historical, professionalized image of press freedom—as whatever journalists say they need to be <em>free from</em> to pursue self-evident public interest—privileges an individual right to speak over a public right to hear. It confuses journalists’ freedom to publish with publics’ rights to hear what they need to hear in order to sustain themselves as publics—to realize the inextricably shared conditions under which they live, discover and debate their similarities and differences, devise solutions to predicaments, insulate themselves from harmful forces and nurture contrarian viewpoints, recognize the resources that hold them together, and reinvent themselves through means other than the rational, informational models of citizenship that dominate the traditional mythology of U.S. press freedom. For publics to be anything other than what unconstrained journalists imagine them to be, press freedom can be defensible only if it can be shown that the press’s institutional arrangements produce expansive, dynamic, diverse publics.</p>
<p>In an era when many assumptions about communication and information are being reconsidered, it is difficult to say exactly what journalists can or should be free from. A better question to ask might be, “How is the networked press—journalists, software engineers, algorithms, relational databases, social media platforms, and quantified audiences—creating separations and dependencies that enable a public right to hear, make some publics more likely than others, and move beyond an image of the public as whatever journalists assume it to be?”</p>
<p>Three stories can help illustrate the phenomenon. First, in September 2008, high in Google News’s list of results for a search on “United Airlines” was a story in the <em>South Florida Sun Sentinel</em> on United’s recent bankruptcy filing. The story <a href="https://www.wired.com/2008/09/six-year-old-st/">detailed</a> how United had lost significant revenue, could not meet market forecasts, and needed protection from creditors and time to restructure. A Miami investment adviser responsible for publishing news alerts through Bloomberg News Service saw the story and added it to Bloomberg’s newsletter; United’s stock dropped 75 percent in one day before trading was halted. Unfortunately for United, the <em>Sentinel</em>’s website displayed the current date (2008) at the top of its page; it did not include the story’s original date of publication (2002). Google’s Web crawler mistook the old story for a current story, creating a perfect storm of misinformation: the <em>Sentinel</em> displayed dates in a confusing manner; Google’s crawler read the only date it saw and made an assumption; the investment adviser assumed that Google highly ranked recent information; Bloomberg subscribers and high-frequency traders assumed that the newsletter contained timely and actionable information; and the stock market assumed that its behavior was rational and based on true information. This is a story of networked press freedom because although the <em>Sentinel</em> may have tipped the first domino, the failure is the fault of no single actor. A sociotechnical failure of data, algorithms, individuals, and institutions together led to the creation of false news that drove action.</p>
<p>Second, in 2008, the <em>Pocono Record</em> published an online story about Brenda Enterline’s sexual harassment lawsuit against Pocono Medical Center. In comments left by readers under the story, several people anonymously said that they had personal knowledge of incidents relevant to the lawsuit. When Enterline’s attorneys subpoenaed the newspaper for access to the commenters, the paper refused, claiming that it had a right and obligation to protect the <a href="http://www.dmlp.org/threats/enterline-v-pocono-record">commenters’ First Amendment rights</a> to anonymity. The Pennsylvania district court agreed, essentially extending a de facto shield law around the <em>Pocono Record</em>’s reporters and commenters. In contrast, also in September 2008, a grand jury in Illinois successfully subpoenaed the <em>Alton Telegraph</em> for the names, home addresses, and IP addresses of anonymous commenters who left responses to an online story the paper had run about a murder investigation. The paper argued that “the Illinois reporter’s shield law protects the identities of the anonymous commenters as ‘sources,’” but the court disagreed, saying that such a shield <a href="http://www.dmlp.org/threats/illinois-v-alton-telegraph">covers only reporters and not commenters</a>. Such cases have continued, with an Idaho judge ruling in 2012 that the <em>Spokesman-Review</em> had to reveal the identity of an anonymous commenter accused of libel, and a 2014 U.S. federal court ruling that the NOLA Media Group <a href="https://www.poynter.org/news/court-oks-subpoena-nolacom-commenters-identities">had to reveal</a> names, addresses, and phone numbers of its anonymous commenters. 
Even though the First Amendment protects Americans’ right to <a href="http://www.dmlp.org/blog/2013/when-comments-turn-ugly-newspaper-websites-and-anonymous-speech">speak anonymously</a>  and several states have <a href="http://www.dmlp.org/state-shield-laws">shield laws</a> designed to protect newspapers from releasing information against their will (Digital Media Law Project, 2013), it is unclear exactly where newspapers stop and audiences begin. The press may sometimes be free from compelled testimony, but there is little clarity on what exactly the press is and therefore who can claim its freedoms.</p>
<p>Finally, in 2016, Norwegian writer Tom Egeland posted to his Facebook account a story that included Nick Ut’s Pulitzer Prize–winning photo of Vietnamese children running away from a U.S. military napalm attack. One nine-year-old victim was a naked girl. Facebook removed the post because it contained “fully nude genitalia” and “fully nude female breast,” in violation of the company’s community standards. When Egeland appealed the removal, his account was suspended. The Norwegian newspaper <em>Aftenposten</em> then posted the image and a story on the censorship to its company’s Facebook site—and its post also was censored. The leader of Norway’s conservative party then posted the image and a protest against the censorship—and her post was censored. Facebook initially defended its decisions saying that although it recognized the photo’s iconic status, “it’s difficult to create a distinction between allowing a photograph of a nude child in one instance and not others.” It relented only after the Norwegian prime minister also posted the image with her own protest. Facebook <a href="https://www.theguardian.com/technology/2016/sep/09/facebook-reinstates-napalm-girl-photo">eventually stated</a>: “Because of its status as an iconic image of historical importance, the value of permitting sharing outweighs the value of protecting the community by removal, so we have decided to reinstate the image”.</p>
<p>This is a story of networked press freedom. A Facebook user posts an image that has been recognized with one of journalism’s highest awards. It triggers a review by Facebook’s vast content-moderation <a href="https://www.wired.com/2014/10/content-moderation/">operation</a> tasked with policing millions of pieces of media in near real time. The user is suspended for appealing the decision. The incident attracts the attention of a news organization, political elites, and worldwide audiences. Eventually, Facebook relents after deciding for itself that the image is iconic, historically important, and worthy of sharing. In this incident, the journalist’s right to publish and the public right to hear are not housed within any one organization or profession. They instead are distributed across an image with agreed-on historical significance, platform algorithms surfacing content, social media companies with proprietary community standards, vast populations of piecework censors implementing standards quickly, editorial protests of professional journalists and elite politicians, and an eventual reversal by a private corporation only after <em>it</em> thinks that an image should be shared. Here, press autonomy is not just the freedom of Nick Ut, Tom Egeland, or the <em>Aftenposten</em> to publish. It is the product of a <em>network</em> of humans and nonhumans that make it more or less likely that a public will encounter media and debate its meaning and significance.</p>
<p>There are many more such stories. This book is about putting them in context—to show how these seemingly idiosyncratic incidents are indicative of the larger challenge of figuring out what democratic self-governance requires, what kind of free press should help to secure it, and how such freedom is distributed across a network of humans and machines that together create publics. If nothing else, my hope is that readers will take away from this book both a skepticism about the idea of press freedom and a sense of its promise as a tool for interrogating the networked press. If someone says “We need a free press,” my hope is that this book will nudge you to ask, “What kind of freedom, what kind of press, and for what kind of public?” Inspired by Michael Schudson’s <a href="https://www.amazon.com/Bourdieu-Journalistic-Field-Rodney-Benson/dp/0745633870">question</a> “autonomy from what?,” I try to ask “autonomy <em>of</em> what and <em>for</em> what?”</p>
<hr />
<p>&nbsp;</p>
<p>My aim in this book is not to dismiss earlier theories of press freedom but to argue that they tell only part of the story. That the press is a product of multiple forces and many different kinds of power is nothing new. But if we want to understand the networked press’s potential to create new publics, we might use the idea of networked press freedom as a kind of diagnostic. If we do not like the publics the networked press creates, we should examine its infrastructure and make changes. If we do not like the networked press’s infrastructure, we need to show why it leads to unacceptable publics. If a new element of the networked press appears, we need to be able to say quickly and thoughtfully what its relationships are and how they create new publics. And if we have an idea for a new element that we think should be part of the networked press, we must be able to say why we need the new public it might help create.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Custodians</title>
		<link>https://culturedigitally.org/2018/06/custodians/</link>
					<comments>https://culturedigitally.org/2018/06/custodians/#comments</comments>
		
		<dc:creator><![CDATA[Tarleton Gillespie]]></dc:creator>
		<pubDate>Wed, 06 Jun 2018 18:59:50 +0000</pubDate>
				<category><![CDATA[announcement]]></category>
		<category><![CDATA[moderation]]></category>
		<category><![CDATA[platforms]]></category>
		<guid isPermaLink="false">https://culturedigitally.org/?p=9049</guid>

					<description><![CDATA[I&#8217;m thrilled to say that my new book, Custodians of the Internet, is now available for purchase from Yale University Press, and your favorite book retailer. Those of you who know me know that I&#8217;ve been working on this book for a long time, and have cared about the issues it addresses for a while [&#8230;]]]></description>
										<content:encoded><![CDATA[<p class="p1">I&#8217;m thrilled to say that my new book, <em>Custodians of the Internet</em>, is now available for purchase from <a href="https://yalebooks.yale.edu/book/9780300173130/custodians-internet">Yale University Press</a>, and your favorite book retailer. Those of you who know me know that I&#8217;ve been working on this book for a long time, and have cared about the issues it addresses for a while now. So I&#8217;m particularly excited that it is now no longer mine, but yours if you want it. I hope it&#8217;ll be of some value to those of you who are interested in interrogating and transforming the information landscape in which we find ourselves.</p>
<p class="p1">By way of introduction, I thought I would explain the book&#8217;s title, particularly my choice of the word &#8220;custodians.&#8221; This title came unnervingly late in the writing process, and after many, many conversations with my extremely patient friend and colleague Dylan Mulvin. &#8220;Custodians of the Internet&#8221; captured, better than many, many alternatives, the aspirations of social media platforms, the position they find themselves in, and my notion for how they should move forward.</p>
<p class="p1"><strong>moderators are the web&#8217;s &#8220;custodians,&#8221; quietly cleaning up the mess</strong>: The book begins with a quote from one of my earliest interviews, with a member of YouTube&#8217;s content policy team. As they put it, &#8220;In the ideal world, I think that our job in terms of a moderating function would be really to be able to just turn the lights on and off and sweep the floors . . . but there are always the edge cases, that are gray.&#8221; The image invoked is a custodian in the janitorial sense, doing the simple, mundane, and uncontroversial task of sweeping the floors. In this turn of phrase, content moderation was offered up as simple maintenance. It is not imagined to be difficult to know what needs scrubbing, and the process is routine. As with content moderation, there is labor involved, but it is largely invisible, just as actual janitorial staff are often instructed to &#8220;disappear,&#8221; working at night or with as little intrusion as possible. Yet even then, years before Gamergate or ISIS beheadings or white nationalists or fake news, it was clear that moderation is not so simple.</p>
<p class="p1"><strong>platforms have taken &#8220;custody&#8221; of the Internet</strong>: Content moderation at the major platforms matters because those platforms have achieved such prominence in the intervening years. As I was writing the book, one news item in 2015 stuck with me: in a survey on people&#8217;s new media use, <a href="http://qz.com/333313/milliions-of-facebook-users-have-no-idea-theyre-using-the-internet/">more people said that they used Facebook than said they used the Internet</a>. Facebook, which by then had become one of the most popular online destinations in the world and had expanded to the mobile environment, did not &#8220;seem&#8221; like the Internet anymore. Rather than being part of the Internet, it had somehow surpassed it. This was not true, of course; Facebook and the other major platforms had in fact woven themselves deeper into the Internet, by distributing cookies, offering secure login mechanisms for other sites and platforms, expanding advertising networks, collecting reams of user data from third-party sites, and even exploring Internet architecture projects. In both the perception of users and in material ways, Facebook and the major social media platforms have taken &#8220;custody&#8221; of the Internet. This should change our calculus as to whether platform moderation is or is not &#8220;censorship,&#8221; and the responsibilities platforms bear when they decide what to remove and who to exclude.</p>
<p class="p1"><strong>platforms should be better &#8220;custodians,&#8221; committed guardians of our struggles over value</strong>: In the book, I propose that these responsibilities have expanded. Users have become more acutely aware of both the harms they encounter on these platforms and the costs of being wronged by content moderation decisions. What&#8217;s more, social media platforms have become the place where a variety of speech coalitions do battle: activists, trolls, white nationalists, advertisers, abusers, even the President. And the implications of content moderation have expanded, from individual concerns to public ones. If a platform fails to moderate, everyone can be affected, even those who aren&#8217;t party to the circulation of the offensive, the fraudulent, or the hateful &#8212; even those who aren&#8217;t on social media at all.</p>
<p class="p1">What would it mean for platforms to play host not just to our content, but to our best intentions? The major platforms I discuss here have, for years, tried to position themselves as open and impartial conduits of information, defenders of their users&#8217; right to speak, and legally shielded from any obligations for how they police their sites. As most platform managers see it, moderation should be theirs to do, conducted on their own terms, on our behalf, and behind the scenes. But that arrangement is crumbling, as critics begin to examine the responsibilities social media platforms have to the public they serve.</p>
<p class="p1">In the book, I propose that platforms become &#8220;custodians&#8221; of the public discourse they facilitate &#8212; not in the janitorial sense, but something more akin to legal guardianship. The custodian, given charge over a property, a company, a person, or a valuable resource, does not take it for their own or impose their will over it; they accept responsibility for ensuring that it is governed properly. This is akin to Jack Balkin&#8217;s suggestion that platforms act as <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2675270">&#8220;information fiduciaries,&#8221;</a> with a greater obligation to protect our data. But I don&#8217;t just mean that platforms should be custodians of our content; platforms should be custodians of the deliberative process we all must engage in, that makes us a functioning public. Users need to be more accountable for making the hard decisions about what does and does not belong; platforms could facilitate that deliberation, and then faithfully enact the conclusions users reach. Safeguarding public discourse requires ensuring that it is governed by those to whom it belongs, making sure it survives, that its value is sustained in a fair and equitable way. Platforms could be not the police of our reckless chatter, but the trusted agents of our own interest in forming more democratic publics.</p>
<p class="p1">If you end up reading the book, you have my gratitude. And I&#8217;m eager to hear from anyone who has thoughts, comments, praise, criticism, and suggestions. You can find me on Twitter at <a href="https://twitter.com/TarletonG">@TarletonG</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://culturedigitally.org/2018/06/custodians/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>“We do software so that you can do education”: The curious case of MOOC platforms</title>
		<link>https://culturedigitally.org/2018/05/we-do-software-so-that-you-can-do-education-the-curious-case-of-mooc-platforms/</link>
		
		<dc:creator><![CDATA[Shreeharsh Kelkar]]></dc:creator>
		<pubDate>Mon, 14 May 2018 14:30:37 +0000</pubDate>
				<category><![CDATA[Platforms]]></category>
		<category><![CDATA[digital labor]]></category>
		<category><![CDATA[expertise]]></category>
		<category><![CDATA[MOOCs]]></category>
		<category><![CDATA[platforms]]></category>
		<guid isPermaLink="false">https://culturedigitally.org/?p=9034</guid>

					<description><![CDATA[[Note: This is a lightly edited re-post of a blog-post originally published on Work in Progress,  a public sociology blog created by the American Sociological Association to disseminate research results.  It summarizes findings from “Engineering a platform: The construction of interfaces, users, organizational roles, and the division of labor” in New Media and Society, first published online in [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><div id="attachment_9035" style="width: 650px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-9035" loading="lazy" class="wp-image-9035 size-large" src="https://culturedigitally.org/wp-content/uploads/2018/04/edx-reinvent_education_2-1024x768.jpg" alt="" width="640" height="480" srcset="https://culturedigitally.org/wp-content/uploads/2018/04/edx-reinvent_education_2-1024x768.jpg 1024w, https://culturedigitally.org/wp-content/uploads/2018/04/edx-reinvent_education_2-150x113.jpg 150w, https://culturedigitally.org/wp-content/uploads/2018/04/edx-reinvent_education_2-300x225.jpg 300w, https://culturedigitally.org/wp-content/uploads/2018/04/edx-reinvent_education_2-768x576.jpg 768w, https://culturedigitally.org/wp-content/uploads/2018/04/edx-reinvent_education_2-640x480.jpg 640w" sizes="(max-width: 640px) 100vw, 640px" /><p id="caption-attachment-9035" class="wp-caption-text">edX president Anant Agarwal—with the words “the future of education” displayed prominently behind him—takes questions from the audience. Image: <a href="https://www.flickr.com/photos/145628015@N05/28942205172/">edX Social Media via Flickr</a> (CC BY-SA 2.0).</p></div></p>
<p>[<em>Note: This is a lightly edited re-post of a <a href="http://www.wipsociology.org/2018/03/27/we-do-software-so-that-you-can-do-education-the-curious-case-of-mooc-platforms/">blog-post</a> originally published on <a href="http://www.wipsociology.org/">Work in Progress</a>,  a public sociology blog created by the American Sociological Association to disseminate research results.  It summarizes findings from “<a href="http://journals.sagepub.com/eprint/vubRqzyfjPaDJWMvHTJj/full">Engineering a platform: The construction of interfaces, users, organizational roles, and the division of labor</a>” in New Media and Society, first published online in September 2017.</em>]</p>
<p>These days, <a href="https://culturedigitally.org/2017/08/platform-metaphor/">the word “platform”</a> is commonly used to refer to entities like Facebook, Twitter, or YouTube. These portals are sites of public discourse and see their role as connecting various sorts of publics: video producers to viewers, journalists to readers, or advertisers to potential consumers, all simultaneously.  YouTube, for instance, started in 2005 as a Friendster-type social network portal that proclaimed, “Show off your favorite videos to the world”; by 2008, under pressure to monetize, it had constructed itself into a “distribution platform for original content creators and advertisers, large and small.” Recent work, both scholarly and popular, has spoken much to our discomfort that so much of public discourse now occurs on these privately-owned, for-profit, and unregulated platforms that lend themselves all too well to <a href="https://www.buzzfeed.com/charliewarzel/a-honeypot-for-assholes-inside-twitters-10-year-failure-to-s#.hgWbzlpo4">unique forms of harassment</a>, <a href="https://theoutline.com/post/1399/how-google-ate-celebritynetworth-com">invisible algorithmic manipulations</a>, and <a href="https://www.nytimes.com/interactive/2018/01/27/technology/social-media-bots.html">sinister forms of corruption</a>.</p>
<p>In a <a href="http://journals.sagepub.com/eprint/vubRqzyfjPaDJWMvHTJj/full">recently published ethnographic study</a>, I found that the platform arrangement does much more than muddy the grounds between public and private, commercial and personal, work and play. It transforms the nature of work, the framing of organizational roles, as well as the construction of substantive expertise. From 2013 to 2015, I followed a <a href="https://www.edx.org/">non-profit start-up called edX</a> through its <a href="https://www.youtube.com/watch?v=GGu74X3HGzY">stated mission of reinventing education</a> by making Massive Open Online Courses (MOOCs). MOOCs <a href="http://www.nytimes.com/2012/11/04/education/edlife/massive-open-online-courses-are-multiplying-at-a-rapid-pace.html">caused a sensation in 2012</a> as three new start-ups (Udacity, Coursera and edX) leveraged the power of networked computing and collaborated with universities to offer prospective students anywhere in the world an interactive distance learning experience.</p>
<p>edX, like many start-ups, aspired to become a “platform.” However, it did not start as one, and more importantly, becoming a platform was not an inevitable development but a product of structured choices. In its original conception, edX sought to collaborate closely with instructors at partner universities to produce tailor-made educational experiences for online learners. But the need for a sustainable revenue framework made the platform model—in which designers engineer potential software features in the form of multipurpose “tools,” which the instructors then use to produce the educational content autonomously and for their own goals and purposes—more appealing. The edX architects engineered this not just by <a href="http://journals.sagepub.com/doi/full/10.1177/2056305115603080">building technical interfaces</a> but also through a careful formatting of organizational roles: the instructor thus became a creative user of edX’s software (rather than a collaborator), while those at edX themselves became more like managers, building tools for learners, instructors, and researchers, and watching over these “users” to maximize innovation and revenue within the “ecosystem.”</p>
<p>This centralization of software production (and the corresponding decentralization of the production of educational material and knowledge) has one concrete consequence. I argue that the social organization of MOOC infrastructures reconfigures the meaning of educational expertise. Far from automating experts away, MOOC infrastructures construct educational experts (instructors or researchers) as &#8220;<a href="https://culturedigitally.org/2014/09/how-to-give-up-the-i-word-pt-1/">innovative</a>&#8221; information workers: they work with software programs, perform A/B tests, and create actionable knowledge claims. Let me describe this shift and its implications in more detail.</p>
<p><strong>Becoming a platform: from pedagogy to software</strong></p>
<p>edX&#8217;s first employees saw it squarely as an educational institution. There were two main divisions in the edX organization: the engineers and the edX Fellows. The engineers worked on building the edX software while the Fellows interfaced between the engineering team and the instructors at partnering institutions.</p>
<p>In edX’s early job advertisements, the edX Fellow would have to “manage partnerships” with instructors at participating universities, have a disciplinary PhD, and an interest in “innovative pedagogies.” The edX management envisioned a whole host of Fellows who spanned the entire disciplinary spectrum (“social sciences, humanities, natural sciences, engineering, or education”).</p>
<p>An edX Fellow was thus a subject-matter expert (e.g., in physics), a software expert, and a pedagogy expert. Fellows saw their overlapping job requirements as central to their professional identity. Robin, an edX Fellow, described his job to me as that of a “learning engineer”: a term he used to indicate his position on the boundary between pedagogy and software, and between academia and corporate life.</p>
<p>Ultimately, however, the edX architects chose to become a “platform.” edX would produce standardized software features that instructors at partnering institutions could use autonomously, without any assistance from edX Fellows. edX could therefore expand the number of its partners so as to produce a large number of courses (from 3 partners and 7 courses in 2012 to 90+ partners and 600+ courses today), thereby increasing the possibility that some learners might pay a small fee to get a “<a href="https://www.edx.org/verified-certificate">verified certificate.</a>” This, in turn, would allow edX, a non-profit, to become revenue-sustainable. (The other approach would have been to produce only a small number of highly innovative courses with select partners and license them out for revenue. For <a href="https://www.chronicle.com/article/The-Document-an-Open-Letter/138937">various reasons</a>, this was deemed unsuitable.)</p>
<p><div id="attachment_9042" style="width: 879px" class="wp-caption aligncenter"><img aria-describedby="caption-attachment-9042" loading="lazy" class="wp-image-9042 size-full" src="https://culturedigitally.org/wp-content/uploads/2018/04/MultipleChoice_SimpleEditor.png" alt="" width="869" height="450" srcset="https://culturedigitally.org/wp-content/uploads/2018/04/MultipleChoice_SimpleEditor.png 869w, https://culturedigitally.org/wp-content/uploads/2018/04/MultipleChoice_SimpleEditor-150x78.png 150w, https://culturedigitally.org/wp-content/uploads/2018/04/MultipleChoice_SimpleEditor-300x155.png 300w, https://culturedigitally.org/wp-content/uploads/2018/04/MultipleChoice_SimpleEditor-768x398.png 768w, https://culturedigitally.org/wp-content/uploads/2018/04/MultipleChoice_SimpleEditor-640x331.png 640w" sizes="(max-width: 869px) 100vw, 869px" /><p id="caption-attachment-9042" class="wp-caption-text">A screenshot of Studio, the content management system of the edX software. As you can tell, Studio resembles the authoring platforms of the web 2.0 in that it separates authoring from presentation and allows those with no coding experience to create web experiences.  Image: <a href="http://edx.readthedocs.io/projects/open-edx-building-and-running-a-course/en/latest/course_components/create_problem.html">edX documentation</a> (<span class="cc-license-identifier">CC BY-SA 4.0)</span>.</p></div></p>
<p>The edX architects accomplished the transformation into a platform in two ways: by building technical interfaces and by formatting organizational roles. edX built a Graphical User Interface (GUI) called “Studio” that instructors would use to build courses with as little handholding as possible from the edX organization. Instead, partnering institutions were asked to support their instructors in whatever ways they saw fit. Studio resembles the authoring interfaces of blogging platforms like WordPress, Blogger, and Drupal; it allows instructors to concentrate on the educational content and not worry about its presentation.</p>
<p>Studio thus <a href="http://journals.sagepub.com/doi/abs/10.1177/0163443708098245?journalCode=mcsa">configured instructors as creative users</a>: both users who generate content (such as video-makers on YouTube) and innovative users who complement the software itself by building new things on top of it (such as iPhone app developers). In this, the framing of the edX software as a mere yet empowering “tool” played a key role. The category of “tool” (hand-in-hand with “user”) looms large within the history of software: as Fred Turner has shown, it was through imagining computers and software as mere tools that computer pioneers transformed computers from symbols of the capitalist-bureaucratic complex into <a href="http://www.press.uchicago.edu/ucp/books/book/chicago/F/bo3773600.html">agents of self and social transformation</a>.</p>
<p>The architects at edX also re-imagined their own roles as providing “tools” to empower their users (learners, instructors, researchers). As one edX employee put it, “<a href="https://opensource.com/education/14/9/interview-ned-batchelder-openedx">we do software so that you can do education</a>.”</p>
<p>The role of the edX Fellow, originally conceived as pedagogical, was re-conceptualized as managerial, with titles like “program manager” (PM) or “relationship manager.” These managers provided technical and best practices-type (but not pedagogical) support to instructional teams at edX’s partner institutions through documentation, seminars, conferences, phone meetings, virtual meet-ups, and so forth. Job advertisements for these positions no longer mentioned an interest in pedagogy. As the job definition changed, many of the original edX Fellows left, sometimes for partnering institutions where they could explore their original interest in pedagogy. Newer PMs often came to edX from careers in business consulting.</p>
<p>As the use of Studio became more and more established, an informal inequality started to develop among the now-formally equivalent instructors based on the degrees of support they were offered by their home institutions. At resource-rich sites (such as Harvard or MIT), instructors started to see themselves not just as teachers but also as innovators (<a href="https://www.vox.com/2015/9/29/9411117/silicon-valley-politics-charts">of a certain Silicon Valley vein</a>) who stretched the limits of the software. For instance, MIT instructional teams often thought of themselves as “learning engineers” (which was how edX Fellow Robin had described himself).</p>
<p><strong>Implications for expertise</strong></p>
<p>Why does this matter? Various scholars have argued that platform companies&#8217; power, disproportionate compared to that of their users, means that they get an outsized say in defining politically salient categories, e.g., what counts as “social” (liking something on Facebook) or “innovative” (<a href="http://journals.sagepub.com/doi/abs/10.1177/1461444813511926">designing software for copy-editing that draws on Mechanical Turk</a>).</p>
<p>edX&#8217;s case illustrates one mechanism through which this happens: the construction of organizational roles. Consider the separation of software and pedagogy within the edX ecosystem. As edX expanded its slate of partners, its first clients and patrons, MIT and Harvard, saw a decline in their own ability to set the agenda and control the direction of the software. These “users” argue that the software has an implicit theory of pedagogy embedded in it, and that, as experts on pedagogy, <em>they</em> should have more of a say in shaping the software. While acknowledging this, edX’s architects counter that <em>they</em>—and not the Harvard-MIT folks—should have the final say on prioritizing which features to build, not only because they understand the software best, but also because they see themselves as best placed to understand which features might benefit the whole ecosystem rather than just particular players.</p>
<p>The standard template in the education technology industry is that the technology experts are only supposed to “implement” what the pedagogy experts ask. What is arguably new about the edX platform framework is that the software is prior to, and thereby more constitutive of, the pedagogy.</p>
<p>Consider finally the category of the “learning engineer.” An <a href="https://oepi.mit.edu/files/2016/09/MIT-Online-Education-Policy-Initiative-April-2016.pdf">official MIT report</a> defined “learning engineers” as experts “familiar with state-of-the-art educational technologies, from commercial software to open-source tools, and skilled in the effective use of new online tools.” This is, of course, the teacher incarnated in the image of a Silicon Valley information worker. One can see this from the reaction of some educational experts. Joshua Kim, head of Dartmouth&#8217;s Center for Teaching and Learning and a writer for Inside Higher Ed, <a href="https://www.insidehighered.com/blogs/technology-and-learning/should-instructional-designers-be-called-%E2%80%98learning-engineers%E2%80%99">wrote in response</a>: &#8220;Have you ever heard the job title learning engineer? <em>I haven’t. </em>At my school, these folks go by the title of instructional designer. At some places they are called learning designers.&#8221; [Italics in the original.] If my argument is correct, this is not just a relabeling; it is an attempt to transform current organizational roles.</p>
<p>MOOC pioneers have been criticized because it is sometimes assumed—<a href="https://theconversation.com/universities-must-prepare-for-a-technology-enabled-future-89354">with reason</a>—that their goal is to replace teachers with artificial intelligence. My fieldwork suggests that the focus is not so much on replacing teachers with machines as it is on turning pedagogical experts (e.g. instructors) into something resembling an “innovative” software worker of Silicon Valley. The platform arrangement, in other words, is a conduit through which Silicon Valley norms travel out into other worlds and through which software pioneers are attempting to <a href="https://www.wsj.com/articles/SB10001424053111903480904576512250915629460">remake the world</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>