<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Ready-to-hand</title>
	<atom:link href="http://www.deaneckles.com/blog/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.deaneckles.com/blog</link>
	<description>Dean Eckles on people, technology &#38; inference</description>
	<lastBuildDate>Fri, 28 Jul 2023 13:10:17 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	
	<item>
		<title>New research on social media during the 2020 election, and my predictions</title>
		<link>http://www.deaneckles.com/blog/844_new-research-on-social-media-during-the-2020-election-and-my-predictions/</link>
					<comments>http://www.deaneckles.com/blog/844_new-research-on-social-media-during-the-2020-election-and-my-predictions/#respond</comments>
		
		<dc:creator><![CDATA[Dean Eckles]]></dc:creator>
		<pubDate>Fri, 28 Jul 2023 13:10:17 +0000</pubDate>
				<category><![CDATA[average treatment effects]]></category>
		<category><![CDATA[causal inference]]></category>
		<category><![CDATA[experiments]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[influence]]></category>
		<category><![CDATA[news feed]]></category>
		<category><![CDATA[persuasive technology]]></category>
		<category><![CDATA[research methods]]></category>
		<category><![CDATA[social media]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://www.deaneckles.com/blog/?p=844</guid>

					<description><![CDATA[This is crossposted from Statistical Modeling, Causal Inference, and Social Science. Back in 2020, leading academics and researchers at the company now known as Meta put together a large project to study social media and the 2020 US elections — particularly the roles of Instagram and Facebook. As Sinan Aral and I had written about [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>This is crossposted from <a href="https://statmodeling.stat.columbia.edu/2023/07/27/new-research-on-social-media-during-the-2020-election-and-my-predictions/">Statistical Modeling, Causal Inference, and Social Science</a>.</em></p>



<p>Back in 2020, leading academics and researchers at the company now known as Meta <a href="https://medium.com/@2020_election_research_project/a-proposal-for-understanding-social-medias-impact-on-elections-4ca5b7aae10">put together a large project</a> to study social media and the 2020 US elections — particularly the roles of Instagram and Facebook. As <a href="https://doi.org/10.1126/science.aaw8243">Sinan Aral and I had written</a>, many paths to understanding the effects of social media on elections could require new interventions and/or platform cooperation, so this seemed like an important development. Originally the idea was for this work to be published in 2021, but there have been some delays, in part because some of the data collection was extended as what one might call &#8220;election-related events&#8221; continued beyond November and into 2021. As of 2pm Eastern today, the news embargo has been lifted for the first group of research papers.</p>



<p>I had heard about this project long ago and, frankly, had largely forgotten about it. But this past Saturday, I was participating in the <a href="https://www.ssrc.org/programs/digital-platforms-initiative/2023-ssrc-workshop-on-the-economics-of-social-media/">SSRC Workshop on the Economics of Social Media</a>, where one session was dedicated to results-free presentations about this project, including the setup of the institutions involved and the design of the research. The organizers informally <a href="https://www.mentimeter.com/app/presentation/al8xgo2seez8j9yfg6d8fcrz3btacxzm/iaavpihekm7q">polled us with qualitative questions about some of the results</a>. This intrigued me. I had recently reviewed an unrelated paper that included survey data on experts&#8217; and laypeople&#8217;s expectations about the effects estimated in a field experiment, and I thought those data were helpful for contextualizing what &#8220;we&#8221; learned from that study.</p>



<p>So I thought it might be useful, at least for myself, to spend some time eliciting my own expectations about the quantities I understood would be reported in these papers. I&#8217;ve mainly kept up with the academic and grey literature, I&#8217;d previously worked in the industry, and I&#8217;d reviewed some of this for <a href="https://www.commerce.senate.gov/services/files/62102355-DC26-4909-BF90-8FB068145F18">my Senate testimony back in 2021</a>. Along the way, I tried to articulate where my expectations and remaining uncertainty were coming from. I composed many of my thoughts on my phone Monday while taking the subway to and from the storage unit I was revisiting and then emptying in Brooklyn. I got a few comments from <a href="https://solomonmg.github.io/">Solomon Messing</a> and <a href="https://tecunningham.github.io/about.html">Tom Cunningham</a>, and then uploaded <a href="https://osf.io/4w75d">my notes</a> to OSF and posted <a href="https://twitter.com/deaneckles/status/1684038624424206337">a cheeky tweet</a>.</p>



<p>Since then, starting yesterday, I&#8217;ve spoken with journalists and gotten to view the main text of papers for two of the randomized interventions for which I made predictions. These evaluated effects of (a) <a href="https://doi.org/10.1126/science.abp9364">switching Facebook and Instagram users to a (reverse) chronological feed</a>, (b) <a href="https://www.science.org/doi/full/10.1126/science.add8424">removing &#8220;reshares&#8221; from Facebook users&#8217; feeds</a>, and (c) <a href="https://doi.org/10.1038/s41586-023-06297-w">downranking content by &#8220;like-minded&#8221; users, Pages, and Groups</a>.</p>



<h1 class="wp-block-heading">My guesses</h1>



<p>My main expectations for those three interventions could be summed up as follows. These interventions, especially chronological ranking, would each reduce engagement with Facebook or Instagram. This makes sense if you think the status quo is somewhat-well optimized for showing engaging and relevant content. So some of the rest of the effects — on, e.g., polarization, news knowledge, and voter turnout — could be partially inferred from that decrease in use. This would point to reductions in news knowledge, issue polarization (or coherence/consistency), and small decreases in turnout, especially for chronological ranking. This is because people get some hard news and political commentary they wouldn&#8217;t have otherwise from social media. These reduced-engagement-driven effects should be weakest for the &#8220;soft&#8221; intervention of downranking some sources, since content predicted to be particularly relevant will still make it into users&#8217; feeds.</p>



<p>Besides just reducing Facebook use (and everything that goes with that), I also expected swapping out feed ranking for reverse chron would expose users to more content from non-friends via, e.g., Groups, including large increases in untrustworthy content that would normally rank poorly. I expected some of the same would happen from removing reshares, which I guessed make up over 20% of views under the status quo and so would be filled in by more Groups content. For downranking sources with the same estimated ideology, I expected this would reduce exposure to political content, since many of the non-same-ideology posts will be by sources with estimated ideology in the middle of the range, i.e. [0.4, 0.6], which are less likely to be posting politics and hard news. I&#8217;ll also note that much of my uncertainty about how chronological ranking would perform was because there were a lot of unknown but important &#8220;details&#8221; about implementation, such as exactly how much of the ranking system really gets turned off (e.g., how much likely spam/scam content still gets filtered out in an early stage?).</p>



<h1 class="wp-block-heading">How&#8217;d I do?</h1>



<p>Here&#8217;s a quick summary of my guesses and the results in these three papers:</p>



<figure class="wp-block-image"><a href="https://statmodeling.stat.columbia.edu/wp-content/uploads/2023/07/dean_predictions_table.png"><img decoding="async" src="https://statmodeling.stat.columbia.edu/wp-content/uploads/2023/07/dean_predictions_table.png" alt="Table of predictions about effects of feed interventions and the results" class="wp-image-49342"/></a></figure>



<p>It looks like I was wrong in that the <em>reductions in engagement were larger than I predicted</em>: e.g., chronological ranking reduced time spent on Facebook by 21%, rather than the 8% I guessed, which was based on my background knowledge, <a href="https://www.bigtechnology.com/p/facebook-removed-the-news-feed-algorithm">a leaked report on a Facebook experiment</a>, and <a href="https://www.pnas.org/doi/10.1073/pnas.2025334119">this published experiment from Twitter</a>.</p>



<p>Ex post, I hypothesize that this is because the duration of these experiments allowed for continual declines in use over months, with various feedback loops (e.g., users with chronological feed log in less, so they post less, so <a href="https://www.pnas.org/doi/10.1073/pnas.1511201113">they get fewer likes and comments, so they log in even less and post even less</a>). As I dig into the hundreds of pages of supplementary materials, I&#8217;ll be looking to understand what these declines looked like at earlier points in the experiment, such as by election day.</p>



<p>My estimates for the survey-based outcomes of primary interest, such as polarization, were mainly covered by the 95% confidence intervals, with the exception of two outcomes from the &#8220;no reshares&#8221; intervention.</p>



<p>One complication is that all these papers report weighted estimates for a broader population of US users (population average treatment effects, PATEs), which are less precise than the unweighted (sample average treatment effect, SATE) results. Here I focus mainly on the unweighted results, as I did not know there was going to be any weighting, and these are also the narrower, and thus riskier, CIs for me. (There seems to have been some mismatch between the outcomes listed in the talk I saw and what&#8217;s in the papers, so I didn&#8217;t make predictions for some reported primary outcomes, and some outcomes I made predictions for don&#8217;t seem to be reported, or I haven&#8217;t found them in the supplements yet.)</p>
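<p><em>To make that distinction concrete, here is a toy sketch (mine, not from the papers) of the two estimators: an unweighted difference in means for the SATE and a weighted one for the PATE. All numbers, including the weights, are simulated for illustration.</em></p>

```python
import random
from statistics import fmean

def diff_in_means(data, weighted=False):
    """data: list of (y, z, w) tuples, where z = 1 marks treatment.
    Unweighted -> SATE-style estimate; weighted -> PATE-style estimate."""
    def group_mean(group):
        if weighted:
            return sum(y * w for y, _, w in group) / sum(w for _, _, w in group)
        return fmean(y for y, _, _ in group)
    treat = [row for row in data if row[1] == 1]
    control = [row for row in data if row[1] == 0]
    return group_mean(treat) - group_mean(control)

# Simulated experiment: true effect of 0.02 SDs, arbitrary survey weights
random.seed(0)
data = [(0.02 * z + random.gauss(0, 1), z, random.expovariate(1.0))
        for z in (random.randint(0, 1) for _ in range(10_000))]

sate = diff_in_means(data)                 # unweighted
pate = diff_in_means(data, weighted=True)  # weighted to a broader population
```

<p><em>The two estimates target different populations, and the weighted one generally comes with a wider confidence interval, which is why the unweighted CIs were the riskier ones for my predictions.</em></p>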



<p>Now is a good time to note that I basically predicted what psychologists armed with Jacob Cohen&#8217;s rules of thumb might, extrapolating, call &#8220;minuscule&#8221; effect sizes. All my predictions for survey-based outcomes were 0.02 standard deviations or smaller. (Recall Cohen&#8217;s rules of thumb say 0.2 is small, 0.5 medium, and 0.8 large.)</p>



<p>Nearly all the results for these outcomes in these two papers were indistinguishable from the null (p &gt; 0.05), with standard errors for survey outcomes at 0.01 SDs or more. This is consistent with my ex ante expectations that the experiments would face severe power problems, at least for the kind of effects I would expect. Perhaps by revealed preference, a number of other experts had different priors.</p>
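<p><em>As a rough check on that power claim, the standard minimal detectable effect (MDE) formula implies that with standard errors around 0.01 SDs, only effects of roughly 0.028 SDs or more could be reliably detected at 80% power, larger than any of my predicted effects. A small stdlib-only sketch:</em></p>

```python
from statistics import NormalDist

def mde(se, alpha=0.05, power=0.80):
    """Minimal detectable effect for a two-sided test at the given
    significance level and power, from the usual normal approximation."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * se

print(round(mde(0.01), 3))  # -> 0.028, about 2.8x the standard error
```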



<p>A rare p &lt; 0.05 result is that chronological ranking reduced news knowledge by 0.035 SDs, with 95% CI [-0.061, -0.008], which includes my guess of -0.02 SDs. Removing reshares may have reduced news knowledge even more than chronological ranking — and by more than I guessed.</p>



<p>Even with so many null results, I was still sticking my neck out a bit compared with just guessing zero everywhere, since in some cases if I had put the opposite sign my estimate wouldn&#8217;t have been in the 95% CI. For example, downranking &#8220;like-minded&#8221; sources produced a CI of [-0.031, 0.013] SDs, which includes my guess of -0.02 but not its negation. On the other hand, I got some of these wrong: I guessed removing reshares would reduce affective polarization by 0.02 SDs, but that guess falls outside the resulting [-0.005, +0.030] interval.</p>



<p>It was actually quite a bit of work to compare my predictions to the results, because I didn&#8217;t know many key details about the exact analyses and reporting choices, which strikingly even differ a bit across these three papers. So, with a lot of reading and a bit of arithmetic, I might yet find more places where I was wrong. (Feel free to point these out.)</p>



<h1 class="wp-block-heading">Further reflections</h1>



<p>I hope that this helps to contextualize the present results with expert consensus — or at least my idiosyncratic expectations. I&#8217;ll likely write a bit more about these new papers and further work released as part of this project.</p>



<p>It was probably an oversight for me not to make any predictions about <a href="https://doi.org/10.1126/science.ade7138">the observational paper looking at polarization in exposure and consumption of news media</a>. I felt like I had a better handle on thinking about simple treatment effects than these measures, but perhaps that was all the more reason to make predictions. Furthermore, given the limited precision of the experiments&#8217; estimates, perhaps it would have been more informative (and riskier) to make point predictions about these precisely estimated observational quantities.</p>



<p><em>[I want to note that I was an employee or contractor of Facebook (now Meta) from 2010 through 2017. I have received funding for other research from Meta, Meta has sponsored a conference I organize, and I have coauthored with Meta employees as recently <a href="https://doi.org/10.1073/pnas.2211062120">as earlier this month</a>. I was also recently a consultant to Twitter, ending shortly after the Musk acquisition. You can find <a href="https://www.deaneckles.com/disclosures/">all my disclosures here</a>.]</em></p>
]]></content:encoded>
					
					<wfw:commentRss>http://www.deaneckles.com/blog/844_new-research-on-social-media-during-the-2020-election-and-my-predictions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Does the “Table 1 fallacy” apply if it is Table S1 instead?</title>
		<link>http://www.deaneckles.com/blog/835_does-the-table-1-fallacy-apply-if-it-is-table-s1-instead/</link>
					<comments>http://www.deaneckles.com/blog/835_does-the-table-1-fallacy-apply-if-it-is-table-s1-instead/#respond</comments>
		
		<dc:creator><![CDATA[Dean Eckles]]></dc:creator>
		<pubDate>Mon, 23 Aug 2021 00:47:50 +0000</pubDate>
				<category><![CDATA[econometrics]]></category>
		<category><![CDATA[experiments]]></category>
		<category><![CDATA[research methods]]></category>
		<category><![CDATA[statistics]]></category>
		<guid isPermaLink="false">http://www.deaneckles.com/blog/?p=835</guid>

					<description><![CDATA[This post is cross-posted from Andrew Gelman&#8217;s Statistical Modeling, Causal Inference, and Social Science. There&#8217;s more discussion over there. In a randomized experiment (i.e. RCT, A/B test, etc.) units are randomly assigned to treatments (i.e. conditions, variants, etc.). Let&#8217;s focus on Bernoulli randomized experiments for now, where each unit is independently assigned to treatment with [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p><em>This post is <a href="https://statmodeling.stat.columbia.edu/2021/08/22/does-the-table-1-fallacy-apply-if-it-is-table-s1-instead/">cross-posted</a> from Andrew Gelman&#8217;s Statistical Modeling, Causal Inference, and Social Science. There&#8217;s more discussion over there.</em></p>



<p>In a randomized experiment (i.e. RCT, A/B test, etc.) units are randomly assigned to treatments (i.e. conditions, variants, etc.). Let&#8217;s focus on Bernoulli randomized experiments for now, where each unit is independently assigned to treatment with probability <em>q</em> and to control otherwise.</p>
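<p><em>As a minimal sketch (the function name and interface are mine, not any particular system&#8217;s), Bernoulli randomization is just an independent coin flip per unit:</em></p>

```python
import random

def bernoulli_assign(units, q, seed=42):
    """Independently assign each unit to treatment (1) with probability q,
    and to control (0) otherwise; a fixed seed makes the assignment
    reproducible and auditable."""
    rng = random.Random(seed)
    return {u: int(rng.random() < q) for u in units}

assignment = bernoulli_assign(range(1000), q=0.5)
```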



<p>Thomas Aquinas argued that God&#8217;s knowledge of the world upon creation of it is a kind of practical knowledge: knowing something is the case because you made it so. One might think that in randomized experiments we have a kind of practical knowledge: we know that treatment was randomized because we randomized it. But unlike Aquinas&#8217;s God, we are not infallible, we often delegate, and often we are in the position of consuming reports on other people&#8217;s experiments.</p>



<p>So it is common to perform and report some tests of the null hypothesis that this process did indeed generate the data. For example, one can test that the sample sizes in treatment and control aren&#8217;t inconsistent with this. This is common at least in the Internet industry (see, e.g., <a href="https://www.amazon.com/Trustworthy-Online-Controlled-Experiments-Practical/dp/1108724264">Kohavi, Tang &amp; Xu</a> on &#8220;sample ratio mismatch&#8221;), where it is often particularly easy to automate. Perhaps more widespread is testing whether the means of pre-treatment covariates in treatment and control are distinguishable; these are often called balance tests. One can do per-covariate tests, but if there are a lot of covariates then this can generate confusing false positives. So one might instead use <a href="https://alexandercoppock.com/Green-Lab-SOP/Green_Lab_SOP.html#covariate-imbalance-and-the-detection-of-administrative-errors">some test</a> of all the covariates jointly at once.</p>
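<p><em>A sample ratio mismatch check of the kind described can be sketched as a two-sided test of the observed treatment count against the planned probability q (a simplified normal-approximation version, not any particular system&#8217;s implementation):</em></p>

```python
from math import sqrt
from statistics import NormalDist

def srm_test(n_treat, n_control, q=0.5):
    """Two-sided test that the treatment count is consistent with each of
    n = n_treat + n_control units being treated with probability q
    (normal approximation to the binomial)."""
    n = n_treat + n_control
    z = (n_treat - n * q) / sqrt(n * q * (1 - q))
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# A 50,000 vs 49,000 split under q = 0.5 already looks suspicious:
z, p = srm_test(50_000, 49_000)  # z is about 3.18
```

<p><em>Because the prior that randomization worked as planned is usually high, automated systems tend to flag only very small p-values rather than anything past 0.05.</em></p>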



<p>Some experimentation systems in industry automate various of these tests and, if they reject at, say, <em>p</em> &lt; 0.001, show prominent errors or even watermark results so that they are difficult to share with others without being warned. If we&#8217;re good Bayesians, we probably shouldn&#8217;t give up on our prior belief that treatment was indeed randomized just because some p-value is less than 0.05. But if we&#8217;ve got <em>p</em> &lt; 1e-6, then — for all but the most dogmatic prior beliefs that randomization occurred as planned — we&#8217;re going to be doubtful that everything is alright and move to investigate.</p>



<p>In my own digital field and survey experiments, we indeed run these tests. Some of my papers report the results, but I know there&#8217;s at least one that doesn&#8217;t (though we did the tests) and another where we just state they were all not significant (and this can be verified with the replication materials). My sense is that reporting balance tests of covariate means is becoming even more of a norm in some areas, such as applied microeconomics and related areas. And I think that&#8217;s a good thing.</p>



<p>Interestingly, it seems that not everyone feels this way.</p>



<p>In particular, methodologists working in epidemiology, medicine, and public health sometimes refer to a &#8220;Table 1 fallacy&#8221; and advocate against performing and/or reporting these statistical tests. Sometimes the argument is specifically about clinical trials, but often it is more generally randomized experiments.</p>



<p>Stephen Senn argues in <a href="https://doi.org/10.1002/sim.4780131703">this influential 1994 paper:</a></p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p><em>Indeed the practice [of statistical testing for baseline balance] can accord neither with the logic of significance tests nor with that of hypothesis tests for the following are two incontrovertible facts about a randomized clinical trial:</em></p><p><em>1. over all randomizations the groups are balanced;</em></p><p><em>2. for a particular randomization they are unbalanced.</em></p><p><em>Now, no ‘significant imbalance’ can cause 1 to be untrue and no lack of a significant balance can make 2 untrue. Therefore the only reason to employ such a test must be to examine the process of randomization itself. Thus a significant result should lead to the decision that the treatment groups have not been randomized, and hence either that the trialist has practised deception and has dishonestly manipulated the allocation or that some incompetence, such as not accounting for all patients, has occurred.</em></p><p><em>In my opinion this is not the usual reason why such tests are carried out (I believe the reason is to make a statement about the observed allocation itself) and I suspect that the practice has originated through confused and false analogies with significance and hypothesis tests in general.</em></p></blockquote>



<p>This highlights precisely where my view diverges: indeed the reason I think such tests should be performed is because I think that they could lead to the conclusion that &#8220;the treatment groups have not been randomized&#8221;. I wouldn&#8217;t say this <em>always</em> rises to the level of &#8220;incompetence&#8221; or &#8220;deception&#8221;, at least in the applications I&#8217;m familiar with. (Maybe I&#8217;ll write about some of these reasons at another time — some involve interference, some are analogous to differential attrition.)</p>



<p>It seems that experimenters and methodologists in social science and the Internet industry think that broken randomization is more likely, while methodologists mainly working on clinical trials put a very, very small prior probability on such events. Maybe this largely reflects the real probabilities in these areas, for various reasons. If so, part of the disagreement simply comes from cross-disciplinary diffusion of advice and overgeneralization. However, even some of the same researchers are sometimes involved in randomized experiments that aren&#8217;t subject to all the same processes as clinical trials.</p>



<p>Even if there is a small prior probability of broken randomization, if it is very easy to test for it, we still should. One nice feature of balance tests compared with other ways of auditing a randomization and data collection process is that they are pretty easy to take in as a reader.</p>



<p>But maybe there are other costs of conducting and reporting balance tests?</p>



<p>Indeed this gets at other reasons some methodologists oppose balance testing. For example, they argue that it fits into an, often vague, process of choosing estimators in a data-dependent way: researchers run the balance tests and make decisions about how to estimate treatment effects as a result.</p>



<p>This is articulated in <a href="https://doi.org/10.1080/00031305.2017.1322143">a paper in&nbsp;<em>The American Statistician</em> by Mutz, Pemantle &amp; Pham</a>, which includes highlighting how discretion here creates a garden of forking paths. In my interpretation, what the most considered and formalized arguments are saying is that conducting balance tests and then using that to determine which covariates to include in the subsequent analysis of treatment effects in randomized experiments has bad properties and shouldn&#8217;t be done. Here the idea is that when these tests provide some evidence against the null of randomization for some covariate, researchers sometimes then adjust for that covariate (when they wouldn&#8217;t have otherwise); and when everything looks balanced, researchers use this as a justification for using simple unadjusted estimators of treatment effects. I agree with this, and typically one should already specify adjusting for relevant pre-treatment covariates in the pre-analysis plan. Including them will <a href="https://projecteuclid.org/journals/annals-of-applied-statistics/volume-7/issue-1/Agnostic-notes-on-regression-adjustments-to-experimental-data--Reexamining/10.1214/12-AOAS583.full">increase precision</a>.</p>
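<p><em>A toy illustration of that precision claim (simple residualization on a single covariate, not the exact estimator in the linked paper): because a pre-treatment covariate is independent of treatment, adjusting for it removes outcome variance without distorting the treatment&#8211;control comparison, so the standard error shrinks.</em></p>

```python
import random
from statistics import fmean, pstdev

random.seed(1)
n = 5_000
z = [random.randint(0, 1) for _ in range(n)]   # randomized treatment
x = [random.gauss(0, 1) for _ in range(n)]     # pre-treatment covariate
y = [0.1 * zi + 1.0 * xi + random.gauss(0, 1) for zi, xi in zip(z, x)]

def se_diff(outcome):
    """Standard error of the treatment-control difference in means."""
    t = [o for o, zi in zip(outcome, z) if zi]
    c = [o for o, zi in zip(outcome, z) if not zi]
    return (pstdev(t) ** 2 / len(t) + pstdev(c) ** 2 / len(c)) ** 0.5

# Residualize y on x with a simple OLS slope, then compare groups
beta = (fmean(xi * yi for xi, yi in zip(x, y))
        - fmean(x) * fmean(y)) / pstdev(x) ** 2
resid = [yi - beta * xi for xi, yi in zip(x, y)]

se_plain, se_adjusted = se_diff(y), se_diff(resid)  # se_adjusted is smaller
```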



<p>I&#8217;ve also heard the idea that these balance tests in Table 1 confuse readers, who see a single <em>p</em> &lt; 0.05 — often uncorrected for multiple tests — and get worried that the trial isn&#8217;t valid. More generally, we might think that <a href="https://twitter.com/statsepi/status/1429117521739780100">Table 1 of a paper in a widely read medical journal isn&#8217;t the right place for such information</a>. This seems right to me. There are important ingredients to good research that don&#8217;t need to be presented prominently in a paper, though it is important to provide information about them somewhere readily inspectable in the package for both pre- and post-publication peer review.</p>



<p>In light of all this, here is a proposal:</p>



<ol class="wp-block-list"><li>Papers on randomized experiments should <strong>report tests of the null hypothesis that treatment was randomized as specified.</strong> These will often include balance tests, but of course there are others.</li><li>These tests should follow the maxim &#8220;<strong>analyze as you randomize</strong>&#8221;, both accounting for any clustering or blocking/stratification in the randomization and any particularly important subsetting of the data (e.g., removing units without outcome data).</li><li>Given a typically high prior belief that randomization occurred as planned, authors, reviewers, and readers should <strong>certainly not use <em>p</em> &lt; 0.05 as a decision criterion here</strong>.</li><li>If there is evidence against randomization, <strong>authors should investigate</strong>, and may often be able to fully or partially <strong>fix the problem</strong> long prior to peer review (e.g., by including improperly discarded data) or in the paper (e.g., by identifying that the problem affected only some units&#8217; assignments and bounding the possible bias).</li><li>While it makes sense to mention them in the main text, there is typically little reason — if they don&#8217;t reject with a tiny p-value — for them to appear in Table 1 or some other prominent position in the main text, particularly of a short article. Rather, they should typically <strong>appear in a supplement or appendix</strong> — perhaps as Table S1 or Table A1.</li></ol>
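<p><em>For a cluster-randomized design, point 2 can be sketched as a permutation test that re-randomizes treatment at the cluster level, so the reference distribution matches the actual design. The data layout below is hypothetical, just one way to organize such a check.</em></p>

```python
import random
from statistics import mean

def cluster_balance_test(clusters, n_perm=2000, seed=0):
    """Permutation p-value for covariate balance that 'analyzes as you
    randomize': treatment is re-randomized over whole clusters.
    clusters maps id -> (treated: bool, list of unit covariate values)."""
    rng = random.Random(seed)
    ids = list(clusters)
    k = sum(1 for c in ids if clusters[c][0])  # number of treated clusters

    def abs_diff(treat_set):
        t = [x for c in treat_set for x in clusters[c][1]]
        ctrl = [x for c in ids if c not in treat_set for x in clusters[c][1]]
        return abs(mean(t) - mean(ctrl))

    observed = abs_diff({c for c in ids if clusters[c][0]})
    hits = sum(abs_diff(set(rng.sample(ids, k))) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)
```

<p><em>Testing unit-level balance while ignoring the clustering would overstate the evidence of imbalance; the permutation approach bakes the design into the test.</em></p>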



<p>This recognizes both the value of checking implications of one of the most important assumptions in randomized experiments and that most of the time this test shouldn&#8217;t cause us to update our beliefs about randomization much. I wonder if any of this remains controversial and why.</p>
]]></content:encoded>
					
					<wfw:commentRss>http://www.deaneckles.com/blog/835_does-the-table-1-fallacy-apply-if-it-is-table-s1-instead/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Marshaling the black ants: Practical knowledge in a small village in Malaysia</title>
		<link>http://www.deaneckles.com/blog/815_marshaling-the-black-ants-practical-knowledge-in-a-small-village-in-malaysia/</link>
					<comments>http://www.deaneckles.com/blog/815_marshaling-the-black-ants-practical-knowledge-in-a-small-village-in-malaysia/#respond</comments>
		
		<dc:creator><![CDATA[Dean Eckles]]></dc:creator>
		<pubDate>Wed, 27 Jun 2018 19:33:05 +0000</pubDate>
				<category><![CDATA[culture]]></category>
		<category><![CDATA[ethnography]]></category>
		<category><![CDATA[food]]></category>
		<category><![CDATA[political science]]></category>
		<guid isPermaLink="false">http://www.deaneckles.com/blog/?p=815</guid>

					<description><![CDATA[One of my favorite bits from James Scott&#8217;s Seeing Like a State: While doing fieldwork in a small village in Malaysia, I was constantly struck by the breadth of my neighbors’ skills and their casual knowledge of local ecology. One particular anecdote is representative. Growing in the compound of the house in which I lived was [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>One of my favorite bits from James Scott&#8217;s <em>Seeing Like a State</em>:</p>
<blockquote><p>While doing fieldwork in a small village in Malaysia, I was constantly struck by the breadth of my neighbors’ skills and their casual knowledge of local ecology. One particular anecdote is representative. Growing in the compound of the house in which I lived was a locally famous mango tree. Relatives and acquaintances would visit when the fruit was ripe in the hope of being given a few fruits and, more important, the chance to save and plant the seeds next to their own house. Shortly before my arrival, however, the tree had become infested with large red ants, which destroyed most of the fruit before it could ripen. It seemed nothing could be done short of bagging each fruit. Several times I noticed the elderly head of household, Mat Isa, bringing dried nipah palm fronds to the base of the mango tree and checking them. When I finally got around to asking what he was up to, he explained it to me, albeit reluctantly, as for him this was pretty humdrum stuff compared to our usual gossip. He knew that small black ants, which had a number of colonies at the rear of the compound, were the enemies of large red ants. He also knew that the thin, lancelike leaves of the nipah palm curled into long, tight tubes when they fell from the tree and died. (In fact, the local people used the tubes to roll their cigarettes.) Such tubes would also, he knew, be ideal places for the queens of the black ant colonies to lay their eggs. Over several weeks he placed dried nipah fronds in strategic places until he had masses of black-ant eggs beginning to hatch. He then placed the egg-infested fronds against the mango tree and observed the ensuing week-long Armageddon. Several neighbors, many of them skeptical, and their children followed the fortunes of the ant war closely. Although smaller by half or more, the black ants finally had the weight of numbers to prevail against the red ants and gain possession of the ground at the base of the mango tree. 
As the black ants were not interested in the mango leaves or fruits while the fruits were still on the tree, the crop was saved.</p>
<p>This successful field experiment in biological controls presupposes several kinds of knowledge: the habitat and diet of black ants, their egg-laying habits, a guess about what local material would substitute as movable egg chambers, and experience with the fighting proclivities of red and black ants. Mat Isa made it clear that such skill in practical entomology was quite widespread, at least among his older neighbors, and that people remembered something like this strategy having worked once or twice in the past. What is clear to me is that no agricultural extension official would have known the first thing about ants, let alone biological controls; most extension agents were raised in town and in any case were concerned entirely with rice, fertilizer, and loans. Nor would most of them think to ask; they were, after all, the experts, trained to instruct the peasant. It is hard to imagine this knowledge being created and maintained except in the context of lifelong observation and a relatively stable, multigenerational community that routinely exchanges and preserves knowledge of this kind. [Chapter 9]</p></blockquote>
]]></content:encoded>
					
					<wfw:commentRss>http://www.deaneckles.com/blog/815_marshaling-the-black-ants-practical-knowledge-in-a-small-village-in-malaysia/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Selecting effective means to any end</title>
		<link>http://www.deaneckles.com/blog/803_selecting-effective-means-to-any-end/</link>
					<comments>http://www.deaneckles.com/blog/803_selecting-effective-means-to-any-end/#respond</comments>
		
		<dc:creator><![CDATA[Dean Eckles]]></dc:creator>
		<pubDate>Tue, 20 Mar 2018 00:05:11 +0000</pubDate>
				<category><![CDATA[data collection]]></category>
		<category><![CDATA[experiments]]></category>
		<category><![CDATA[Facebook]]></category>
		<category><![CDATA[heuristics]]></category>
		<category><![CDATA[individual differences]]></category>
		<category><![CDATA[influence]]></category>
		<category><![CDATA[markets]]></category>
		<category><![CDATA[personality]]></category>
		<category><![CDATA[persuasion profiling]]></category>
		<category><![CDATA[persuasive technology]]></category>
		<category><![CDATA[privacy]]></category>
		<category><![CDATA[psychology]]></category>
		<category><![CDATA[shopping]]></category>
		<category><![CDATA[social software]]></category>
		<category><![CDATA[surveillance]]></category>
		<guid isPermaLink="false">http://www.deaneckles.com/blog/?p=803</guid>

					<description><![CDATA[How are psychographic personalization and persuasion profiling different from more familiar forms of personalization and recommendation systems? A big difference is that they focus on selecting the &#8220;how&#8221; or the &#8220;means&#8221; of inducing you to an action — rather than selecting the &#8220;what&#8221; or the &#8220;ends&#8221;. Given the recent interest in this kind of personalization, [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p style="padding-left: 30px;"><em>How are psychographic personalization and persuasion profiling different from more familiar forms of personalization and recommendation systems? A big difference is that they focus on selecting the &#8220;how&#8221; or the &#8220;means&#8221; of inducing you to an action — rather than selecting the &#8220;what&#8221; or the &#8220;ends&#8221;. Given the recent interest in this kind of personalization, I wanted to highlight some excerpts from something Maurits Kaptein and I wrote in 2010.</em><sup><a href="http://www.deaneckles.com/blog/803_selecting-effective-means-to-any-end/#footnote_0_803" id="identifier_0_803" class="footnote-link footnote-identifier-link" title="We were of course influenced by B.J. Fogg&amp;#8217;s previous use of the term &amp;#8216;persuasion profiling&amp;#8217;, including in his comments to the Federal Trade Commission in 2006.">1</a></sup></p>
<p style="padding-left: 30px;"><em>This post excerpts our 2010 article, a version of which was published as:</em><br><em> Kaptein, M., &amp; Eckles, D. (2010). <a href="http://www.deaneckles.com/downloads/SelectingEffectiveMeansToAnyEnd.pdf">Selecting effective means to any end: Futures and ethics of persuasion profiling</a>. In International Conference on Persuasive Technology (pp. 82-93). Springer Lecture Notes in Computer Science.</em></p>
<p style="padding-left: 30px;"><em>For more on this topic, see <a href="http://www.persuasion-profiling.com/downloads/">these papers</a>.</em></p>
<p class="p1">We distinguish between those adaptive persuasive technologies that adapt the particular ends they try to bring about and those that adapt their means to some end.</p>
<p class="p1">First, there are systems that use models of individual users to select particular ends that are instantiations of more general target behaviors. If the more general target behavior is book buying, then such a system may select which specific books to present.</p>
<p class="p1">Second, adaptive persuasive technologies that change their means adapt the persuasive strategy that is used — independent of the end goal. One could offer the same book and for some people show the message that the book is recommended by experts, while for others emphasizing that the book is almost out of stock. Both messages may be true, but the effect of each differs between users.</p>
<h3 class="p1" style="padding-left: 30px;">Example 2. Ends adaptation in recommender systems</h3>
<p class="p1" style="padding-left: 30px;">Pandora is a popular music service that tries to engage music listeners and persuade them to spend more time on the site and, ultimately, to subscribe. For both goals it is beneficial for Pandora if users enjoy the music presented to them, which requires matching the music offered to individuals’ potentially latent music preferences. In doing so, Pandora adaptively selects the end — the actual song that is listened to and that could be purchased — rather than the means — the reasons presented for the selection of one specific song.</p>
<p class="p1">The distinction between end-adaptive persuasive technologies and means-adaptive persuasive technologies is important to discuss since adaptation in the latter case could be domain independent. In end adaptation, we can expect that little of the knowledge of the user that is gained by the system can be used in other domains (e.g. book preferences are likely minimally related to optimally specifying goals in a mobile exercise coach). Means adaptation is potentially quite the opposite. If an agent expects that a person is more responsive to authority claims than to other influence strategies in one domain, it may well be that authority claims are also more effective for that user than other strategies in a different domain. While we focus on novel means-adaptive systems, it is actually quite common for human influence agents to adaptively select their means.</p>
<h2 class="p2">Influence Strategies and Implementations</h2>
<p class="p1">Means-adaptive systems select different means by which to bring about some attitude or behavior change. The distinction between adapting means and ends is an abstract and heuristic one, so it will be helpful to describe one particular way to think about means in persuasive technologies. One way to individuate means of attitude and behavior change is to identify distinct influence strategies, each of which can have many implementations. Investigators studying persuasion and compliance-gaining have varied in how they individuate influence strategies: Cialdini [5] elaborates on six strategies at length, Fogg [8] describes 40 strategies under a more general definition of persuasion, and others have listed over 100 [16].</p>
<p class="p1">Despite this variation in their individuation, influence strategies are a useful level of analysis that helps to group and distinguish specific influence tactics. In the context of means adaptation, human and computer persuaders can select influence strategies they expect to be more effective than other influence strategies. In particular, the effectiveness of a strategy can vary with attitude and behavior change goals. Different influence strategies are most effective in different stages of the attitude to behavior continuum [1]. These range from use of heuristics in the attitude stage to use of conditioning when a behavioral change has been established and needs to be maintained [11]. Fogg [10] further illustrates this complexity and the importance of considering variation in target behaviors by presenting a two-dimensional matrix of 35 classes of behavior change that vary by (1) the schedule of change (e.g., one time, on cue) and (2) the type of change (e.g., perform new behavior vs. familiar behavior). So even for persuasive technologies that do not adapt to individuals, selecting an influence strategy — the means — is important. We additionally contend that influence strategies are also a useful way to represent individual differences [9] — differences which may be large enough that strategies that are effective on average have negative effects for some people.</p>
<h3 class="p1" style="padding-left: 30px;">Example 4. Backfiring of influence strategies</h3>
<p class="p1" style="padding-left: 30px;">John just subscribed to a digital workout coaching service. This system measures his activity using an accelerometer and provides John feedback through a Web site. This feedback is accompanied by recommendations from a general practitioner to modify his workout regime. John has all through his life been known as authority averse and dislikes the top-down recommendation style used. After three weeks using the service, John’s exercise levels have decreased.</p>
<h2 class="p2">Persuasion Profiles</h2>
<p class="p1">When systems represent individual differences as variation in responses to influence strategies — and adapt to these differences — they are engaging in persuasion profiling. Persuasion profiles are thus collections of expected effects of different influence strategies for a specific individual. Hence, an individual’s persuasion profile indicates which influence strategies — one way of individuating means of attitude and behavior change — are expected to be most effective.</p>
<p class="p1">Persuasion profiles can be based on demographic, personality, and behavioral data. Relying primarily on behavioral data has recently become a realistic option for interactive technologies, since vast amounts of data about individuals’ behavior in response to attempts at persuasion are currently collected. These data describe how people have responded to presentations of certain products (e.g. e-commerce) or complied with requests by persuasive technologies (e.g. the DirectLife Activity Monitor [12]).</p>
<p class="p1">Existing systems record responses to particular messages — implementations of one or more influence strategies — to aid profiling. For example, Rapleaf uses responses by a user’s friends to particular advertisements to select the message to present to that user [2]. If influence attempts are identified as being implementations of particular strategies, then such systems can “borrow strength” in predicting responses to other implementations of the same strategy or related strategies. Many of these scenarios also involve the collection of personally identifiable information, so persuasion profiles can be associated with individuals across different sessions and services.</p>
<h2 class="p2">Consequences of Means Adaptation</h2>
<p class="p1">In the remainder of this paper we will focus on the implications of the usage of persuasion profiles in means-adaptive persuasive systems. There are two properties of these systems which make this discussion important:</p>
<p class="p1"><strong>1. End-independence:</strong> Contrary to profiles used by end-adaptive persuasive systems, the knowledge gained about people in means-adaptive systems can be used independently of the end goal. Hence, persuasion profiles can be used independent of context and can be exchanged between systems.</p>
<p class="p1"><strong>2. Undisclosed:</strong> While the adaptation in end-adaptive persuasive systems is often most effective when disclosed to the user, this is not necessarily the case in means-adaptive persuasive systems powered by persuasion profiles. Selecting a different influence strategy is likely less salient than changing a target behavior and thus will often not be noticed by users.</p>
<p class="p1">Although these two notions have already been hinted at through the previous examples and the discussion of adaptive persuasive systems, we feel it is important to examine each in more detail.</p>
<h3 class="p1">End-Independence</h3>
<p class="p1">Means-adaptive persuasive technologies are distinctive in their end-independence: a persuasion profile created in one context can be applied to bringing about other ends in that same context or to behavior or attitude change in a quite different context. This feature of persuasion profiling is best illustrated by contrast with end adaptation.</p>
<p class="p1">Any adaptation that selects the particular end (or goal) of a persuasive attempt is inherently context-specific. Though there may be associations between individual differences across contexts (e.g., between book preferences and political attitudes), these associations are themselves specific to pairs of contexts. On the other hand, persuasion profiles are designed and expected to be independent of particular ends and contexts. For example, we propose that a person’s tendency to comply more with appeals by experts than with those by friends is present both in compliance with a medical regimen and in purchase decisions.</p>
<p class="p1">It is important to clarify exactly what is required for end-independence to obtain. If we say that a persuasion profile is end-independent, then this does not imply that the effectiveness of influence strategies is constant across all contexts. Consistent with the results reviewed in section 3, we acknowledge that influence strategy effectiveness depends on, e.g., the type of behavior change. That is, we expect that the most effective influence strategy for a system to employ, even given the user’s persuasion profile, would depend on both context and target behavior. Instead, end-independence requires that the difference between the average effect of a strategy for the population and the effect of that strategy for a specific individual is relatively consistent across contexts and ends.</p>
<h4 class="p1">Implications of end-independence.</h4>
<p class="p1">From end-independence, it follows that persuasion profiles could potentially be created by, and shared with, a number of systems that use and modify these profiles. For example, the profile constructed from observing a user’s online shopping behavior can be of use in increasing compliance in saving energy. Behavioral measures in both contexts can then contribute to refining the existing profile.<sup><a href="http://www.deaneckles.com/blog/803_selecting-effective-means-to-any-end/#footnote_1_803" id="identifier_1_803" class="footnote-link footnote-identifier-link" title="This point can also be made in the language of interaction effects in analysis of variance: Persuasion profiles are estimates of person&ndash;strategy interaction effects. Thus, the end-independence of persuasion profiles requires not that the two-way strategy&ndash;context interaction effect is small, but that the three-way person&ndash;strategy&ndash;context interaction is small.">2</a></sup></p>
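<p>The ANOVA framing in the footnote can be written out explicitly. The notation below is ours, added only for illustration: decompose the effect of strategy <em>s</em> on person <em>p</em> in context <em>c</em> as</p>

```latex
\[
\tau_{p,s,c} \;=\; \mu + \alpha_s + \gamma_c + (\alpha\gamma)_{s,c} + \pi_{p,s} + \varepsilon_{p,s,c}
\]
% \mu: grand mean effect; \alpha_s: strategy main effect;
% \gamma_c: context main effect; (\alpha\gamma)_{s,c}: strategy--context
% interaction; \pi_{p,s}: the person--strategy interaction that a
% persuasion profile estimates; \varepsilon_{p,s,c}: the residual
% person--strategy--context interaction.
```

<p>End-independence requires only that the residual three-way term be small. The strategy–context interaction may remain large, which is why the best strategy to use can still differ across contexts even for a fixed profile.</p>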
<p class="p1">Not only could persuasion profiles be used across contexts within a single organization, but there is the option of exchanging the persuasion profiles between corporations, governments, other institutions, and individuals. A market for persuasion profiles could develop [9], as currently exists for other data about consumers. Even if a system that implements persuasion profiling does so ethically, once constructed, the profiles can be used for ends not anticipated by their designers.</p>
<p class="p1">Persuasion profiles are another kind of information about individuals collected by corporations that individuals may not have effective access to. This raises issues of data ownership. Do individuals have access to their complete persuasion profiles or other indicators of the contents of the profiles? Are individuals compensated for this valuable information [14]? If an individual wants to use Amazon’s persuasion profile to jump-start a mobile exercise coach’s adaptation, there may or may not be technical and/or legal mechanisms to obtain and transfer this profile.</p>
<h3 class="p1">Non-disclosure</h3>
<p class="p1">Means-adaptive persuasive systems are able, and likely, not to disclose that they are adapting to individuals. This can be contrasted with end adaptation, in which it is often advantageous for the agent to disclose the adaptation, and the adaptation is potentially easy to detect. For example, when Amazon recommends books for an individual, it makes clear that these are personalized recommendations — thus benefiting from effects of apparent personalization and enabling it to present reasons why these books were recommended. In contrast, with means adaptation, not only may the results of the adaptation be less visible to users (e.g. emphasizing either “Pulitzer Prize winning” or “International bestseller”), but disclosure of the adaptation may reduce the target attitude or behavior change.</p>
<p class="p1">It is hypothesized that the effectiveness of social influence strategies is, at least partly, caused by automatic processes. According to dual-process models [4], under low elaboration message variables manipulated in the selection of influence strategies lead to compliance without much thought. These dual-process models distinguish between central (or systematic) processing, which is characterized by elaboration on and consideration of the merits of presented arguments, and peripheral (or heuristic) processing, which is characterized by responses to cues associated with, but peripheral to the central arguments of, the advocacy through the application of simple, cognitively “cheap”, but fallible rules [13]. Disclosure of means adaptation may increase elaboration on the implementations of the selected influence strategies, decreasing their effectiveness if they operate primarily via heuristic processing. More generally, disclosure of means adaptation is a disclosure of persuasive intent, which can increase elaboration and resistance to persuasion.</p>
<h4 class="p1">Implications of non-disclosure.</h4>
<p class="p1">The fact that persuasion profiles can be obtained and used without disclosing this to users is potentially a cause for concern. Potential reductions in effectiveness upon disclosure incentivize system designers to avoid disclosing means adaptation.</p>
<p class="p1">Non-disclosure of means adaptation may have additional implications when combined with value being placed on the construction of an accurate persuasion profile. This requires some explanation. A simple system engaged in persuasion profiling could select influence strategies and implementations based on which is estimated to have the largest effect in the present case; the model would thus be engaged in passive learning. However, we anticipate that systems will take a more complex approach, employing active learning techniques [e.g., 6]. In active learning the actions selected by the system (e.g., the selection of the influence strategy and its implementation) are chosen based not only on the value of any resulting attitude or behavior change but also on the value of the predicted improvements to the model that result from observing the individual’s response. Increased precision, generality, or comprehensiveness of a persuasion profile may be valued (a) because the profile will be more effective in the present context or (b) because a more precise profile would be more effective in another context or more valuable in a market for persuasion profiles.</p>
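<p>As a caricature of this passive-versus-active distinction, a system might score each strategy by its estimated effect plus a bonus for how much it would learn by trying it. The strategy names, observed counts, and scoring rule below are invented for illustration, not taken from the paper:</p>

```python
import math

# Hypothetical persuasion profile for one user: per-strategy
# (successes, failures) observed so far. Beliefs about each strategy's
# compliance probability are Beta(successes + 1, failures + 1).
profile = {
    "authority": (8, 2),
    "scarcity": (1, 1),
    "consensus": (5, 5),
}

def choose_strategy(profile, exploration_weight):
    """Pick the strategy maximizing posterior mean + weight * posterior sd.

    exploration_weight = 0 gives the 'passive' system that always uses
    the apparently most effective strategy; a positive weight also values
    the information gained by trying strategies it is uncertain about.
    """
    def score(successes, failures):
        a, b = successes + 1, failures + 1
        mean = a / (a + b)
        var = a * b / ((a + b) ** 2 * (a + b + 1))
        return mean + exploration_weight * math.sqrt(var)

    return max(profile, key=lambda name: score(*profile[name]))
```

<p>With a weight of zero this picks &#8220;authority&#8221; (the highest estimated compliance rate); with a large enough weight it instead probes &#8220;scarcity&#8221;, the least-observed strategy — trading some immediate effectiveness for a more precise profile, exactly the behavior discussed above.</p>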
<p class="p1">These latter cases involve systems taking actions that are estimated to be non-optimal for their apparent goals. For example, a mobile exercise coach could present a message that is not estimated to be the most effective in increasing overall activity level in order to build a more precise, general, or comprehensive persuasion profile. Users of such a system might reasonably expect that it is designed to be effective in coaching them, but it is in fact also selecting actions for other reasons — e.g., because selling precise, general, and comprehensive persuasion profiles is part of the company’s business plan. That is, if a system is designed to value constructing a persuasion profile, its behavior may differ substantially from its anticipated core behavior.</p>
<div class="references">
<p class="p3">[1] Aarts, E.H.L., Markopoulos, P., Ruyter, B.E.R.: The persuasiveness of ambient intelligence. In: Petkovic, M., Jonker, W. (eds.) Security, Privacy and Trust in Modern Data Management. Springer, Heidelberg (2007)</p>
<p class="p3">[2] Baker, S.: Learning, and profiting, from online friendships. BusinessWeek 9(22) (May 2009)</p>
<p class="p3">[3] Berdichevsky, D., Neunschwander, E.: Toward an ethics of persuasive technology. Commun. ACM 42(5), 51–58 (1999)</p>
<p class="p3">[4] Cacioppo, J.T., Petty, R.E., Kao, C.F., Rodriguez, R.: Central and peripheral routes to persuasion: An individual difference perspective. Journal of Personality and Social Psychology 51(5), 1032–1043 (1986)</p>
<p class="p3">[5] Cialdini, R.: Influence: Science and Practice. Allyn &amp; Bacon, Boston (2001)</p>
<p class="p3">[6] Cohn, D.A., Ghahramani, Z., Jordan, M.I.: Active learning with statistical models. Journal of Artificial Intelligence Research 4, 129–145 (1996)</p>
<p class="p3">[7] Eckles, D.: Redefining persuasion for a mobile world. In: Fogg, B.J., Eckles, D. (eds.) Mobile Persuasion: 20 Perspectives on the Future of Behavior Change. Stanford Captology Media, Stanford (2007)</p>
<p class="p3">[8] Fogg, B.J.: Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann, San Francisco (2002)</p>
<p class="p3">[9] Fogg, B.J.: Protecting consumers in the next tech-ade, U.S. Federal Trade Commission hearing (November 2006), http://www.ftc.gov/bcp/workshops/techade/pdfs/transcript_061107.pdf</p>
<p class="p3">[10] Fogg, B.J.: The behavior grid: 35 ways behavior can change. In: Proc. of Persuasive Technology 2009, p. 42. ACM, New York (2009)</p>
<p class="p3">[11] Kaptein, M., Aarts, E.H.L., Ruyter, B.E.R., Markopoulos, P.: Persuasion in ambient intelligence. Journal of Ambient Intelligence and Humanized Computing 1, 43–56 (2009)</p>
<p class="p3">[12] Lacroix, J., Saini, P., Goris, A.: Understanding user cognitions to guide the tailoring of persuasive technology-based physical activity interventions. In: Proc. of Persuasive Technology 2009, vol. 350, p. 9. ACM, New York (2009)</p>
<p class="p3">[13] Petty, R.E., Wegener, D.T.: The elaboration likelihood model: Current status and controversies. In: Chaiken, S., Trope, Y. (eds.) Dual-process theories in social psychology, pp. 41–72. Guilford Press, New York (1999)</p>
<p class="p3">[14] Prabhaker, P.R.: Who owns the online consumer? Journal of Consumer Marketing 17, 158–171 (2000)</p>
<p class="p3">[15] Rawls, J.: The independence of moral theory. In: Proceedings and Addresses of the American Philosophical Association, vol. 48, pp. 5–22 (1974)</p>
<p class="p3">[16] Rhoads, K.: How many influence, persuasion, compliance tactics &amp; strategies are there? (2007), <span class="s1">http://www.workingpsychology.com/numbertactics.html</span></p>
<p class="p3">[17] Schafer, J.B., Konstan, J.A., Riedl, J.: E-commerce recommendation applications. Data Mining and Knowledge Discovery 5(1/2), 115–153 (2001)</p>
</div>
<ol class="footnotes"><li id="footnote_0_803" class="footnote">We were of course influenced by B.J. Fogg&#8217;s previous use of the term &#8216;persuasion profiling&#8217;, including in <a href="http://www.ftc.gov/bcp/workshops/techade/pdfs/transcript_061107.pdf">his comments to the Federal Trade Commission in 2006</a>.</li><li id="footnote_1_803" class="footnote">This point can also be made in the language of interaction effects in analysis of variance: Persuasion profiles are estimates of person–strategy interaction effects. Thus, the end-independence of persuasion profiles requires not that the two-way strategy–context interaction effect is small, but that the three-way person–strategy–context interaction is small.</li></ol>]]></content:encoded>
					
					<wfw:commentRss>http://www.deaneckles.com/blog/803_selecting-effective-means-to-any-end/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Is thinking about monetization a waste of our best minds?</title>
		<link>http://www.deaneckles.com/blog/775_is-thinking-about-monetization-a-waste-of-our-best-minds/</link>
					<comments>http://www.deaneckles.com/blog/775_is-thinking-about-monetization-a-waste-of-our-best-minds/#comments</comments>
		
		<dc:creator><![CDATA[Dean Eckles]]></dc:creator>
		<pubDate>Mon, 18 Sep 2017 22:59:57 +0000</pubDate>
				<category><![CDATA[culture]]></category>
		<category><![CDATA[history]]></category>
		<category><![CDATA[influence]]></category>
		<category><![CDATA[marketing]]></category>
		<category><![CDATA[markets]]></category>
		<category><![CDATA[participatory media]]></category>
		<guid isPermaLink="false">http://www.deaneckles.com/blog/?p=775</guid>

					<description><![CDATA[I just recently watched this talk by Jack Conte, musician, video artist, and cofounder of Patreon: Jack dives into how rapidly the Internet has disrupted the business of selling reproducible works, such as recorded music, investigative reporting, etc. And how important — and exciting — it is build new ways for the people who create [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>I just recently watched this talk by Jack Conte, musician, video artist, and cofounder of Patreon:</p>
<p><iframe src="https://embed.ted.com/talks/jack_conte_how_artists_can_finally_get_paid_in_the_digital_age" width="550px" height="350px" frameborder="0" scrolling="no" allowfullscreen="allowfullscreen"></iframe></p>
<p>Jack dives into how rapidly the Internet has disrupted the business of selling reproducible works, such as recorded music, investigative reporting, etc. And how important — and exciting — it is to build new ways for the people who create these works to be able to make a living doing so. Of course, Jack has some particular ways of doing that in mind — such as subscriptions and subscription-like patronage of artists, such as via Patreon.</p>
<p>But this also made me think about this much-repeated<sup><a href="http://www.deaneckles.com/blog/775_is-thinking-about-monetization-a-waste-of-our-best-minds/#footnote_0_775" id="identifier_0_775" class="footnote-link footnote-identifier-link" title="So often repeated that Hammerbacher said to Charlie Rose, &ldquo;That&rsquo;s going to be on my tombstone, I think.&rdquo;">1</a></sup> <a href="https://www.fastcompany.com/3008436/why-data-god-jeffrey-hammerbacher-left-facebook-found-cloudera">quote</a> from Jeff Hammerbacher (formerly of Facebook, Cloudera, and now doing bioinformatics research):</p>
<blockquote><p>&#8220;The best minds of my generation are thinking about how to make people click ads. That sucks.&#8221;</p></blockquote>
<p>I certainly agree that many other types of research can be very important and impactful, and often more so than working on data infrastructure, machine learning, market design, etc. for advertising. However, Jack Conte&#8217;s talk certainly helped make the case for me that monetization of &#8220;content&#8221; is something that has been disrupted already but needs some of the best minds to figure out new ways for creators of valuable works to make money.</p>
<p>Some of this might be coming up with new arrangements altogether. But it seems like this will continue to occur partially through advertising revenue. Jack highlights how little ad revenue he often saw — even as his videos were getting millions of views. And newspapers have been less able to monetize online attention through advertising than they had been able to in print.</p>
<p><figure style="width: 282px" class="wp-caption aligncenter"><img fetchpriority="high" decoding="async" class="" src="http://assets.pewresearch.org/wp-content/uploads/sites/12/2017/05/31150957/FT_17.05.25_newspapers_revenue3.png" width="282" height="302" /><figcaption class="wp-caption-text">From Pew Research Center&#8217;s report <a href="http://www.pewresearch.org/fact-tank/2017/06/01/circulation-and-revenue-fall-for-newspaper-industry/">&#8220;Despite subscription surges for largest U.S. newspapers, circulation and revenue fall for industry overall&#8221;</a>.</figcaption></figure></p>
<p>Some of this may reflect that advertising dollars were just really poorly allocated before. But improving this situation will require a mix of work on advertising — certainly beyond just getting people to click on ads — such as providing credible measurement of the effects and ROI of advertising, improving targeting of advertising, and more.</p>
<p>Another side of this question is that advertising remains an important part of our culture and force for attitude and behavior change. Certainly looking back on 2016 right now, many people are interested in what effects political advertising had.</p>
<p>So maybe it isn&#8217;t so bad if at least <em>some</em> of our best minds are working on online advertising.</p>
<ol class="footnotes">
<li id="footnote_0_775" class="footnote">So often repeated that Hammerbacher said to Charlie Rose, “That’s going to be on my tombstone, I think.”</li>
</ol>
]]></content:encoded>
					
					<wfw:commentRss>http://www.deaneckles.com/blog/775_is-thinking-about-monetization-a-waste-of-our-best-minds/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Total war, and armaments as &#8220;superior goods&#8221;</title>
		<link>http://www.deaneckles.com/blog/762_total-war-and-armaments-as-superior-goods/</link>
					<comments>http://www.deaneckles.com/blog/762_total-war-and-armaments-as-superior-goods/#comments</comments>
		
		<dc:creator><![CDATA[Dean Eckles]]></dc:creator>
		<pubDate>Tue, 07 Mar 2017 18:38:52 +0000</pubDate>
				<category><![CDATA[consumption]]></category>
		<category><![CDATA[history]]></category>
		<category><![CDATA[political science]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://www.deaneckles.com/blog/?p=762</guid>

					<description><![CDATA[Hobsbawm on industrialization, mass mobilization, and &#8220;total war&#8221; in The Age of Extremes: A History of the World, 1914-1991 (ch. 1): Jane Austen wrote her novels during the Napoleonic wars, but no reader who did not know this already would guess it, for the wars do not appear in her pages, even though a number [&#8230;]]]></description>
										<content:encoded><![CDATA[<p class="p1">Hobsbawm on industrialization, mass mobilization, and &#8220;total war&#8221; in <em>The Age of Extremes: A History of the World, 1914-1991</em> (ch. 1):</p>
<blockquote>
<p class="p1">Jane Austen wrote her novels during the Napoleonic wars, but no reader who did not know this already would guess it, for the wars do not appear in her pages, even though a number of the young gentlemen who pass through them undoubtedly took part in them. It is inconceivable that any novelist could write about Britain in the twentieth-century wars in this manner.</p>
<p class="p1">The monster of twentieth-century total war was not born full-sized. Nevertheless, from 1914 on, wars were unmistakably mass wars. Even in the First World War Britain mobilized 12.5 per cent of its men for the forces, Germany 15.4 per cent, France almost 17 per cent. In the Second World War the percentage of the total active labour force that went into the armed forces was pretty generally in the neighborhood of 20 per cent (Milward, 1979, p. 216). We may note in passing that such a level of mass mobilization, lasting for a matter of years, cannot be maintained except by a modern high-productivity industrialized economy, and – or alternatively – an economy largely in the hands of the non-combatant parts of the population. Traditional agrarian economies cannot usually mobilize so large a proportion of their labour force except seasonally, at least in the temperate zone, for there are times in the agricultural year when all hands are needed (for instance to get in the harvest). Even in industrial societies so great a manpower mobilization puts enormous strains on the labour force, which is why modern mass wars both strengthened the powers of organized labour and produced a revolution in the employment of women outside the household: temporarily in the First World War, permanently in the Second World War.</p>
</blockquote>
<p class="p1">A <em>superior good</em> is something that one purchases more of as income rises. Here it is appealing, at least metaphorically, to see the huge expenditures on industrial armaments as revealing arms to be superior goods in this sense.</p>
]]></content:encoded>
					
					<wfw:commentRss>http://www.deaneckles.com/blog/762_total-war-and-armaments-as-superior-goods/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Using covariates to increase the precision of randomized experiments</title>
		<link>http://www.deaneckles.com/blog/745_using-covariates-to-increase-the-precision-of-randomized-experiments/</link>
					<comments>http://www.deaneckles.com/blog/745_using-covariates-to-increase-the-precision-of-randomized-experiments/#comments</comments>
		
		<dc:creator><![CDATA[Dean Eckles]]></dc:creator>
		<pubDate>Wed, 25 Jan 2017 23:43:58 +0000</pubDate>
				<category><![CDATA[causal inference]]></category>
		<category><![CDATA[econometrics]]></category>
		<category><![CDATA[experiments]]></category>
		<category><![CDATA[research methods]]></category>
		<category><![CDATA[statistics]]></category>
		<guid isPermaLink="false">http://www.deaneckles.com/blog/?p=745</guid>

					<description><![CDATA[A simple difference-in-means estimator of the average treatment effect (ATE) from a randomized experiment is, being unbiased, a good start, but may often leave a lot of additional precision on the table. Even if you haven&#8217;t used covariates (pre-treatment variables observed for your units) in the design of the experiment (e.g., this is often difficult [&#8230;]]]></description>
					<content:encoded><![CDATA[<p>A simple difference-in-means estimator of the average treatment effect (ATE) from a randomized experiment is, being unbiased, a good start, but may often leave a lot of additional precision on the table. Even if you haven&#8217;t used covariates (pre-treatment variables observed for your units) in the design of the experiment (e.g., this is often difficult to do in streaming random assignment in Internet experiments; see <a href="https://arxiv.org/abs/1409.3174">our paper</a>), you can use them to increase the precision of your estimates in the analysis phase. Here are some simple ways to do that. I&#8217;m not including a whole range of more sophisticated/complicated approaches. And, of course, if you don&#8217;t have any covariates for the units in your experiments — or they aren&#8217;t very predictive of your outcome — none of this will help you much.</p>
<h3>Post-stratification</h3>
<p>Prior to the experiment you could do stratified randomization (i.e. blocking) according to some categorical covariate (making sure that there are the same number of, e.g., each gender, country, and paid/free accounts in each treatment). But you can also do something similar after: compute an ATE within each stratum and then combine the strata-level estimates, weighting by the total number of observations in each stratum. For details — and proofs showing this often won&#8217;t be much worse than blocking — consult <a href="http://sekhon.berkeley.edu/papers/postadjustment.pdf">Miratrix, Sekhon &amp; Yu (2013)</a>.</p>
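A minimal sketch of the post-stratified estimator in Python (the data and function names here are hypothetical, not from the original post): compute the difference in means within each stratum, then combine with weights proportional to stratum size.

```python
from collections import defaultdict

def post_stratified_ate(y, treat, stratum):
    """Post-stratified ATE: within-stratum differences in means,
    combined with weights proportional to stratum size."""
    groups = defaultdict(lambda: {0: [], 1: []})
    for yi, ti, si in zip(y, treat, stratum):
        groups[si][ti].append(yi)
    n = len(y)
    ate = 0.0
    for arms in groups.values():
        n_s = len(arms[0]) + len(arms[1])
        diff = sum(arms[1]) / len(arms[1]) - sum(arms[0]) / len(arms[0])
        ate += (n_s / n) * diff  # weight by stratum share of the sample
    return ate
```

Note this assumes every stratum contains at least one treated and one control unit; in practice you may need to merge sparse strata first.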
<h3>Regression adjustment with a single covariate</h3>
<p>Often what you most want to adjust for is a single numeric covariate,<sup><a href="http://www.deaneckles.com/blog/745_using-covariates-to-increase-the-precision-of-randomized-experiments/#footnote_0_745" id="identifier_0_745" class="footnote-link footnote-identifier-link" title="As Winston Lin notes in the comments and as is implicit in my&nbsp;comparison with post-stratification, as long as the number of covariates is small and not growing with sample size, the same&nbsp;asymptotic results apply.">1</a></sup> such as a lagged version of your outcome (i.e., your outcome from some convenient period before treatment). You can simply use ordinary least squares regression to adjust for this covariate by regressing your outcome on both a treatment indicator and the covariate. Even better (particularly if treatment and control are different sized by design), you should regress your outcome on: a treatment indicator, the covariate centered such that it has mean zero, and the product of the two.<sup><a href="http://www.deaneckles.com/blog/745_using-covariates-to-increase-the-precision-of-randomized-experiments/#footnote_1_745" id="identifier_1_745" class="footnote-link footnote-identifier-link" title="Note that if the covariate is binary or, more generally, categorical, then this exactly coincides&nbsp;with the post-stratified estimator considered above.">2</a></sup> Asymptotically (and usually in practice with a reasonably sized experiment), this will increase precision and it is pretty easy to do. For more on this, see <a href="https://projecteuclid.org/euclid.aoas/1365527200">Lin (2012)</a>.</p>
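As a sketch of the interacted-regression adjustment (hypothetical data and names; with one covariate, the fully interacted OLS is equivalent to fitting a separate line in each arm and differencing the predictions at the overall covariate mean):

```python
def lin_adjusted_ate(y, treat, x):
    """ATE from OLS of y on treatment, the mean-centered covariate, and
    their interaction. Equivalent form used here: regress y on the centered
    covariate within each arm and difference the intercepts (i.e., the
    arm-specific predictions at the full-sample covariate mean)."""
    xbar = sum(x) / len(x)
    xc = [xi - xbar for xi in x]  # center so the intercept is at the mean

    def arm_prediction_at_mean(arm):
        ys = [yi for yi, ti in zip(y, treat) if ti == arm]
        xs = [xi for xi, ti in zip(xc, treat) if ti == arm]
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        var = sum((a - mx) ** 2 for a in xs)
        slope = cov / var
        # predicted outcome where the centered covariate equals zero
        return my - slope * mx

    return arm_prediction_at_mean(1) - arm_prediction_at_mean(0)
```

This is only a sketch; in practice you would use a regression routine that also gives you (robust) standard errors.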
<h3>Higher-dimensional adjustment</h3>
<p>If you have a lot more covariates to adjust for, you may want to use some kind of penalized regression. For example, you could use the Lasso (L1-penalized regression); see <a href="https://arxiv.org/abs/1507.03652">Bloniarz et al. (2016)</a>.</p>
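To make the idea concrete, here is a toy Lasso solver via cyclic coordinate descent (a standard algorithm for L1-penalized regression; this is an illustrative sketch, not the estimator from Bloniarz et al., and assumes a centered outcome with no intercept):

```python
def soft_threshold(z, g):
    """Soft-thresholding operator used in Lasso coordinate descent."""
    return max(abs(z) - g, 0.0) * (1 if z > 0 else -1)

def lasso_cd(X, y, lam, n_iter=200):
    """L1-penalized least squares via cyclic coordinate descent.
    X is a list of rows; y is assumed centered (no intercept fit)."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of covariate j with the partial residual
            rho = sum(
                X[i][j] * (y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j))
                for i in range(n)
            ) / n
            zj = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / zj
    return beta
```

The penalty shrinks small coefficients exactly to zero, which is why the Lasso is attractive when many covariates are weakly predictive.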
<h3>Use out-of-sample predictions from any model</h3>
<p>Maybe you instead want to use neural nets, trees, or an ensemble of a bunch of models? That&#8217;s fine, but if you want to be able to do valid statistical inference (i.e., get 95% confidence intervals that actually cover 95% of the time), you have to be careful. The easiest way to be careful in many Internet industry settings is just to use historical data to train the model and then get out-of-sample predictions Yhat from that model for your present experiment. You then just subtract Yhat from Y and use the simple difference-in-means estimator. <a href="http://aronow.research.yale.edu/unbiased.pdf">Aronow and Middleton (2013)</a> provide some technical details and extensions. A simple extension that makes this more robust to changes over time is to use this out-of-sample Yhat as a covariate, as described above.<sup><a href="http://www.deaneckles.com/blog/745_using-covariates-to-increase-the-precision-of-randomized-experiments/#footnote_2_745" id="identifier_2_745" class="footnote-link footnote-identifier-link" title="I added this sentence in response to Winston Lin&amp;#8217;s comment.">3</a></sup></p>
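The subtract-and-difference step is simple enough to write down directly (a sketch with hypothetical names; yhat is whatever out-of-sample prediction your historical model produces):

```python
def adjusted_diff_in_means(y, yhat, treat):
    """Difference-in-means estimator applied to y - yhat, where yhat is an
    out-of-sample prediction from a model fit only on historical data.
    Because yhat does not depend on the experiment, the estimator stays
    unbiased while the residuals typically have much lower variance."""
    d = [yi - yh for yi, yh in zip(y, yhat)]
    t = [di for di, ti in zip(d, treat) if ti == 1]
    c = [di for di, ti in zip(d, treat) if ti == 0]
    return sum(t) / len(t) - sum(c) / len(c)
```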
<ol class="footnotes">
<li id="footnote_0_745" class="footnote">As Winston Lin notes in the comments and as is implicit in my comparison with post-stratification, as long as the number of covariates is small and not growing with sample size, the same asymptotic results apply.</li>
<li id="footnote_1_745" class="footnote">Note that if the covariate is binary or, more generally, categorical, then this exactly coincides with the post-stratified estimator considered above.</li>
<li id="footnote_2_745" class="footnote">I added this sentence in response to Winston Lin&#8217;s comment.</li>
</ol>
]]></content:encoded>
					
					<wfw:commentRss>http://www.deaneckles.com/blog/745_using-covariates-to-increase-the-precision-of-randomized-experiments/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Adjusting biased samples</title>
		<link>http://www.deaneckles.com/blog/736_adjusting-biased-samples/</link>
					<comments>http://www.deaneckles.com/blog/736_adjusting-biased-samples/#respond</comments>
		
		<dc:creator><![CDATA[Dean Eckles]]></dc:creator>
		<pubDate>Thu, 13 Oct 2016 04:28:19 +0000</pubDate>
				<category><![CDATA[data collection]]></category>
		<category><![CDATA[econometrics]]></category>
		<category><![CDATA[political science]]></category>
		<category><![CDATA[research methods]]></category>
		<category><![CDATA[statistics]]></category>
		<guid isPermaLink="false">http://www.deaneckles.com/blog/?p=736</guid>

					<description><![CDATA[Nate Cohn at The New York Times reports on how one 19-year-old black man is having an outsized impact on the USC/LAT panel&#8217;s estimates of support for Clinton in the U.S. presidential election. It happens that the sample doesn&#8217;t have enough other people with similar demographics and voting history (covariates) to this panelist, so he [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Nate Cohn at The New York Times <a href="http://www.nytimes.com/2016/10/13/upshot/how-one-19-year-old-illinois-man-is-distorting-national-polling-averages.html?_r=0">reports on</a> how one 19-year-old black man is having an outsized impact on the USC/LAT panel&#8217;s estimates of support for Clinton in the U.S. presidential election. It happens that the sample doesn&#8217;t have enough other people with similar demographics and voting history (covariates) to this panelist, so he is getting a large weight in computing the overall averages for the populations of interest, such as likely voters:</p>
<blockquote>
<p id="story-continues-1" class="story-body-text story-content" data-para-count="104" data-total-count="104">There is a 19-year-old black man in Illinois who has no idea of the role he is playing in this election.</p>
<p class="story-body-text story-content" data-para-count="51" data-total-count="155">He is sure he is going to vote for <a class="meta-per" title="More articles about Donald J. Trump." href="http://www.nytimes.com/interactive/2016/us/elections/donald-trump-on-the-issues.html?inline=nyt-per">Donald J. Trump</a>.</p>
<p class="story-body-text story-content" data-para-count="293" data-total-count="448">And he has been held up as proof by conservatives — including outlets like Breitbart News and The New York Post — that Mr. Trump is excelling among black voters. He has even played a modest role in shifting entire polling aggregates, like the Real Clear Politics average, toward Mr. Trump.</p>
</blockquote>
<p>As usual, <a href="http://andrewgelman.com/2016/10/12/31398/">Andrew Gelman suggests that the solution to this problem is a technique he calls &#8220;Mr. P&#8221;</a> (multilevel regression and post-stratification). I wanted to comment on some practical tradeoffs among common methods. Maybe these are useful notes, which can be read alongside <a href="http://www.nytimes.com/interactive/2016/09/20/upshot/the-error-the-polling-world-rarely-talks-about.html">another nice piece by Nate Cohn on how different adjustment methods can yield very different polling results</a>.</p>
<h3>Post-stratification</h3>
<p>Complete post-stratification is when you compute the mean outcome (e.g., support for Clinton) for each stratum of people, such as 18-24-year-old black men, defined by the covariates <em>X</em>. Then you combine these weighting by the size of each group in the population of interest. This really only works when you have a lot of data compared with the number of strata — and the number of strata grows very fast in the number of covariates you want to adjust for.</p>
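A minimal sketch in Python (hypothetical data and names): compute each stratum's sample mean and combine with the known population shares.

```python
def post_stratified_mean(y, stratum, pop_share):
    """Complete post-stratification: stratum-level sample means combined
    with known population shares (pop_share values should sum to 1)."""
    totals, counts = {}, {}
    for yi, si in zip(y, stratum):
        totals[si] = totals.get(si, 0.0) + yi
        counts[si] = counts.get(si, 0) + 1
    # empty strata would raise KeyError here -- exactly the sparsity
    # problem described above
    return sum(pop_share[s] * totals[s] / counts[s] for s in totals)
```

Note how a stratum with zero (or very few) respondents breaks this immediately, which is why the number of covariates you can fully post-stratify on is so limited.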
<h3>Modeling sample inclusion and weighting</h3>
<p>When people talk about survey weighting, often what they mean is weighting by the inverse of the estimated probability of inclusion in the sample. You model selection into the survey <em>S</em> using, e.g., logistic regression on the covariates <em>X</em> and some interactions. This can be done with <a href="http://statweb.stanford.edu/~tibs/sta305files/Rudyregularization.pdf">regularization</a> (i.e., priors, shrinkage) since many of the terms in the model might be estimated with very few observations. Especially without enough regularization, this can result in very large weights when you don&#8217;t have enough of some particular type in your sample.</p>
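Once you have estimated inclusion probabilities (from whatever selection model), the weighted estimate itself is one line. A sketch, with hypothetical names, of the normalized (Hajek-style) version:

```python
def ipw_mean(y, p_include):
    """Estimate a population mean by weighting each respondent by the
    inverse of their estimated probability of inclusion in the sample,
    with weights normalized to sum to one (Hajek-style)."""
    w = [1.0 / p for p in p_include]  # rare types get large weights
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
```

The failure mode in the NYT story is visible here: a respondent with a tiny estimated inclusion probability gets an enormous weight and can move the whole estimate.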
<h3>Modeling the outcome and integrating</h3>
<p>You fit a model predicting the response (e.g., support for Clinton) <em>Y</em> with the covariates <em>X</em>. You regularize this model in some way so that the estimate for each person is going to &#8220;borrow strength&#8221; from other people with similar <em>X</em>s. So now you have fitted responses <em>Yhat</em> for each unique <em>X</em>. To get an estimate for a particular population of interest, integrate out over the distribution of <em>X</em> in that population. Gelman&#8217;s preferred version &#8220;Mr. P&#8221; uses a multilevel (aka hierarchical Bayes, random effects) model for the outcome, but other regularization methods may often be appealing.</p>
<p>This is nice because there can be some substantial efficiency gains (i.e. more precision) by making use of the outcome information. But there are also some practical issues. First, you need a model for each outcome in your analysis, rather than just having weights you could use for all outcomes and all recodings of outcomes. Second, the implicit weights that this process puts on each observation can vary from outcome to outcome — or even for different codings (i.e. a dichotomization of answers on a numeric scale) of the same outcome. <a href="http://andrewgelman.com/2016/10/12/31398/#comment-326872">In a reply to his post, Gelman notes</a> that you would need a different model for each outcome, but that some joint model for all outcomes would be ideal. Of course, the latter joint modeling approach, while appealing in some ways (many statisticians love having one model that subsumes everything&#8230;), means that adding a new outcome to the analysis would change all prior results.</p>
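To illustrate the &#8220;model, then integrate&#8221; idea with something self-contained: the sketch below (hypothetical data and names) uses a crude shrinkage of stratum means toward the grand mean as a stand-in for a real multilevel model — not Mr. P itself, just the shape of the computation.

```python
def model_and_poststratify(y, stratum, pop_share, k=5.0):
    """Toy version of 'model the outcome and integrate': shrink each
    stratum mean toward the grand mean (a crude stand-in for a multilevel
    model; k controls the pooling), then weight the shrunken estimates
    by the population share of each stratum."""
    grand = sum(y) / len(y)
    totals, counts = {}, {}
    for yi, si in zip(y, stratum):
        totals[si] = totals.get(si, 0.0) + yi
        counts[si] = counts.get(si, 0) + 1
    estimate = 0.0
    for s, share in pop_share.items():
        n_s = counts.get(s, 0)
        total_s = totals.get(s, 0.0)
        # partial pooling: small or empty strata are pulled to the grand mean
        yhat_s = (total_s + k * grand) / (n_s + k)
        estimate += share * yhat_s
    return estimate
```

Unlike complete post-stratification, this gives a (heavily pooled) answer even for strata with no respondents at all — the 19-year-old's stratum would no longer dominate.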
<p>Side note: Other methods, not described here, also work towards the aim of matching characteristics of the population distribution (e.g., iterative proportional fitting / raking). They strike me as overly specialized and not easy to adapt and extend.</p>
]]></content:encoded>
					
					<wfw:commentRss>http://www.deaneckles.com/blog/736_adjusting-biased-samples/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>It&#8217;s better for older workers to go a little fast: DocSend in Snow Crash</title>
		<link>http://www.deaneckles.com/blog/700_docsend_in_snow_crash/</link>
					<comments>http://www.deaneckles.com/blog/700_docsend_in_snow_crash/#comments</comments>
		
		<dc:creator><![CDATA[Dean Eckles]]></dc:creator>
		<pubDate>Wed, 07 May 2014 20:31:17 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://www.deaneckles.com/blog/?p=700</guid>

					<description><![CDATA[My friends at DocSend have just done their public launch (article, TechCrunch Disrupt presentation). DocSend provides easy ways to get analytics for documents (e.g., proposals, pitch decks, reports, memos) you send out, answering questions like: Who actually viewed the document? Which pages did they view? How much time did they spend on each page? The [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>My friends at <a href="http://docsend.com">DocSend</a> have just done their public launch (<a href="http://techcrunch.com/2014/05/06/docsend-is-the-analytics-tool-for-documents-weve-all-been-waiting-for/">article</a>, <a href="http://techcrunch.com/video/docsend-gives-you-analytics-for-documents/518221886/">TechCrunch Disrupt presentation</a>). DocSend provides easy ways to get analytics for documents (e.g., proposals, pitch decks, reports, memos) you send out, answering questions like: Who actually viewed the document? Which pages did they view? How much time did they spend on each page? The most common use cases for DocSend&#8217;s current customers involve sales, marketing, and startup fundraising &#8212; mainly sending documents to people outside an organization.</p>
<p>Ever since Russ, Dave, and Tony started floating these ideas, I&#8217;ve pointed out the similarity with an often forgotten scene<sup><a href="http://www.deaneckles.com/blog/700_docsend_in_snow_crash/#footnote_0_700" id="identifier_0_700" class="footnote-link footnote-identifier-link" title="I know it&amp;#8217;s often forgotten because I&amp;#8217;ve tried referring to the scene with many people who have read Snow Crash&mdash; or at least claim to have read it&amp;#8230;">1</a></sup> in <a href="http://www.amazon.com/gp/product/0553380958/ref=as_li_tl?ie=UTF8&amp;camp=211189&amp;creative=373489&amp;creativeASIN=0553380958&amp;link_code=as3&amp;tag=readytohandde-20&amp;linkId=TCKNZOOUWU3GHE35"><em>Snow Crash</em></a>, in which a character — Y.T.&#8217;s mom — is tracked by her employer (the Federal Government actually) as she reads a memo on a cost-saving program. Here&#8217;s an excerpt from Chapter 37:</p>
<p style="padding-left: 30px;">Y.T.&#8217;s mom pulls up the new memo, checks the time, and starts reading it. The estimated reading time is 15.62 minutes. Later, when Marietta [her boss] does her end-of-day statistical roundup, sitting in her private office at 9:00 P.M., she will see the name of each employee and next to it, the amount of time spent reading this memo, and her reaction, based on the time spent, will go something like this:</p>
<p style="padding-left: 30px;">• Less than 10 min.: Time for an employee conference and possible attitude counseling.<br />
• 10-14 min.: Keep an eye on this employee; may be developing slipshod attitude.<br />
• 14-15.61 min.: Employee is an efficient worker, may sometimes miss important details.<br />
• Exactly 15.62 min.: Smartass. Needs attitude counseling.<br />
• 15.63-16 min.: Asswipe. Not to be trusted.<br />
• 16-18 min.: Employee is a methodical worker, may sometimes get hung up on minor details.<br />
• More than 18 min.: Check the security videotape, see just what this employee was up to (e.g., possible unauthorized restroom break).</p>
<p style="padding-left: 30px;">Y.T.&#8217;s mom decides to spend between fourteen and fifteen minutes reading the memo. It&#8217;s better for younger workers to spend too long, to show that they&#8217;re careful, not cocky. It&#8217;s better for older workers to go a little fast, to show good management potential. She&#8217;s pushing forty. She scans through the memo, hitting the Page Down button at reasonably regular intervals, occasionally paging back up to pretend to reread some earlier section. The computer is going to notice all this. It approves of rereading. It&#8217;s a small thing, but over a decade or so this stuff really shows up on your work-habits summary.</p>
<p>This is pretty much what DocSend provides. And, despite the emphasis on sales etc., some of their customers are using this for internal HR training — which shifts the power asymmetry in how this technology is used from salespeople selling to companies (which can choose not to buy, etc.) to employers tracking their employees.<sup><a href="http://www.deaneckles.com/blog/700_docsend_in_snow_crash/#footnote_1_700" id="identifier_1_700" class="footnote-link footnote-identifier-link" title="Of course, there are some products that do this kind of thing. What distinguishes DocSend is how easy it makes it to add such personalized tracking to simple documents and that this is the primary focus of the product, unlike larger sales tool sets like ClearSlide.">2</a></sup></p>
<p>To conclude, it&#8217;s worth noting that, at least for a time, product managers at Facebook — Russ&#8217; job before starting DocSend — were required to read Snow Crash as part of their internal training. Though I don&#8217;t think the folks running PM bootcamp actually tracked whether their subordinates looked at each page.</p>
<ol class="footnotes">
<li id="footnote_0_700" class="footnote">I know it&#8217;s often forgotten because I&#8217;ve tried referring to the scene with many people who have read Snow Crash— or at least claim to have read it&#8230;</li>
<li id="footnote_1_700" class="footnote">Of course, there are some products that do this kind of thing. What distinguishes DocSend is how easy it makes it to add such personalized tracking to simple documents and that this is the primary focus of the product, unlike larger sales tool sets like ClearSlide.</li>
</ol>
]]></content:encoded>
					
					<wfw:commentRss>http://www.deaneckles.com/blog/700_docsend_in_snow_crash/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Exploratory data analysis: Our free online course</title>
		<link>http://www.deaneckles.com/blog/675_exploratory-data-analysis-our-free-online-course/</link>
					<comments>http://www.deaneckles.com/blog/675_exploratory-data-analysis-our-free-online-course/#comments</comments>
		
		<dc:creator><![CDATA[Dean Eckles]]></dc:creator>
		<pubDate>Wed, 19 Mar 2014 06:14:10 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://www.deaneckles.com/blog/?p=675</guid>

					<description><![CDATA[Moira Burke, Solomon Messing, Chris Saden, and I have created a new online course on exploratory data analysis (EDA) as part of Udacity&#8217;s &#8220;Data Science&#8221; track. It is designed to teach students how to explore data sets. Students learn how to do EDA using R and the visualization package ggplot. We emphasize the value of [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><a href="http://www.cs.cmu.edu/~./mkburke/">Moira Burke</a>, <a href="http://solomonmessing.wordpress.com/">Solomon Messing</a>, Chris Saden, and I have created <a href="https://www.udacity.com/course/ud651">a new online course on exploratory data analysis (EDA)</a> as part of Udacity&#8217;s &#8220;Data Science&#8221; track. It is designed to teach students how to explore data sets. Students learn how to do EDA using R and the visualization package <a href="http://ggplot2.org/">ggplot</a>.</p>
<p>We emphasize the value of EDA for building and testing intuitions about a data set, identifying problems or surprises in data, summarizing variables and relationships, and supporting other data analysis tasks. The course materials are all free, and you can also sign up for tutoring, grading (especially useful for the final project), and certification.</p>
<p>Between providing general advice on data analysis and visualization, stepping students through exactly how to produce particular plots, and reasoning about how the data can answer questions of interest, the course includes interviews with four of our amazing colleagues on the Facebook Data Science team:</p>
<ul>
<li>Aude Hofleitner shares the process behind <a href="https://www.facebook.com/notes/facebook-data-science/coordinated-migration/10151930946453859">research on coordinated migration</a> using hometown and &#8220;current city&#8221; Facebook data. (<a href="https://www.udacity.com/course/viewer#!/c-ud651/l-685569241/m-903038547">Udacity</a>, <a href="https://www.youtube.com/watch?v=7ihp6ofAJG8#t=13">YouTube</a>)</li>
<li><a href="http://www.ladamic.com/">Lada Adamic</a> gives an example of the importance of considering transformations of both x- and y-axes in an analysis from our forthcoming paper on the spread of rumors, memes, and urban legends on Facebook. (<a href="https://www.udacity.com/course/viewer#!/c-ud651/l-755618712/m-814098635">Udacity</a>, <a href="https://www.youtube.com/watch?v=Isa_FGQrvgs">YouTube</a>)</li>
<li><a href="http://seanjtaylor.com/">Sean Taylor</a> illustrates the <a href="http://scott.fortmann-roe.com/docs/BiasVariance.html">bias–variance tradeoff</a> and other modeling decisions in <a href="https://www.facebook.com/notes/facebook-data-science/the-emotional-highs-and-lows-of-the-nfl-season/10152033221418859">his work on sentiment expressed by NFL (American football) fans</a>. (<a href="https://www.udacity.com/course/viewer#!/c-ud651/l-701610057/m-870949219">Udacity</a>, <a href="https://www.youtube.com/watch?v=ahaxt6UKxQw">YouTube</a>)</li>
<li><a href="http://www-personal.umich.edu/~ebakshy/">Eytan Bakshy</a> provides advice and encouragement to people working to become a &#8220;data scientist&#8221; (whatever that is). (<a href="https://www.udacity.com/course/viewer#!/c-ud651/l-729069797/m-897250937">Udacity</a>, <a href="https://www.youtube.com/watch?v=FdkhUOtHIFg">YouTube</a>)</li>
</ul>
<p>One unique feature of this course is that one of the data sets we use is a &#8220;pseudo-Facebook&#8221; data set that Moira and I created to share many features with real Facebook data, but to not describe any particular real Facebook users or reveal certain kinds of information about aggregate behavior. Other data sets used in the course include two different data sets giving sale prices for diamonds and panel &#8220;scanner&#8221; data describing yogurt purchases.</p>
<p>It was a fascinating and novel process putting together this course. We scripted almost everything in detail in advance &#8212; before any filming started &#8212; using first outlines, then drafts using <a href="https://www.rstudio.com/ide/docs/r_markdown">Markdown in R</a> with <a href="http://yihui.name/knitr/">knitr</a>, and then more detailed scripts with Udacity-specific notation for all the different shots and interspersed quizzes. I think this is part of what leads <a href="http://junkcharts.typepad.com/junk_charts/2014/03/learn-eda-exploratory-data-analysis-from-the-experts.html">Kaiser Fung to write:</a></p>
<p style="padding-left: 30px;"><em>The course is designed from the ground up for online instruction, and it shows. If you have tried other online courses, you will immediately notice the difference in quality.</em></p>
<p>Check out <a href="https://www.udacity.com/course/ud651">the course</a> and let me know what you think — we&#8217;re still incorporating feedback.</p>
]]></content:encoded>
					
					<wfw:commentRss>http://www.deaneckles.com/blog/675_exploratory-data-analysis-our-free-online-course/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
	</channel>
</rss>
