<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Symposium Magazine &#187; RSS</title>
	<atom:link href="http://www.symposium-magazine.com/category/rss-content/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.symposium-magazine.com</link>
	<description>Where Academia Meets Public Life</description>
	<lastBuildDate>Fri, 19 Dec 2014 12:59:17 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.9.2</generator>
	<item>
		<title>Science Journalism and the Art of Expressing Uncertainty</title>
		<link>http://www.symposium-magazine.com/science-journalism-and-the-art-of-expressing-uncertainty/</link>
		<comments>http://www.symposium-magazine.com/science-journalism-and-the-art-of-expressing-uncertainty/#comments</comments>
		<pubDate>Tue, 31 Dec 2013 10:00:13 +0000</pubDate>
		<dc:creator><![CDATA[admin]]></dc:creator>
				<category><![CDATA[August 2013 Edition]]></category>
		<category><![CDATA[Current Issue]]></category>
		<category><![CDATA[Featured Article]]></category>
		<category><![CDATA[RSS]]></category>

		<guid isPermaLink="false">http://symposium-magazine.com/?p=6651</guid>
		<description><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/08/journos1-150x150.jpg" class="attachment-thumbnail wp-post-image" alt="Journalists are at work in a press room" style="margin-bottom: 15px;" /></div>It is all too easy for unsupported claims to get published in scientific publications. How can journalists address this? &#160; Note: This piece was originally published on August 4, 2013. Journalism is filled with examples of erroneous reporting turning into received opinion when reporters, editors, and the public take a story at face value after it came from a generally trusted source. Consider, for example, the claims of Iraq’s weapons of mass destruction, or the various public and corporate scandals where authorities ranging from government officials to the chairman of General Electric are taken at their word. As a scientist, I am concerned about the publication and promotion of speculative research, but I also believe that journalists can address this problem. Indeed, the traditional journalistic tool of interviewing knowledgeable outsiders can help if the focus is on the aspects of uncertainty associated with any scientific claim. Modern science is, by and large, a set of research directions rather than a collection of nuggets of established truths. In science reporting, the trusted sources are respected journals that actually are not infallible and often publish thought-provoking but speculative claims as settled truth. The story continues from there: The journal or the authors themselves promote the work in the news media, and established outlets report the claims without question. The journalists involved are implicitly following an assumption: If an article is published in a well regarded publication, treat it as true. In fact, this is a dangerous supposition. 
Just to cite a few recent examples, news media have reported a finding that African countries are poor because they have too much genetic diversity (published in the American Economic Review); that parents who pay for college will actually encourage their children to do worse in class (American Journal of Sociology); and that women&#8217;s political attitudes show huge variation across the menstrual cycle (Psychological Science). Each of these topics is, in its own way, fascinating, but the particular studies have serious flaws, either in the design of their data collection (the political attitudes study), the analysis (the study of college grades), or the interpretation of their data analysis (the genetic diversity study). Flawed research can still contribute in some way toward our understanding—remember our view of science as a set of research directions—but journalists can mislead their readers if they present such claims unquestioningly. The statistical errors in these published papers are important but subtle—subtle enough so that all three were published in the top journals in their fields. Papers such as these represent a fundamental difficulty in science reporting. On one hand, they are flawed in the sense that their conclusions are not fully supported by their data (at least, according to me and various other observers); on the other, we cannot expect a typical science reporter on his or her own to catch methodological errors that escaped several peer reviewers as well as the articles’ authors. My goal here is to suggest a strategy for science writers to express uncertainty about published studies without resorting to meaningless relativism. I will get to my recommendations in the context of a paper from 2007 by sociologist Satoshi Kanazawa on the correlation between attractiveness of parents and sex of children. Some detail is required here to understand the statistical problems with this paper. 
But my ultimate reason for talking about this particular example is that it demonstrates the challenge of reporting on statistical claims. This study was reported in what I view as an inappropriately uncritical way in a leading outlet for science journalism, and I will address how this reporting could be improved without requiring some extraordinary level of statistical expertise on the part of the journalist. I brought this case up a few years ago at a meeting of the National Association of Science Writers, when I spoke on the challenges of statistical inference for small effects. Using a dataset of 3,000 parents, Kanazawa found that the children of attractive parents were more likely to be girls, compared to the children of less attractive parents. The correlation was &#8220;statistically significant&#8221;—that is, there was less than a 5% chance of seeing a difference this extreme if there were no correlation in the general population. This result, along with some more general claims about evolutionary psychology, was published in the Journal of Theoretical Biology and received wide media exposure. But Kanazawa’s claims were not supported by the data in the way claimed in his paper. Simply put, his sample size was so small that it would be essentially impossible to learn anything about the correlation between parental beauty and child’s sex in the population. This may sound surprising, given that a sample size of 3,000 seems large. But it is not given the scientific context. There is a vast scientific literature on the human sex ratio, and any plausible differences in the probability of a female birth, comparing beautiful and ugly parents, would have to be very small: on the order of one-third of a percentage point or less. For example, it could be that the probability of having a girl is 48.9% for attractive parents and 48.7% for unattractive parents. It turns out that you would need a sample size far greater than 3,000 to detect such a small effect. 
To develop your intuition on this, consider national opinion polls, which typically interview about 1,500 people and have a margin of error of three percentage points either way. If you crunch the numbers, you would find that you need a representative sample of hundreds of thousands of people to detect differences of less than one-third of a percentage point. So from a mathematical standpoint, Kanazawa’s study never had a chance to provide an adequate estimate for what it was purporting to estimate. What about the claim of statistical significance, namely, that a pattern as extreme as in the data would occur by chance less than 5% of the time? The answer is that events that are somewhat rare will happen...]]></description>
				<content:encoded><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/08/journos1-150x150.jpg" class="attachment-thumbnail wp-post-image" alt="Journalists are at work in a press room" style="margin-bottom: 15px;" /></div><p><em>It is all too easy for unsupported claims to get published in scientific publications. How can journalists address this?<span id="more-6651"></span></em></p>
<p>&nbsp;</p>
<p><em>Note: This piece was originally published on August 4, 2013.</em></p>
<p>Journalism is filled with examples of erroneous reporting turning into received opinion when reporters, editors, and the public take a story at face value after it came from a generally trusted source. Consider, for example, the claims of Iraq’s weapons of mass destruction, or the various public and corporate scandals where authorities ranging from government officials to the chairman of General Electric are taken at their word.</p>
<p>As a scientist, I am concerned about the publication and promotion of speculative research, but I also believe that journalists can address this problem. Indeed, the traditional journalistic tool of interviewing knowledgeable outsiders can help if the focus is on the aspects of uncertainty associated with any scientific claim. Modern science is, by and large, a set of research directions rather than a collection of nuggets of established truths.</p>
<p>In science reporting, the trusted sources are respected journals. But these journals are not infallible, and they often publish thought-provoking but speculative claims as settled truth. The story continues from there: The journal or the authors themselves promote the work in the news media, and established outlets report the claims without question. The journalists involved are implicitly following an assumption: If an article is published in a well-regarded publication, treat it as true. In fact, this is a dangerous supposition.</p>
<p>Just to cite a few recent examples, news media have reported a finding that African countries are poor because they have too much genetic diversity (published in the <em>American Economic Review</em>); that parents who pay for college will actually encourage their children to do worse in class (<em>American Journal of Sociology</em>); and that women&#8217;s political attitudes show huge variation across the menstrual cycle (<em>Psychological Science</em>). Each of these topics is, in its own way, fascinating, but the particular studies have serious flaws, either in the design of their data collection (the political attitudes study), the analysis (the study of college grades), or the interpretation of their data analysis (the genetic diversity study). Flawed research can still contribute in some way toward our understanding—remember our view of science as a set of research directions—but journalists can mislead their readers if they present such claims unquestioningly.</p>
<p>The statistical errors in these published papers are important but subtle—subtle enough so that all three were published in the top journals in their fields. Papers such as these represent a fundamental difficulty in science reporting. On one hand, they are flawed in the sense that their conclusions are not fully supported by their data (at least, according to me and various other observers); on the other, we cannot expect a typical science reporter on his or her own to catch methodological errors that escaped several peer reviewers as well as the articles’ authors. My goal here is to suggest a strategy for science writers to express uncertainty about published studies without resorting to meaningless relativism.</p>
<p>I will get to my recommendations in the context of a paper from 2007 by sociologist Satoshi Kanazawa on the correlation between attractiveness of parents and sex of children. Some detail is required here to understand the statistical problems with this paper. But my ultimate reason for discussing this particular example is that it demonstrates the challenge of reporting on statistical claims. This study was reported in what I view as an inappropriately uncritical way in a leading outlet for science journalism, and I will address how this reporting could be improved without requiring some extraordinary level of statistical expertise on the part of the journalist.</p>
<p>I brought this case up a few years ago at a meeting of the National Association of Science Writers, when I spoke on the challenges of statistical inference for small effects. Using a dataset of 3,000 parents, Kanazawa found that the children of attractive parents were more likely to be girls, compared to the children of less attractive parents. The correlation was &#8220;statistically significant&#8221;—that is, there was less than a 5% chance of seeing a difference this extreme if there were no correlation in the general population. This result, along with some more general claims about evolutionary psychology, was published in the <em>Journal of Theoretical Biology</em> and received wide media exposure.</p>
<p>But Kanazawa’s claims were not supported by the data in the way his paper claimed. Simply put, his sample size was so small that it would be essentially impossible to learn anything about the correlation between parental beauty and child’s sex in the population. This may sound surprising, given that a sample size of 3,000 seems large. But it is not, given the scientific context.</p>
<p>There is a vast scientific literature on the human sex ratio, and any plausible differences in the probability of a female birth, comparing beautiful and ugly parents, would have to be very small: on the order of one-third of a percentage point or less. For example, it could be that the probability of having a girl is 48.9% for attractive parents and 48.7% for unattractive parents. It turns out that you would need a sample size far greater than 3,000 to detect such a small effect. To develop your intuition on this, consider national opinion polls, which typically interview about 1,500 people and have a margin of error of three percentage points either way. If you crunch the numbers, you would find that you need a representative sample of hundreds of thousands of people to detect differences of less than one-third of a percentage point. So from a mathematical standpoint, Kanazawa’s study never had a chance to provide an adequate estimate for what it was purporting to estimate.</p>
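<p>These back-of-the-envelope numbers are easy to check. The sketch below is mine, not from the original study; it uses the standard normal-approximation formulas for a proportion's margin of error and for the sample size needed to detect a given difference between two proportions with 80% power:</p>

```python
from math import sqrt, ceil

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion."""
    return z * sqrt(p * (1 - p) / n)

def n_per_group(diff, p=0.49, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per group needed to detect a difference
    of `diff` between two proportions, with 80% power at the 5% level."""
    return ceil(2 * (z_alpha + z_beta) ** 2 * p * (1 - p) / diff ** 2)

# A 1,500-person poll: margin of error of about 2.5 percentage points.
print(round(100 * margin_of_error(1500), 1))

# Detecting a 0.3-point difference (e.g., 48.9% vs. 48.7% girls)
# requires over 400,000 people per group -- far beyond a sample of 3,000.
print(n_per_group(0.003))
```

<p>The required sample grows with the inverse square of the difference being estimated, which is why a study of 3,000 parents "never had a chance" to pin down an effect of a third of a percentage point.</p>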
<p>What about the claim of statistical significance, namely, that a pattern as extreme as in the data would occur by chance less than 5% of the time? The answer is that events that are somewhat rare will happen if you look hard enough. In this case, there were various ways to slice the data. For example, in the survey, attractiveness was measured on a scale of one to five. Kanazawa&#8217;s statistically significant difference was a comparison between the most beautiful people (category 5), compared to categories 1-4. But he could have compared categories 4-5 to 1-3, or compared 3-5 to 1-2. Or, perhaps more reasonably, he could have fit a model called a linear regression, which can be considered as an average of all these comparisons. It turns out that, of all these, the comparison he looked at happened to be the one that was largest in the data at hand, and this comparison was, on its own, statistically significant.</p>
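<p>The multiple-comparisons problem described above can be demonstrated by simulation. The following sketch is an illustration of the general point, not a reanalysis of Kanazawa's data: it generates pure-noise datasets on a 1-to-5 scale and checks how often at least one of three plausible ways of splitting the scale comes out "statistically significant":</p>

```python
import random
from math import sqrt, erf

def two_prop_p(g1, g2):
    """Two-sided p-value for a two-sample z-test on proportions.
    g1, g2 are lists of booleans (True = girl)."""
    n1, n2 = len(g1), len(g2)
    p1, p2 = sum(g1) / n1, sum(g2) / n2
    pooled = (sum(g1) + sum(g2)) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = abs(p1 - p2) / se
    # two-tailed: 2 * P(Z > |z|) under the standard normal
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

def false_positive_rate(n=3000, trials=400, seed=1):
    """Share of pure-noise datasets in which at least one of three
    splits of the 1-5 attractiveness scale gives p < .05."""
    rng = random.Random(seed)
    cuts = [{5}, {4, 5}, {3, 4, 5}]  # "5 vs. rest", "4-5 vs. rest", ...
    hits = 0
    for _ in range(trials):
        # sex is independent of the attractiveness score by construction
        data = [(rng.randint(1, 5), rng.random() < 0.49) for _ in range(n)]
        for cut in cuts:
            hi = [girl for score, girl in data if score in cut]
            lo = [girl for score, girl in data if score not in cut]
            if two_prop_p(hi, lo) < 0.05:
                hits += 1
                break
    return hits / trials

print(false_positive_rate())  # typically around 10%, above the nominal 5%
```

<p>Each individual comparison has a 5% false-positive rate, but the freedom to pick whichever comparison happens to look largest inflates the chance of finding "significance" in noise.</p>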
<p>At one level, we can call this a mistake. And this mistake did come under scrutiny, including from me; I later published a letter in the journal and an article in the magazine <em>American Scientist</em> expanding on the above criticisms. But broadly speaking, the quirky claim of an association between attractiveness and sex ratio received positive press attention at the time. For example, the Freakonomics Blog reported that this study suggests:</p>
<p>“There are more beautiful women in the world than there are handsome men. Why? Kanazawa argues it’s because good-looking parents are 36 percent more likely to have a baby daughter as their first child than a baby son—which suggests, evolutionarily speaking, that beauty is a trait more valuable for women than for men. The study was conducted with data from 3,000 Americans, derived from the National Longitudinal Study of Adolescent Health, and was published in the <em>Journal of Theoretical Biology</em>.”</p>
<p>Actually, from a quantitative perspective, the claim contradicts what is known about variation in the human sex ratio from the scientific literature. A difference of 36 percent is literally 100 times larger than anything that could reasonably be expected in the population.</p>
<p>As I said to the audience of science writers, this story demonstrates the challenges of reporting on technical work. It is the sort of error that can, and does, make its way past the author, peer reviewers, journal editors, and into the news media. This sort of thing happens—none of us is infallible—but it is worth thinking about how the news media could play a more active and constructive role in the scientific conversation.</p>
<p>How could journalists do more? This is where the importance of expert feedback comes in. Just as a careful journalist runs the veracity of a scoop by as many reliable sources as possible, he or she should interview as many experts as possible before reporting on a scientific claim. The point is not necessarily to interview an opponent of the study, or to present &#8220;both sides&#8221; of the story, but rather to talk to independent scholars, get their views, and troubleshoot as much as possible. The experts might very well endorse the study, but even then they are likely to add more nuance and caveats. In the Kanazawa study, for example, any expert in sex ratios would have questioned a claim of a 36% difference—or even, for that matter, a 3.6% difference. It is true that the statistical concerns—namely, the small sample size and the multiple comparisons—are a bit subtle for the average reader. But any sort of reality check would have helped by pointing out where this study took liberties.</p>
<p>The point is not that we need reflexive skepticism, or that every story becomes a controversy. Rather, journalists should remember to put any dramatic claims in context, given that publication in a leading journal does not by itself guarantee that work is free of serious error.</p>
<p>So what is new here? Journalists, who already know about the importance of interviewing experts, can bring their training in contextualizing stories to draw a clearer picture of the uncertainty that underlies so much scientific endeavor. We now live in a world of post-publication review—existing peer review serves a function but is not complete—and news reporting can be part of this. And it should not be a problem for a journalist to find these experts; many scientists would be flattered to be quoted in the press.</p>
<p>If journalists go slightly outside the loop &#8212; for example, asking a cognitive psychologist to comment on the work of a social psychologist, or asking a computer scientist for views on the work of a statistician &#8212; they have a chance to get a broader view. To put it another way: some of the problems of hyped science arise from the narrowness of subfields, but you can take advantage of this by moving to a neighboring subfield to get an enhanced perspective.</p>
<p>Just as is the case with so many other beats, science journalism has to adhere to the rules of solid reporting and respect the need for skepticism. And this skepticism should not be exercised for the sake of manufacturing controversy—two sides clashing for the sake of getting attention—but for the sake of conveying to readers a sense of uncertainty, which is central to the scientific process. The point is not that all articles are fatally flawed, but that many newsworthy studies are coupled with press releases that, quite naturally, downplay uncertainty.</p>
<p>For an example of the value of critical science reporting, consider the recent discussion of the data-analysis choices of economists Carmen Reinhart and Kenneth Rogoff in their <a title="Link to Gelman post on Reinhart and Rogoff" href="http://andrewgelman.com/2013/04/16/memo-to-reinhart-and-rogoff-i-think-its-best-to-admit-your-errors-and-go-on-from-there" target="_blank">now-famous 2010 paper</a> on public debt and economic growth. In this case, the lively discussion came only after some critics released a paper with detailed refutations of the analysis of Reinhart and Rogoff. The bigger point, though, is that when reporters recognize the uncertainty present in all scientific conclusions, I suspect they will be more likely to ask interesting questions and employ their journalistic skills.</p>
<p>&nbsp;</p>
]]></content:encoded>
			<wfw:commentRss>http://www.symposium-magazine.com/science-journalism-and-the-art-of-expressing-uncertainty/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Game Theory Is Useful, Except When It Is Not</title>
		<link>http://www.symposium-magazine.com/game-theory-is-useful-except-when-it-is-not-ariel-d-procaccia/</link>
		<comments>http://www.symposium-magazine.com/game-theory-is-useful-except-when-it-is-not-ariel-d-procaccia/#comments</comments>
		<pubDate>Tue, 31 Dec 2013 01:09:55 +0000</pubDate>
		<dc:creator><![CDATA[admin]]></dc:creator>
				<category><![CDATA[Back Issues]]></category>
		<category><![CDATA[July 2013 Edition]]></category>
		<category><![CDATA[RSS]]></category>
		<category><![CDATA[Symposium Magazine]]></category>
		<category><![CDATA[Ariel Procaccia]]></category>
		<category><![CDATA[computer science]]></category>
		<category><![CDATA[game theory]]></category>

		<guid isPermaLink="false">http://symposium-magazine.com/symposium_magazine/?p=30</guid>
		<description><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/07/nash-150x150.jpeg" class="attachment-thumbnail wp-post-image" alt="Nobel Laureates Beijing Forum 2005" style="margin-bottom: 15px;" /></div>The study of strategic interactions is gaining popularity across disciplines, but that does not mean its relevance is universal. &#160; Note: This article was originally published on July 8, 2013. Although game theory is now a household name, few people realize that game theorists do not actually study “games” — at least not in the usual sense of the word. Rather, we interpret a “game” as a strategic interaction between two or more rational “players.” These players can be people, animals, or computer programs; the interaction can be cooperative, competitive, or somewhere in between. Game theory is a mathematical theory and, as such, provides a slew of rigorous models of interaction and theorems to specify which outcomes are predicted by any given model. Sounds useful, doesn’t it? After all, many people are familiar with one of game theory’s most famous test cases: the Cold War. It is well-known that game theory informed U.S. nuclear strategy, and indeed, the interaction between the two opposing sides — NATO and the Warsaw Pact — can be modeled as the following game, which is a variation of the famous “Prisoner’s Dilemma.” Both sides can choose to either build a nuclear arsenal or avoid building one. From each side’s point of view, not building an arsenal as the other side builds one is the worst possible outcome, because it leads to strategic inferiority and, potentially, destruction. By the same token, from each side’s point of view, building an arsenal while the other side avoids building one is the best possible outcome. However, if both sides avoid building an arsenal, or both sides build one, neither side has an advantage over the other. 
Both sides prefer the former option because it frees them from the enormous costs of a nuclear arms race. Strangely enough, though, the only rational strategy is to build an arsenal, whether the other side builds one (in which case you are saving yourself from possible annihilation) or does not (in which case you are gaining the strategic upper hand). This analysis gave rise to the doctrine of MAD: Mutually Assured Destruction. The simple idea is that the use of nuclear weapons by one side would result in full-scale nuclear war and the complete annihilation of both sides. Given that nuclear stockpiling is unavoidable, MAD at least guaranteed that no side could afford to attack the other. So it would seem that game theory has saved the world from thermonuclear war. But does one really need to be a game theorist to come up with these insights? Game theory tells us, for example, that different forms of stable outcomes exist in a wide variety of “games” and computational game theory gives us tools to compute them. But the type of strategic reasoning underlying Cold War policy does not directly leverage deep mathematics — it is just common sense. More generally, one can argue that game theory — as a mathematical theory — cannot provide concrete advice in real-life situations. In fact, one of the most forceful advocates of this point is the well-known game theorist Ariel Rubinstein, who claims that “applications” of game theory are nothing more than attaching labels to real-life situations. 
In an article that rehashes his well-known views, Rubinstein cites the euro zone crisis, which some say is a version of the Prisoner’s Dilemma, to argue that “such statements include nothing more profound than saying that the euro crisis is like a Greek tragedy.” In Rubinstein’s view, game theory is first and foremost a mathematical theory with a “nearly magical connection between the symbols and the words.” By contrast, he contends, for the purpose of application, we should see game theory as a “collection of fables and proverbs” that can provide an interesting perspective on real-life situations but not give specific recommendations. Michael Chwe, a professor of political science at the University of California, Los Angeles, offers a different take, arguing in his latest book that novelist Jane Austen is, in fact, a game theorist. After describing a scene from Mansfield Park, Chwe writes: “With this episode, Austen illustrates how in some situations, not having a choice can be better. This is an unintuitive result well known in game theory.” Another of Austen’s game-theoretic insights has explicit applications: “When a high-status person interacts with a low-status person, the high-status person has difficulty understanding the low-status person as strategic. … This can help us understand why, for example, after the U.S. invaded Iraq, the resulting Iraqi insurgency came as a complete surprise to U.S. leaders.” To Chwe, Austen studied the principles of strategic interaction on the level of Rubinstein’s “fables and proverbs.” But if we accept his conclusion that Austen is indeed a game theorist, then these fables and proverbs lie at the core of game theory, rather than at game theory’s periphery, where it interfaces with popular culture. Chwe makes a convincing case that Austen was keenly interested in studying how people manipulate each other &#8212; and, indeed, that is one of the things that make Austen a great writer. 
But that does not necessarily make her a great game theorist. In fact, as a mathematical and scientific theory, game theory often falls short when it is applied to complex situations like international relations or parliamentary balance of power. However, in some situations, game theory can be useful in the scientific, prescriptive sense. For example, game theory is useful for, well, playing games. Modern software agents that play games like poker (such as the ones from Tuomas Sandholm’s group at Carnegie Mellon University) do in fact use rather advanced game theory, augmented with clever equilibrium-computation algorithms. Game theory actually works better when the players are computer programs, because these are completely rational, unlike human players, who can be unpredictable. Game theory is also useful for designing auctions. To give a concrete example from my own experience, consider the surprisingly lively Pittsburgh real-estate market, where multiple buyers typically submit simultaneous bids for one house without seeing each...]]></description>
				<content:encoded><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/07/nash-150x150.jpeg" class="attachment-thumbnail wp-post-image" alt="Nobel Laureates Beijing Forum 2005" style="margin-bottom: 15px;" /></div><p><em>The study of strategic interactions is gaining popularity across disciplines, but that does not mean its relevance is universal.</em></p>
<p><span id="more-30"></span></p>
<p>&nbsp;</p>
<p><em>Note: This article was originally published on July 8, 2013.</em></p>
<p>Although game theory is now a household name, few people realize that game theorists do not actually study “games” — at least not in the usual sense of the word. Rather, we interpret a “game” as a strategic interaction between two or more rational “players.” These players can be people, animals, or computer programs; the interaction can be cooperative, competitive, or somewhere in between. Game theory is a mathematical theory and, as such, provides a slew of rigorous models of interaction and theorems to specify which outcomes are predicted by any given model.</p>
<p>Sounds useful, doesn’t it? After all, many people are familiar with one of game theory’s most famous test cases: the Cold War. It is well-known that game theory informed U.S. nuclear strategy, and indeed, the interaction between the two opposing sides — NATO and the Warsaw Pact — can be modeled as the following game, which is a variation of the famous “Prisoner’s Dilemma.” Both sides can choose to either build a nuclear arsenal or avoid building one. From each side’s point of view, not building an arsenal as the other side builds one is the worst possible outcome, because it leads to strategic inferiority and, potentially, destruction. By the same token, from each side’s point of view, building an arsenal while the other side avoids building one is the best possible outcome.</p>
<p>However, if both sides avoid building an arsenal, or both sides build one, neither side has an advantage over the other. Both sides prefer the former option because it frees them from the enormous costs of a nuclear arms race. Strangely enough, though, the only rational strategy is to build an arsenal, whether the other side builds one (in which case you are saving yourself from possible annihilation) or does not (in which case you are gaining the strategic upper hand). This analysis gave rise to the doctrine of MAD: Mutually Assured Destruction. The simple idea is that the use of nuclear weapons by one side would result in full-scale nuclear war and the complete annihilation of both sides. Given that nuclear stockpiling is unavoidable, MAD at least guaranteed that no side could afford to attack the other.</p>
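<p>The arms-race game described above can be written down in a few lines. The payoff numbers below are illustrative ordinal rankings of my own choosing, not figures from the article; only their order matters, and that order makes "build" the best reply no matter what the other side does:</p>

```python
# Payoffs as (row player, column player); higher is better.
# 4 = build while the other disarms, 3 = both disarm,
# 1 = both build, 0 = disarm while the other builds.
PAYOFF = {
    ("disarm", "disarm"): (3, 3),
    ("disarm", "build"):  (0, 4),
    ("build",  "disarm"): (4, 0),
    ("build",  "build"):  (1, 1),
}

def best_response(opponent_move):
    """The row player's best reply to a fixed move by the other side."""
    return max(("disarm", "build"),
               key=lambda mine: PAYOFF[(mine, opponent_move)][0])

# Whatever the other side does, building is the better reply:
print(best_response("disarm"), best_response("build"))  # build build
```

<p>Because building is each side's best reply to anything the other does, mutual armament is the game's unique equilibrium, even though both sides would prefer mutual disarmament.</p>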
<p>So it would seem that game theory has saved the world from thermonuclear war. But does one really need to be a game theorist to come up with these insights? Game theory tells us, for example, that different forms of stable outcomes exist in a wide variety of “games” and computational game theory gives us tools to compute them. But the type of strategic reasoning underlying Cold War policy does not directly leverage deep mathematics — it is just common sense.</p>
<p>More generally, one can argue that game theory — as a mathematical theory — cannot provide concrete advice in real-life situations. In fact, one of the most forceful advocates of this point is the well-known game theorist Ariel Rubinstein, who claims that “applications” of game theory are nothing more than attaching labels to real-life situations. In an <a href="http://www.faz.net/aktuell/feuilleton/debatten/game-theory-how-game-theory-will-solve-the-problems-of-the-euro-bloc-and-stop-iranian-nukes-12130407.html">article</a> that rehashes his well-known views, Rubinstein cites the euro zone crisis, which some say is a version of the Prisoner’s Dilemma, to argue that “such statements include nothing more profound than saying that the euro crisis is like a Greek tragedy.” In Rubinstein’s view, game theory is first and foremost a mathematical theory with a “nearly magical connection between the symbols and the words.” By contrast, he contends, for the purpose of application, we should see game theory as a “collection of fables and proverbs” that can provide an interesting perspective on real-life situations but not give specific recommendations.</p>
<p>Michael Chwe, a professor of political science at the University of California, Los Angeles, offers a different take, arguing in his latest book that novelist Jane Austen is, in fact, a game theorist. After describing a scene from <i>Mansfield Park</i>, Chwe writes: “With this episode, Austen illustrates how in some situations, not having a choice can be better. This is an unintuitive result well known in game theory.” Another of Austen’s game-theoretic insights has explicit applications: “When a high-status person interacts with a low-status person, the high-status person has difficulty understanding the low-status person as strategic. … This can help us understand why, for example, after the U.S. invaded Iraq, the resulting Iraqi insurgency came as a complete surprise to U.S. leaders.”</p>
<p>To Chwe, Austen studied the principles of strategic interaction on the level of Rubinstein’s “fables and proverbs.” But if we accept his conclusion that this makes Austen a game theorist, then these fables and proverbs lie at the core of game theory, rather than at its periphery, where it interfaces with popular culture. Chwe makes a convincing case that Austen was keenly interested in studying how people manipulate each other &#8212; and, indeed, that is one of the things that make Austen a great writer. But that does not necessarily make her a great game theorist.</p>
<p>In fact, as a mathematical and scientific theory, game theory often falls short when it is applied to complex situations like international relations or parliamentary balance of power. However, in some situations, game theory can be useful in the scientific, prescriptive sense. For example, game theory is useful for, well, playing games. Modern software agents that play games like poker (such as the ones from Tuomas Sandholm’s group at Carnegie Mellon University) do in fact use rather advanced game theory, augmented with clever equilibrium-computation algorithms. Game theory actually works better when the players are computer programs, because these are completely rational, unlike human players, who can be unpredictable.</p>
<p>Game theory is also useful for designing auctions. To give a concrete example from my own experience, consider the surprisingly lively Pittsburgh real-estate market, where multiple buyers typically submit simultaneous bids for one house without seeing each other’s offers. The house is sold to the highest bidder, and the price is equal to the highest bid. In this procedure, which is called a first-price auction, buyers try to second-guess each other, and their bids are normally lower than the price they are actually willing to pay.</p>
<p>Suppose that, instead, the seller chooses to sell the house to the highest bidder for a price that is equal to the second-highest bid. This seemingly far-fetched idea is known as the second-price auction. In a second-price auction, one can never benefit from submitting a bid that is different from one’s true value for the house. Indeed, intuitively, a buyer’s bid does not affect the price he pays if he wins, so the buyer’s bid should be no lower than his true value in order to maximize his chances of winning. But bidding a value that is higher than the buyer’s true value will change the outcome only if the second-highest bid is higher than the buyer’s true value (otherwise, the buyer could have won by bidding his true value), in which case the buyer does not want to win the auction, and he overpays. In game-theoretic terms, the second-price auction is incentive compatible.</p>
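<p>The incentive-compatibility claim can be checked by brute force. The sketch below is a simulation, not a formal proof: it runs many random sealed-bid second-price auctions and verifies that no deviation from truthful bidding ever earns more than bidding one&#8217;s true value.</p>

```python
import random

def second_price_outcome(my_bid, my_value, other_bids):
    """My utility in a sealed-bid second-price auction (ties broken against me)."""
    top_other = max(other_bids)
    if my_bid > top_other:
        return my_value - top_other  # I win and pay the second-highest bid
    return 0.0                       # I lose and pay nothing

random.seed(0)
for _ in range(1000):
    value = random.uniform(0, 100)
    others = [random.uniform(0, 100) for _ in range(3)]
    truthful = second_price_outcome(value, value, others)
    # Over-bidding, under-bidding, and random bids never beat truthfulness.
    for deviation in (random.uniform(0, 100), value * 0.5, value * 1.5):
        assert second_price_outcome(deviation, value, others) <= truthful + 1e-9
```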
<p>The beautiful idea underlying the second-price auction has inspired similar insights that guide the design of sophisticated auctions for goods worth billions of dollars, such as rights to transmit over bands of the electromagnetic spectrum. And while this application of game theory seems fundamentally different from playing poker, the two are in fact similar: both involve interactions taking place in closed, controlled environments, where the rules of the game are specified exactly.</p>
<p>But not all of game theory’s success stories are like that. An especially exciting example comes from Milind Tambe’s group at the University of Southern California, a project I have collaborated on. Their work models security situations as a game between a defender (e.g., airport security) and an attacker (e.g., a terrorist organization or a smuggling ring). The defender’s strategy is a randomized deployment of its resources (e.g., cameras, patrols) specifying how likely it is that each of its resources would defend each of the possible targets (e.g., airport terminals).</p>
<p>The defender moves first by committing to a security strategy, which the attacker then observes via surveillance. The attacker must choose which target to pursue knowing the likelihood that it will be defended, but without knowing whether a specific target is defended on the actual day of attack. The defender must therefore anticipate the attacker’s response and commit to a strategy that guarantees the best outcome by deploying resources randomly and broadly. This forces the attacker to be less effective.</p>
<p>Similar game-theoretic models have been around since the 1960s, but it is only in the last decade that researchers have begun to understand the computational aspects of these games. Tambe and his group have gone as far as implementing and deploying algorithms that prescribe a security policy by computing the defender’s optimal strategy. These algorithms are currently in use by the Los Angeles International Airport, the U.S. Coast Guard, and the Federal Air Marshal Service.</p>
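<p>The core computation in such systems can be illustrated with a toy model. The sketch below is a deliberately simplified stand-in for the deployed algorithms, with two targets and made-up payoff numbers: the defender searches for the coverage probability that minimizes the attacker&#8217;s best-response payoff.</p>

```python
# Toy two-target security game (all payoff numbers are illustrative assumptions).
# The defender commits to covering target A with probability p (and B with 1-p);
# the attacker observes p and attacks whichever target maximizes expected payoff.
ATTACK_PAYOFF = {"A": 10.0, "B": 4.0}  # attacker's gain if the target is undefended
CAUGHT_PAYOFF = -5.0                   # attacker's payoff if caught

def attacker_expected(target, coverage):
    p = coverage[target]
    return p * CAUGHT_PAYOFF + (1 - p) * ATTACK_PAYOFF[target]

def best_defender_strategy(steps=10000):
    """Grid-search the coverage split minimizing the attacker's best response."""
    best_p, best_val = 0.0, float("inf")
    for i in range(steps + 1):
        p = i / steps
        coverage = {"A": p, "B": 1 - p}
        attacker_best = max(attacker_expected(t, coverage) for t in ("A", "B"))
        if attacker_best < best_val:
            best_p, best_val = p, attacker_best
    return best_p, best_val

p, v = best_defender_strategy()
# The optimal strategy hedges: the high-value target is covered more, but not always.
assert 0.5 < p < 1.0
```

In this toy instance the optimum equalizes the attacker's payoff across targets, which is why randomization (rather than always guarding the more valuable target) is what makes the attacker less effective.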
<p>These success stories explain game theory’s relevance, but not its huge popularity. The latest edition of a massive open online course (MOOC) on game theory, taught by professors from Stanford and the University of British Columbia, had 130,000 registered students. Are many of these students hoping that game theory will help them in their jobs or their daily lives? If so, they are in for a disappointment. Game theory is typically not useful, but when it is, it shines.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.symposium-magazine.com/game-theory-is-useful-except-when-it-is-not-ariel-d-procaccia/feed/</wfw:commentRss>
		<slash:comments>15</slash:comments>
		</item>
		<item>
		<title>Still Waiting for Change</title>
		<link>http://www.symposium-magazine.com/still-waiting-for-change/</link>
		<comments>http://www.symposium-magazine.com/still-waiting-for-change/#comments</comments>
		<pubDate>Tue, 31 Dec 2013 01:06:32 +0000</pubDate>
		<dc:creator><![CDATA[admin]]></dc:creator>
				<category><![CDATA[August 2013 Edition]]></category>
		<category><![CDATA[RSS]]></category>
		<category><![CDATA[Symposium Magazine]]></category>

		<guid isPermaLink="false">http://symposium-magazine.com/?p=6707</guid>
		<description><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/08/waitress1-150x150.jpg" class="attachment-thumbnail wp-post-image" alt="Denny&#039;s Offers Free Breakfast In Effort To Aggressively Promote Sales" style="margin-bottom: 15px;" /></div>Economists are ignoring a class of workers whose wages have been frozen for decades. &#160; Note: This article was originally published on August 5, 2013. Since its inception, the minimum wage has provoked fiery debate. Indeed, when the Fair Labor Standards Act (FLSA) set the first federal minimum wage at $0.25 in 1938, the National Association of Manufacturers deemed it “a step in the direction of communism, Bolshevism, fascism, and Nazism.” Today, it remains a politically divisive issue. While many Democrats, including President Barack Obama, are calling for an increase at the federal level, numerous Republicans, including centrists in the party, would abolish it altogether. Amid political stalemate, low-wage workers have been galvanized into action, as seen by recent strikes across the country, from Macy’s to McDonald’s. The minimum wage happens to be one of the most studied topics by economists and policy analysts. Yet a puzzle remains: there is scant interest among economists – including those who study labor economics – and the broader policy community on the second major tier of the minimum wage system, the “sub-minimum” wage received by tipped workers. (A search among the top ten economic journals, for example, produces no articles on this subject in the last ten years.) This split in wage tiers was established in 1966, when Congress amended the FLSA to allow for a sub-minimum wage for tipped workers. While sub-minimum wage levels for students, youth and workers in training have long been allowed as temporary, the 1966 law made the “tipped wage” permanent through its “tip credit” provision. 
At that time, employers of tipped workers were allowed to pay a base wage of only half of the regular minimum wage, with the other half provided through customer tips, which is considered “credit” toward the employee’s total wage. This framework is legal as long as the sum of the tip wage and customer tips amounts to the regular minimum wage. In short, customer tips are not wholly a gift or token of gratitude from the served to the server but a wage subsidy provided to employers. In 1966, employers and customers shared equally in contributing to the wages of tipped workers. As the law intended, the tipped wage paid by the employer and the tipped credit from the customer were each half of the regular minimum wage. Over the next three decades, the official tip credit provision sometimes dropped as low as 40%, and never exceeded 50% of the regular minimum wage. As the situation stands today, at the federal level, the maximum tip credit allowance is $5.12, which is equal to the minimum wage ($7.25) minus the tipped wage ($2.13). The $2.13 tipped wage is now just 29% of the regular minimum wage, while the tip credit afforded to employers makes up 71%. What happened? Ironically, it was the Minimum Wage Increase Act of 1996 that initially caused this relative drop in the tipped wage. Signed into law by President Bill Clinton, the act increased the federal minimum wage from $4.25 to $4.75 an hour but froze the tipped minimum wage at $2.13 an hour under heavy pressure from the restaurant lobby. At the time, the $2.13 tipped wage had been in effect since 1991. This means that the sub-wage floor we have today has actually been in effect for 22 years. And when lawmakers took up an FLSA amendment in 2007 to raise the minimum wage in three steps, the tipped wage was again left off the table. Inflation has also eroded the purchasing power of both wage floors, but the fundamental cause behind the decline of the tipped wage has been the long decades of inaction. 
Today, its real value is at its lowest level since it was established in 1966. Over time, the ratio between the two wages fell from 50% in 1966 to just 29% in 2013, which means the tipped wage has fallen more than 20 percentage-points against the federal minimum wage. The subsidy afforded to employers ($5.12) is now more than twice the base wage they actually pay their workers. In short, most of the money these workers receive is from customer tips, not from their employer. In the absence of federal action, states have stepped in to institute a mix of wage floors of their own. Under various state policies, we have a system where wait staff in Texas are paid an hourly wage as low as $2.13, while a server at the same restaurant chain in Washington State earns a base wage of at least $9.19 an hour. This example reflects a range of both wage tiers across the country, which changes the tip credit amount that employers are allowed to claim. This is because the joint wage system depends on both the regular and the tipped minimum wages. The first determining factor is whether a state follows the federal regular minimum wage of $7.25 or whether it has adopted a higher state minimum. The second factor is which tip credit provision the state falls into: full, partial, or no. A full-tip credit state takes advantage of the maximum allowable tip credit, enabling payment of the lowest sub-minimum wage ($2.13 per hour, the federal tipped minimum wage). A partial-tip credit state has a sub-minimum wage that is above $2.13 but below the binding minimum wage for that state. Finally, states that require employers to pay tipped workers the binding regular minimum wage are referred to as no-tip credit states; at a minimum, tipped workers are paid the same as non-tipped workers. The figure below shows the three basic tip credit categories. The red states allow the full-tip credit and a sub-wage of $2.13. 
The blue states have tipped wages above $2.13 but below the binding regular minimum—the tip credit amount varies in these states. Those colored in grey do not allow the sub-minimum wage; in those cases, that has been the policy for a long time. &#160; &#160; These six general scenarios &#8212; determined by the three tip credit provisions and...]]></description>
				<content:encoded><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/08/waitress1-150x150.jpg" class="attachment-thumbnail wp-post-image" alt="Denny&#039;s Offers Free Breakfast In Effort To Aggressively Promote Sales" style="margin-bottom: 15px;" /></div><p><em>Economists are ignoring a class of workers whose wages have been frozen for decades.</em><span id="more-6707"></span></p>
<p>&nbsp;</p>
<p><em>Note: This article was originally published on August 5, 2013.</em></p>
<p>Since its inception, the minimum wage has provoked fiery debate. Indeed, when the Fair Labor Standards Act (FLSA) set the first federal minimum wage at $0.25 in 1938, the National Association of Manufacturers deemed it “a step in the direction of communism, Bolshevism, fascism, and Nazism.” Today, it remains a politically divisive issue. While many Democrats, including President Barack Obama, are calling for an increase at the federal level, numerous Republicans, including centrists in the party, would abolish it altogether. Amid political stalemate, low-wage workers have been galvanized into action, as seen by recent strikes across the country, from Macy’s to McDonald’s.</p>
<p>The minimum wage happens to be one of the most studied topics by economists and policy analysts. Yet a puzzle remains: there is scant interest among economists – including those who study labor economics – and the broader policy community on the second major tier of the minimum wage system, the “sub-minimum” wage received by tipped workers. (A search among the top ten economic journals, for example, produces no articles on this subject in the last ten years.)</p>
<p>This split in wage tiers was established in 1966, when Congress amended the FLSA to allow for a sub-minimum wage for tipped workers. While sub-minimum wage levels for students, youth and workers in training have long been allowed as temporary, the 1966 law made the “tipped wage” permanent through its “tip credit” provision. At that time, employers of tipped workers were allowed to pay a base wage of only half of the regular minimum wage, with the other half provided through customer tips, which is considered “credit” toward the employee’s total wage. This framework is legal as long as the sum of the tip wage and customer tips amounts to the regular minimum wage.</p>
<p>In short, customer tips are not wholly a gift or token of gratitude from the served to the server but a wage subsidy provided to employers. In 1966, employers and customers shared equally in contributing to the wages of tipped workers. As the law intended, the tipped wage paid by the employer and the tipped credit from the customer were each half of the regular minimum wage. Over the next three decades, the official tip credit provision sometimes dropped as low as 40%, and never exceeded 50% of the regular minimum wage. As the situation stands today, at the federal level, the maximum tip credit allowance is $5.12, which is equal to the minimum wage ($7.25) minus the tipped wage ($2.13). The $2.13 tipped wage is now just 29% of the regular minimum wage, while the tip credit afforded to employers makes up 71%. What happened?</p>
<p><a href="http://symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/08/Real-v-Tipped-EDIT.jpg" rel="prettyphoto[6707]"><img class="size-full wp-image-6807" alt="wage chart, allegretto" src="http://symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/08/Real-v-Tipped-EDIT.jpg" width="792" height="612" /></a></p>
<p>Ironically, it was the Minimum Wage Increase Act of 1996 that initially caused this relative drop in the tipped wage. Signed into law by President Bill Clinton, the act increased the federal minimum wage from $4.25 to $4.75 an hour but froze the tipped minimum wage at $2.13 an hour under heavy pressure from the restaurant lobby. At the time, the $2.13 tipped wage had been in effect since 1991. This means that the sub-wage floor we have today has actually been in effect for 22 years. And when lawmakers took up an FLSA amendment in 2007 to raise the minimum wage in three steps, the tipped wage was again left off the table. Inflation has also eroded the purchasing power of both wage floors, but the fundamental cause behind the decline of the tipped wage has been the long decades of inaction. Today, its real value is at its lowest level since it was established in 1966.</p>
<p>Over time, the ratio between the two wages fell from 50% in 1966 to just 29% in 2013, meaning the tipped wage has fallen more than 20 percentage points against the federal minimum wage. The subsidy afforded to employers ($5.12) is now more than twice the base wage they actually pay their workers. In short, most of the money these workers receive comes from customer tips, not from their employer.</p>
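<p>The arithmetic behind these federal figures can be verified directly:</p>

```python
minimum_wage = 7.25  # federal regular minimum wage
tipped_wage = 2.13   # federal tipped minimum wage, frozen since 1991

tip_credit = minimum_wage - tipped_wage
assert round(tip_credit, 2) == 5.12  # maximum tip credit allowance

ratio = tipped_wage / minimum_wage
assert round(ratio * 100) == 29      # tipped wage is ~29% of the minimum

# The employer subsidy exceeds twice the base wage actually paid.
assert tip_credit > 2 * tipped_wage
```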
<p>In the absence of federal action, states have stepped in to institute a mix of wage floors of their own. Under various state policies, we have a system where wait staff in Texas are paid an hourly wage as low as $2.13, while a server at the same restaurant chain in Washington State earns a base wage of at least $9.19 an hour.</p>
<p>This example reflects a range of both wage tiers across the country, which changes the tip credit amount that employers are allowed to claim. This is because the joint wage system depends on both the regular and the tipped minimum wages. The first determining factor is whether a state follows the federal regular minimum wage of $7.25 or whether it has adopted a higher state minimum. The second factor is which tip credit provision the state falls into: full, partial, or no. A full-tip credit state takes advantage of the maximum allowable tip credit, enabling payment of the lowest sub-minimum wage ($2.13 per hour, the federal tipped minimum wage). A partial-tip credit state has a sub-minimum wage that is above $2.13 but below the binding minimum wage for that state. Finally, states that require employers to pay tipped workers the binding regular minimum wage are referred to as no-tip credit states; at a minimum, tipped workers are paid the same as non-tipped workers.</p>
<p>The figure below shows the three basic tip credit categories. The red states allow the full-tip credit and a sub-wage of $2.13. The blue states have tipped wages above $2.13 but below the binding regular minimum—the tip credit amount varies in these states. Those colored in grey do not allow the sub-minimum wage; in those cases, that has been the policy for a long time.</p>
<p>&nbsp;</p>
<p style="text-align: center;"><a href="http://symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/08/map.jpg" rel="prettyphoto[6707]"><img class="size-full wp-image-6710 aligncenter" alt="states and wages" src="http://symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/08/map.jpg" width="959" height="593" /></a></p>
<p>&nbsp;</p>
<p>These six general scenarios &#8212; determined by the three tip credit provisions and the binding state or federal regular minimum wage &#8212; change whenever states adopt a higher minimum wage or the federal minimum wage is raised. For instance, states may have a binding state minimum wage prior to a federal increase but then revert to the new federally binding minimum when the latter is increased.</p>
<p>A few examples highlight the vast differences in policies across the United States. Texas follows the federal policy for both minimum wage tiers, so the allowable tip credit provision afforded to employers is $5.12, while the base wage for tipped workers is $2.13. Minnesota follows the federal policy for its regular minimum wage but it does not allow for a tip credit. New Mexico has a regular minimum wage above the federal level ($7.50) but follows the federal policy for the tipped wage, so its tip credit is $5.37. Interestingly, Massachusetts has a regular minimum wage of $8.00 and a tipped wage of $2.63, also allowing a tip credit of $5.37. Conversely, Hawaii’s regular minimum wage is $7.25, but its tipped wage is $7.00, which leaves a tip credit of just $0.25. Not surprisingly, the median wage of wait staff varies by the allowable tip credit provision: $8.75, $9.14 and $10.27 for full-, partial-, and no-tip credit states, respectively.</p>
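<p>The state examples above can be organized by the three tip-credit categories with a small script. The classification rule below follows the definitions given earlier, and the wage numbers are the ones cited in the text:</p>

```python
# (regular minimum wage, tipped minimum wage) in dollars, per the examples above.
states = {
    "Texas":         (7.25, 2.13),
    "Minnesota":     (7.25, 7.25),
    "New Mexico":    (7.50, 2.13),
    "Massachusetts": (8.00, 2.63),
    "Hawaii":        (7.25, 7.00),
}

def classify(regular, tipped):
    """Return the tip-credit category and the credit afforded to employers."""
    credit = round(regular - tipped, 2)
    if tipped == 2.13:       # pays the federal tipped floor: full credit
        kind = "full"
    elif tipped < regular:   # above $2.13 but below the binding minimum
        kind = "partial"
    else:                    # tipped workers get the binding regular minimum
        kind, credit = "no", 0.0
    return kind, credit

assert classify(*states["Texas"]) == ("full", 5.12)
assert classify(*states["New Mexico"]) == ("full", 5.37)
assert classify(*states["Massachusetts"]) == ("partial", 5.37)
assert classify(*states["Hawaii"]) == ("partial", 0.25)
assert classify(*states["Minnesota"]) == ("no", 0.0)
```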
<p>These variations explain why two food servers in different states can earn vastly different wages, even if they work at the same restaurant chain. The variance in wage rates across states also indicates that restaurants can in fact pay higher wages, expand, and be profitable when tipped workers earn a base wage above the federal rate. Otherwise, restaurants would not bother to expand in no-tip credit states. Even in San Francisco, where employers are not allowed a tip credit and the citywide living wage is $10.55, the restaurant industry is booming.</p>
<p>The sub-minimum wage is not just an existential concern for millions of workers, but a vital policy issue that has been all but ignored by experts for decades. A common misconception among policy analysts, regardless of their ideology, is that wait staff make “a lot” of money. Earlier this summer, I gave a briefing on the tipped wage in Washington to the staff of the Senate Health, Education, Labor and Pensions Committee. I told them up front to put aside their personal experience of fine dining in D.C. restaurants, suspecting that this crowd often runs up fairly large bills that result in generous tips. I then asked them to think about the servers working at a Denny’s in Ohio, a diner in rural Pennsylvania, or a truck stop in Wyoming. In these cases, tipped workers in general and wait staff in particular are overwhelmingly women (72%). And contrary to popular belief, they are not all teenagers on their first job: 45% of tipped workers, and one in three wait staff, are at least 30 years old.</p>
<p>The low wages of tipped workers result in high poverty rates. Among all workers, 6.3% are classified as living below the federal poverty level; that rate varies from 6.0% to 6.7% depending on the three tip credit scenarios. But for wait staff, poverty is far higher, at 16.7% overall. And again, the rates vary greatly by the state tip credit status and decrease as wage rates increase: 19.4%, 16.2% and 13.6%, for full-, partial-, and no-tip credit states, respectively.</p>
<p>Another point to underscore is that these jobs are, for the most part, not quality jobs. For instance, they often lack basic benefits like health care or sick leave. Moreover, a worker is left with a base wage that is often non-existent after taxes. And tips may vary greatly from day to day and depend on the shift for which a worker is scheduled. Working a Monday afternoon may be much less lucrative than Friday evening. Most striking – and ironic – is that those who work in the restaurant industry experience a high degree of food insecurity. While about 8.4% of workers need food stamps to make ends meet, the figure is 15.7% for workers in the restaurant industry, and 14.5% for tipped workers.</p>
<p>These are economic outcomes that policymakers should keep in mind because the restaurant industry is one of the fastest growing in the country, and it employs large numbers of minimum and sub-minimum wage workers. While private sector employment grew by about 22% since 1990, the restaurant industry grew by over 70%. A hike in both the regular and the sub-minimum wage at the federal level would help a vast class of workers. Otherwise, taxpayers will continue to subsidize the bottom line of employers who skimp on wages and force workers to rely on public support such as Medicaid and food stamps to make ends meet.</p>
<p>After my presentation to the Senate staffers, many of them approached me absolutely shocked to learn of this wage system. A policy that few know or care about is a policy that will remain unchanged &#8212; in this case, for 22 years and counting. This issue has been off the radar in Washington, D.C., for several decades for many reasons – but an important one is that the tipped wage disproportionally affects low-income women, who do not have a lobby with the clout of the National Restaurant Association.</p>
<p>This is not to say that lawmakers have given up. A Democratic bill is circulating that calls for a minimum wage of $10.10 and re-links the sub-minimum wage to the regular minimum wage in several steps, ultimately bringing the former to 70% of the latter. The bill also calls for annual adjustments to account for price increases, which would prevent the buying power of both minimum-wage tiers from eroding over time. A stronger wage policy would be a first step in addressing two of the biggest problems in our economy – inequality and poverty among the working poor.</p>
<p>Of course, with Congress gridlocked on nearly everything, it is hard to envision any federal legislation passing soon. In the meantime, economists can provide empirical research to help inform and educate the policy community as well as the broader public. Such research can complement the work of groups such as the Restaurant Opportunities Centers, which are operating nationwide, and the National Employment Law Project in New York City; both have been bringing attention to this issue in recent years. Labor economists, in particular, can play a vital role in answering questions like: How high can wages rise without adversely affecting employment for these workers? What has been the impact of the variance in state wage policies on health, upward mobility, and other related factors? What are the outcomes of state policies that have countered the inflation-adjusted decline in this wage tier?</p>
<p>Just to give one example, my colleagues and I have several research projects in the works that look further at both the regular and the sub-minimum wages. One is collecting data to analyze how San Jose’s citywide wage increase in March 2013, from California’s $8.00 minimum to $10.00, affects restaurant menu prices. To date, most research finds that such increases produce only small price rises. We are also investigating the effects of varying tipped wages on the poverty rates among these workers. But far more research needs to be done. The more that we economists can find out, the better we can equip policymakers with analysis to support their efforts. In turn, this can promote a national debate on an issue that has been under the radar for too long – despite the fact that millions of Americans have long been affected, and will continue to be in the future.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.symposium-magazine.com/still-waiting-for-change/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>Understanding the Irrational Commuter</title>
		<link>http://www.symposium-magazine.com/understanding-the-irrational-commuter/</link>
		<comments>http://www.symposium-magazine.com/understanding-the-irrational-commuter/#comments</comments>
		<pubDate>Tue, 31 Dec 2013 01:06:29 +0000</pubDate>
		<dc:creator><![CDATA[admin]]></dc:creator>
				<category><![CDATA[RSS]]></category>
		<category><![CDATA[September 2013 Edition]]></category>
		<category><![CDATA[Symposium Magazine]]></category>
		<category><![CDATA[David Levinson]]></category>
		<category><![CDATA[transportation]]></category>

		<guid isPermaLink="false">http://www.symposium-magazine.com/?p=13587</guid>
<description><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/09/traffic1-150x150.jpg" class="attachment-thumbnail wp-post-image" alt="tokyo traffic" style="margin-bottom: 15px;" /></div>The increasing sophistication of data collection and analysis gives us deeper insights into human behavior &#8212; and how we make decisions about everyday travel. &#160; Note: This article was originally published on September 9, 2013. Transportation debates, from the local to the national level, are invariably waged between competing interests. There are players representing economic development, road construction, the environmental lobbies, and diverse groups of transportation users &#8212; just to name a few. But there is also an important role for independent experts to play &#8212; not just as honest brokers, but as analysts who can assess what they learn from the increasingly sophisticated collection of data about travel and human behavior. And this is where academics can step in. Research that I have conducted with colleagues at the University of Minnesota has allowed us to break down travel behavior and draw some surprising lessons that can guide transportation policy. Why are these lessons so valuable now? Technology has brought us to the point where we can provide incentives and disincentives to efficiently manage road use. To take just one example, look at the pervasive issue of congestion, which can be addressed through “congestion pricing.” To be sure, the cost of collecting a new road fee is non-trivial, especially compared with the alternative, a higher gas tax, which simply requires an annual check of refinery sales. But the benefits are a significant improvement in the management of road use, so that drivers who do not need to travel when roads are congested will have an incentive to avoid those peak times. 
If applied correctly, the resulting changes in route choices reveal where roads are overbuilt, and where demand, even after pricing, is sufficient to justify new capacity. In short, the most cost-effective thing we can do in the transportation field is to get the prices right. Once we do that, everything else will follow. Above all, this requires field experiments that test and evaluate different strategies, and the deployment of those that prove successful. I will elaborate on some of my experiments below, but will start by asking some basic questions about how people travel. Do people take the shortest path? This is the very first question we need to ask, because we need to know whether travelers really do think rationally as they chart their commute. And our experiments showed that they do not: Only 15% of commuters take the shortest path to work, while a greater number take a path that is marginally longer. And many take routes that are up to 10 minutes longer than the shortest path. For non-commute trips, which tend to be a little bit shorter, more people take the shortest route. But even though you would expect that people making the same trip every day would know what their travel network looks like, they either choose not to take the shortest path or do not know what that route is. It is important to make this point up front, because a misconception among transportation modelers is that people inherently take the shortest travel-time route when they are navigating on roads, or that the reality is only slightly different from this simplifying assumption. This notion, in fact, is embedded in the travel demand models that are used in every transportation-planning and forecasting exercise in large metropolitan areas. The data, however, show this is not true. 
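The “shortest path” benchmark itself is a standard computation. As an illustrative sketch (the network and travel times below are invented, not the study’s data), Dijkstra’s algorithm finds the shortest travel-time route that route-choice models typically assume commuters take:

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: {node: [(neighbor, travel_minutes), ...]}. Returns (time, path)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, minutes in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (time + minutes, neighbor, path + [neighbor]))
    return float("inf"), []

# A made-up commute network: home to work via a bridge or via downtown arterials.
network = {
    "home":     [("bridge", 8), ("arterial", 5)],
    "bridge":   [("work", 7)],
    "arterial": [("downtown", 6)],
    "downtown": [("work", 6)],
}
time, path = dijkstra(network, "home", "work")
assert time == 15 and path == ["home", "bridge", "work"]
```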
Our findings thus challenge the computerized travel demand models that are used daily to predict the effects of network changes (e.g., adding a lane), land uses (e.g., developing a surface parking lot), and policies (e.g., raising the price of gas) on levels of traffic and subsequent delays. One of the key components of these models is called “route assignment” &#8212; where the model tells traffic which route to take &#8212; or “route choice” &#8212; if we imagine the model predicts which route users will choose to maximize their utility. How did we set up this experiment? We looked at how driving patterns had changed after the I-35W bridge collapsed in 2007. A few weeks prior to the reopening of the bridge in 2008, we installed GPS units in 200 private vehicles owned by study participants, and told them to drive as they normally would. We did not give them any other instructions except that they had to come to a designated location to get the GPS unit installed, and then return it eight weeks later. These people worked at or near the University of Minnesota or in downtown Minneapolis, and therefore were likely to be affected by the change in the network associated with the bridge. We needed to know what the real shortest path was in a given network &#8212; which required travel time data on all road segments &#8212; and which paths people actually used. While people might tell you in a survey they are going from A to B, we did not know what particular routes they were using &#8212; and many people could not accurately answer anyway. But with the advent of GPS systems and more pervasive traffic monitoring, we could get better data. With the help of my research assistant Shanjiang Zhu (now a professor at George Mason University), we then organized the data. We had to make sure that GPS points fell on the network and that people were driving on the right side of the road. 
We matched this data to routes, so for each individual trip we could track where it started, where it ended, and the specific road segments that were taken. We used this very large data set to estimate the travel time on all of the relevant links in the network. In addition to knowing which route someone actually took, we measured the expected travel time on many of the alternative routes that a traveler might consider, since other travelers used those roads. The advantage of the new GPS data is that it gives the speeds on the arterials at any given time. So we compared the...]]></description>
				<content:encoded><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/09/traffic1-150x150.jpg" class="attachment-thumbnail wp-post-image" alt="tokyo traffic" style="margin-bottom: 15px;" /></div><p><em>The increasing sophistication of data collection and analysis gives us deeper insights into human behavior &#8212; and how we make decisions about everyday travel.<span id="more-13587"></span></em></p>
<p>&nbsp;</p>
<p><em>Note: This article was originally published on September 9, 2013.</em></p>
<p>Transportation debates, from the local to the national level, are invariably waged between competing interests. There are players representing economic development, road construction, the environmental lobbies, and diverse groups of transportation users &#8212; just to name a few. But there is also an important role for independent experts to play &#8212; not just as honest brokers, but as analysts who can assess what they learn from the increasingly sophisticated collection of data about travel and human behavior. And this is where academics can step in. Research that I have conducted with colleagues at the University of Minnesota has allowed us to break down travel behavior and draw some surprising lessons that can guide transportation policy.</p>
<p>Why are these lessons so valuable now? Technology has brought us to the point where we can provide incentives and disincentives to efficiently manage road use. To take just one example, look at the pervasive issue of congestion, which can be addressed through “congestion pricing.” To be sure, the cost of collecting a new road fee is non-trivial, especially compared with the alternative, a higher gas tax, which simply requires an annual check of refinery sales. But the benefits are a significant improvement in the management of road use, so that drivers who do not need to travel when roads are congested will have an incentive to avoid those peak times.</p>
<p>If applied correctly, the resulting changes in route choices reveal where roads are overbuilt, and where demand, even after pricing, is sufficient to justify new capacity. In short, the most cost-effective thing we can do in the transportation field is to get the prices right. Once we do that, everything else will follow. Above all, this requires field experiments that test and evaluate different strategies, and the deployment of those that are successful. I will elaborate on some of my experiments below, but will start by asking some basic questions about how people travel.</p>
<p><em> Do people take the shortest path?</em></p>
<p>This is the very first question we need to ask, because we need to know whether travelers really do think rationally as they chart their commute. And our experiments showed that they do not: Only 15% of commuters take the shortest path to work, while a greater number take a path that is marginally longer. And many take routes that are up to 10 minutes longer than the shortest path. For non-commute trips, which tend to be a little bit shorter, more people take the shortest route. But even though you would expect that people making the same trip every day would know what their travel network looks like, they either choose not to take the shortest path, or they do not know what that route is.</p>
<p>It is important to make this point up front, because a misconception among transportation modelers is that people inherently take the shortest travel-time route when they are navigating on roads, or that the reality is only slightly different from this simplifying assumption. This notion, in fact, is embedded in the travel demand models that are used in every transportation-planning and forecasting exercise in large metropolitan areas. The data, however, show this is not true. Our <a title="link to Shortest Path findings" href="http://nexus.umn.edu/Papers/ShortestPath.pdf" target="_blank">findings</a> thus challenge the computerized travel demand models that are used daily to predict the effects of network changes (e.g., adding a lane), land uses (e.g., developing a surface parking lot), and policies (e.g., raising the price of gas) on levels of traffic and subsequent delays. One of the key components of these models is called “route assignment” &#8212; where the model tells traffic which route to take &#8212; or “route choice” &#8212; if we imagine the model predicts which route users will choose to maximize their utility.</p>
<p>How did we set up this experiment? We looked at how driving patterns had changed after the I-35W bridge collapsed in 2007. A few weeks prior to the reopening of the bridge in 2008, we installed GPS units in 200 private vehicles owned by study participants, and told them to drive as they normally would. We did not give them any other instructions except that they had to come to a designated location to get the GPS unit installed, and then return it eight weeks later. These people worked at or near the University of Minnesota or in downtown Minneapolis, and therefore were likely to be affected by the change in the network associated with the bridge.</p>
<p>We needed to know what the real shortest path was in a given network &#8212; which required travel time data on all road segments &#8212; and which paths people actually used. While people might tell you in a survey they are going from A to B, we did not know what particular routes they were using &#8212; and many people could not accurately answer anyway. But with the advent of GPS systems and more pervasive traffic monitoring, we could get better data.</p>
<p>With the help of my research assistant Shanjiang Zhu (now a <a title="link to Zhu page" href="http://civil.gmu.edu/people/shanjiang-zhu/" target="_blank">professor</a> at George Mason University), we then organized the data. We had to make sure that GPS points fell on the network and that people were driving on the right side of the road. We matched this data to routes, so for each individual trip we could track where it started, where it ended, and the specific road segments that were taken. We used this very large data set to estimate the travel time on all of the relevant links in the network. In addition to knowing which route someone actually took, we measured the expected travel time on many of the alternative routes that a traveler might consider, since other travelers used those roads.</p>
<p>The advantage of the new GPS data is that it gives the speeds on the arterials at any given time. So we compared the routes that travelers actually took with the route we estimated to have the shortest travel time on the network, based on the average travel speed for each of those links. Ultimately, this led to our conclusion that drivers do not necessarily take the shortest route possible.</p>
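<p>The structure of that comparison can be shown with a minimal Python sketch: compute the shortest path over link travel times with Dijkstra&#8217;s algorithm, then measure the gap to an observed route. The network, link times, and observed route here are invented for illustration, not drawn from the study&#8217;s data.</p>

```python
import heapq

def shortest_time(graph, origin, dest):
    """Dijkstra's algorithm: minimum travel time (minutes) from origin to dest."""
    dist = {origin: 0.0}
    heap = [(0.0, origin)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == dest:
            return t
        if t > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph.get(node, []):
            if t + w < dist.get(nbr, float("inf")):
                dist[nbr] = t + w
                heapq.heappush(heap, (t + w, nbr))
    return float("inf")

# Hypothetical network: in the study, each link's time came from GPS-based estimates.
graph = {
    "home":         [("freeway_ramp", 5.0), ("arterial", 7.0)],
    "freeway_ramp": [("work", 11.0)],
    "arterial":     [("work", 6.0)],
}

best = shortest_time(graph, "home", "work")  # 13.0 minutes via the arterial
observed = 5.0 + 11.0                        # this driver actually used the freeway
print(best, observed - best)                 # a 3-minute gap from the shortest path
```

<p>In the study itself, both the observed route and the link times came from the matched GPS traces; the comparison above is the same in structure, just on toy data.</p>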
<p><a href="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/09/Levinson-chart-01.jpg" rel="prettyphoto[13587]"><img class="aligncenter size-full wp-image-13588" alt="levinson chart" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/09/Levinson-chart-01.jpg" width="600" height="352" /></a></p>
<p>&nbsp;</p>
<p>This conclusion is a real-life test of the theory (the “First Principle”) of British transportation analyst <a title="link to Wardrop page" href="http://www.icevirtuallibrary.com/content/article/10.1680/ipeds.1952.11259" target="_blank">John Glen Wardrop</a>, who posited that “the journey times in all routes actually used are equal and less than those which would be experienced by a single vehicle on any unused route.” This means that travelers take the path of least resistance, and they will make the best decision for themselves subject to what everybody else is doing (a variation of the Nash Equilibrium). Wardrop’s First Principle has been the foundation of route choice models ever since he developed it in the 1940s, but it had not received much empirical testing. As a researcher, I wanted to look under the hood, so to speak, with students and colleagues at the University of Minnesota so we could ask the question:</p>
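<p>Wardrop&#8217;s equilibrium condition can be illustrated with a toy two-route example. The sketch below shifts flow toward whichever route is currently faster until the travel times equalize; the linear delay functions and the demand figure are invented for the example, not taken from any real network.</p>

```python
def wardrop_split(demand=10.0, steps=10000):
    """Find the Wardrop equilibrium split between two routes whose
    travel times grow with flow (toy linear delay functions)."""
    t1 = lambda x: 10 + 1.0 * x  # route 1: fast when empty, congests quickly
    t2 = lambda x: 15 + 0.5 * x  # route 2: slower but more robust
    x = demand / 2               # initial guess: half the flow on route 1
    for _ in range(steps):
        # Nudge flow toward the currently faster route.
        gap = t1(x) - t2(demand - x)
        x -= 0.001 * gap
    return x

x = wardrop_split()
print(round(x, 2))  # ~6.67 vehicles on route 1; both routes then take ~16.67 minutes
```

<p>At equilibrium no driver can improve their own time by switching, which is exactly the condition Wardrop describes; our empirical question was whether real drivers actually reach anything like this state.</p>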
<p><em> Why aren’t people taking the shortest path?</em></p>
<p>There are several explanations suggested by the data, most of which have to do with insufficient information and misperception, along with some other conjectures:</p>
<p><em>Selflessness:</em> Wardrop’s First Principle assumes that all people are selfish, but perhaps at times they are in fact selfless. However, we believe selflessness is not a good explanation, not because of any moral failings but because travelers do not have enough information and are left guessing whether what they are doing is best for everyone else, even at some self-sacrifice. We assume they aim to minimize their own travel time rather than that of society, but it is safe to say people cannot know what decision will minimize society’s travel time, because of computational and informational limitations discussed below.</p>
<p><em>Rationality:</em> Wardrop’s principle assumes that people are rational, but maybe people are not rational, or at least not rational all the time. Indeed, people often do react emotionally and intuitively, employing what Nobel Prize winner Daniel Kahneman calls System 1 in <a title="link to Thinking Fast and Slow" href="http://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555" target="_blank">Thinking, Fast and Slow</a>, based on heuristic rules. They do not have time for rational assessment. On the other hand, for a repeated decision like daily commuting back and forth to work, it costs a significant amount of travel time, a scarce resource, to systematically behave irrationally. We thus assume people are behaving rationally (engaging Kahneman&#8217;s System 2) when they can. The idea of bounded rationality, developed by Herbert Simon, also a Nobel Prize winner, has been applied to route choice problems by many researchers, including my Master&#8217;s advisor, University of Maryland Prof. Gang-len Chang, in his <a title="link to Chang dissertation" href="http://transci.journal.informs.org/content/21/2/89.short" target="_blank">dissertation work</a> with Prof. Hani Mahmassani at the University of Texas. We can build models with bounded rationality, assuming or estimating the bounds to this rationality due to information, cognitive limits, and time available to make a decision. Some of these factors are discussed below.</p>
<p><em>Perception:</em> It might be that people think they have the shortest travel time on their route, but they misperceive the travel time on the network. There are perception or cognition limits. On a 24-minute trip, are you going to know what the travel time is to the nearest 30 seconds or minute? I would because I am a transportation geek, but most people are not going to measure their time that precisely. When you look at how people report travel times in surveys, they typically round to the nearest 5 minutes and sometimes to the nearest 15 minutes. If people are only dealing with time perception in 5- or 15-minute chunks, saving a minute or two is not going to show up on their radar as something that is important to them.</p>
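<p>A tiny sketch, with illustrative numbers rather than survey data, shows how this rounding swallows small savings:</p>

```python
def report(minutes, chunk=5):
    # Respondents tend to round reported travel times to the
    # nearest 5 minutes, and sometimes to the nearest 15.
    return chunk * round(minutes / chunk)

print(report(24))            # 25
print(report(23))            # 25 -- a one-minute saving vanishes in reporting
print(report(24, chunk=15))  # 30
```

<p>Two routes that genuinely differ by a minute get reported as identical, so the shorter one offers the traveler no perceived advantage.</p>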
<p><em>Computation:</em> Sadly (for modelers), people are not computers. They cannot accurately add travel times across different road segments. They cannot systematically compare the travel times over alternative routes even if they had a complete data set.</p>
<p><em>Information:</em> Not only are people not computers, they are not GPS systems. People do not have complete maps of the network. They often do have good mental maps of the local street network around where they live, and a little bit around where they work and where they travel frequently, but if they live far from where they work, they tend not to know the detailed network in-between. There are also limits to people’s ability to navigate. Their cognitive or mental maps are far from complete. They only have the experience of the routes they have actually used. They can test other routes to gain experience, but that knowledge does not come innately.</p>
<p><em>Valuation:</em> Maybe people minimize the weighted sum of travel time, where time spent in different conditions is valued differently. We know, for instance, from the transit literature, that time spent waiting for a bus is much more onerous than time while on-board a vehicle in motion, making progress towards its destination, especially if the arrival time of the bus is uncertain.</p>
<p><em>Objective:</em> We assume that people care only about minimizing travel time. It might be that people are rational, but they care about things besides travel time.</p>
<p><em>Search cost:</em> How long does it take to figure out what the travel time is on alternative routes? Are you willing to spend ten minutes exploring the network in order to save 30 seconds of travel time every day for the rest of your career? Rationally it might be worth doing so, since the payback is in only 20 days. People often will discount the possibility of saving time, worrying that this short-cut will actually be longer, or maybe they are afraid of getting lost. Fear of the unfamiliar is a major deterrent to exploration.</p>
<p><em>Route quality:</em> Many of the factors we examined describe the quality or condition of a route and its environment. Is it potholed or newly paved? Does it run through a pleasant or unpleasant neighborhood? We have evidence that some people prefer a longer route if it is an attractive boulevard or parkway rather than a freeway trench.</p>
<p><em>Reliability:</em> The likelihood of arriving on time, and not just the expected travel time, affects willingness to select a route. There is the old parable of the man who drowned in an average of one inch of water. Similarly, it might not matter to me that the average travel time is 20 minutes if one day a week (but never knowing in advance which day) I can expect a travel time of 60 minutes. I do not want to leave 40 minutes earlier to avoid the occasional bad outcome. I might be willing to take a slower but more reliable route. I might even have a mixed strategy, or <a title="link to Science piece by Levinson" href="http://www.sciencedirect.com/science/article/pii/S0968090X13000545" target="_blank">portfolio</a>, combining different routes to achieve a personally satisfactory trade-off between expected time and reliability. In practice, this means some people might take surface streets, which are generally slower, but more reliable, instead of freeways, which are faster, but subject to more catastrophic breakdowns of traffic flow.</p>
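<p>The mean-versus-reliability trade-off can be made concrete with invented numbers. In this sketch, two routes have nearly the same average time but very different bad days:</p>

```python
import statistics

# Hypothetical travel times (minutes) over ten workdays.
freeway = [20, 20, 21, 20, 60, 20, 21, 20, 20, 58]  # fast, but occasional breakdowns
surface = [28, 29, 28, 30, 29, 28, 30, 29, 28, 29]  # slower, but steady

for name, times in [("freeway", freeway), ("surface", surface)]:
    print(name, round(statistics.mean(times), 1), max(times))
# The means are close (28.0 vs 28.8), yet a traveler who must arrive on
# time may rationally pick the surface route to avoid the 60-minute day.
```

<p>A model that looks only at average travel time would call these routes interchangeable; a traveler who cannot afford to be an hour late would not.</p>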
<p><em>Pleasurability of travel:</em> Maybe people are rational, but they like traveling a little bit more than being at work or home, and so choose longer routes to prolong the experience. And many people want to commute, up to a point; Lothlorien Redmond and Patricia Mokhtarian <a title="link to Redmond piece" href="http://escholarship.org/uc/item/4mc291p2" target="_blank">find</a> there is a positive value to some amount of commuting; the preferred commute length is typically not zero. However, it appears that many commutes are longer than the desired amount. Still, for some people, the longer route, which provides some psychological buffer between the stresses of work and the stresses of home, is desired.</p>
<p><em>Not all travel time is created equal: evidence from three experiments</em></p>
<p>The experiment with the I-35W bridge described above was just one case where we analyzed an exhaustive set of data to assess travel choice. I will describe three more below to elaborate on more findings colleagues and I made. For example, it turns out people choose routes not just based on factors like total travel time, or whether they have to wait on a ramp versus driving on a freeway, or whether their preferred route is pleasant or familiar. As I described above, they also consider travel-time reliability. So they might choose one particular route not because it is the shortest time on average, but because it makes them late no more than, say, 5 percent of the time.</p>
<p>One of the first important decisions we made when setting up the experiments described below was to blend two approaches to understanding preference. A “revealed preference” approach examines the decisions people actually make and infers the causes using statistical tests, whereas a “stated preference” approach presents people with a set of hypothetical scenarios to choose from. The advantage of revealed preference is that it is based on real decisions. Unfortunately, it is unable to provide insight into alternatives that do not exist yet, or that travelers have not themselves experienced. We decided to run experiments that use hybrids of stated and revealed preference, where the subjects gained some experience with alternatives beyond a simple graphic and word-based description.</p>
<p>The <a title="link to first Levinson experiment" href="http://nexus.umn.edu/Papers/TrailsLanesOrTraffic.pdf" target="_blank">first experiment</a> looked at how people select bicycle routes. Minneapolis, like many cities, has seen a cycling boom, and as many as 4% of commuters bike to work (higher than the national average of 0.5%). With the help of my graduate assistants <a title="link to Tilahun site" href="http://ntilahun.com" target="_blank">Nebiyou Tilahun</a> (now faculty at the University of Illinois at Chicago) and <a title="link to Krizek page" href="http://carbon.ucdenver.edu/~kkrizek/biography.html" target="_blank">Professor Kevin Krizek</a> (now at the University of Colorado), I examined the weights people assign to factors when choosing bicycle routes, via a multi-media approach. We used a computer-based study with conditions shown as first-person videos of riding a bicycle in different conditions. To present the alternatives, Krizek rode a bicycle, hands-off, taking a video camera and videotaping each condition.</p>
<p>This was our question: Imagine you are commuting by bicycle and have to select one of two routes. Route 1 is nicer and off-road but takes 40 minutes, while Route 2 is in traffic and takes 20 minutes. Once you chose, you would get another presentation where the travel times would change. One travel time would become higher or lower depending on what you answered. This allowed us to determine an “indifference point,” where you do not care whether you take Route 1 or Route 2.</p>
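<p>That adaptive questioning can be sketched as a bisection search. Everything here is hypothetical: the respondent is simulated by a simple threshold rule, whereas the real experiment adjusted times across successive video presentations.</p>

```python
def find_indifference(prefers_offroad, lo=20.0, hi=60.0, tol=0.5):
    """Bisect the off-road route's travel time (the in-traffic route is
    fixed at 20 minutes) until the respondent's choice flips."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prefers_offroad(mid):
            lo = mid  # still prefers off-road: make it longer
        else:
            hi = mid  # switched to in-traffic: make off-road shorter
    return (lo + hi) / 2

# Simulated respondent who tolerates up to 12 extra minutes for the nicer route.
respondent = lambda offroad_minutes: offroad_minutes < 32.0
print(round(find_indifference(respondent), 1))  # indifference near 32 minutes
```

<p>The returned value is the extra travel time this (simulated) cyclist will pay for the off-road route, which is exactly the quantity the experiment set out to measure per person.</p>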
<p>Our conclusion: While everyone prefers the high-quality route, all else equal, the indifference point varies by person, especially by gender. Women report they are willing to pay more in travel time for a higher quality off-road route than men are, even though the general patterns are the same. It also varies by season and weather, with colder weather reducing the duration of outdoor travel people will accept.</p>
<p>The <a title="link to Levinson's second experiment" href="http://nexus.umn.edu/Papers/WeightingWaiting.pdf" target="_blank">second experiment</a> looked at ramp meters, which are traffic lights at freeway entrance ramps that ration the number of cars entering onto the freeways to smooth out traffic. This ensures that &#8220;platoons&#8221; of cars are not all entering at once. Meters have also been used to ensure that the total number of cars on the freeway is below the number that would cause bottlenecks. In 2000, controversy arose over the ramp metering system in the Minneapolis-Saint Paul region amid growing concern about long waits at metered entrance ramps, which reportedly lasted as long as 20 minutes. But at the time there was no way to systematically determine the actual duration of waits.</p>
<p>The state legislature instructed the Minnesota Department of Transportation (MnDOT) to turn the ramp meters off for at least four weeks as an experiment. In fact, MnDOT continued the experiment for over eight weeks and ultimately concluded that ramp meters were valuable. But it changed the metering strategy to reduce maximum queues at the ramps to no more than four minutes.</p>
<p>In light of the <a title="link to ramp metering experiment" href="http://cdh.design.umn.edu" target="_blank">ramp metering shutdown</a>, I worked with my colleagues Kathleen Harder, John Bloomfield, and Kasia Winiarczyk to conduct two experiments with the same framework. The first approach we used was a classical stated preference experiment, administered on a computer, which we called Computer Administered Stated Preference (CASP). Since people may respond differently to hypothetical situations under a stated-preference scenario, especially scenarios they have never experienced, we also developed a second experimental method that we labeled a Virtual Experience Stated Preference (VESP).</p>
<p>In the VESP, we put subjects in a sophisticated driving simulator. This car is enveloped in a set of screens, and the driver is surrounded by animations that make the experience feel like driving. The simulator has speakers and vibrates, and when the driver turns the car, the point of view changes. We tested for many different conditions. In one set of conditions, subjects had to rank four alternatives, which were exactly the same for the virtual and computer experiences. In the CASP, by contrast, each alternative was presented as a bar graph showing total travel time and the travel time of each component, with clear text labels:</p>
<ul>
<li>0 minutes of ramp delay, 20 minutes of stop-and-go traffic at 30 MPH</li>
<li>2 minutes of ramp delay, 15 minutes of congested traffic at 40 MPH</li>
<li>4 minutes of ramp delay, 12 minutes of moderately congested traffic at 50 MPH</li>
<li>6 minutes of ramp delay, 10 minutes of free-flow traffic at 60 MPH</li>
</ul>
<p>If you were strictly rational and cared only about minimizing total travel time, you would prefer 10 miles in 16 minutes, the fourth scenario. But travelers had a preset notion of waiting at a ramp and of driving in traffic, and only 1 of 44 subjects in the CASP presentation preferred to minimize total travel time under that particular scenario. By contrast, in the VESP, when people experienced the same conditions, 15 out of 17 subjects preferred the fourth option to minimize total travel time.</p>
<p>Then, when we estimated statistical models for the CASP experiment, we found ramp time was about 1.6 times as onerous as freeway time. In contrast, when we estimated the model in the VESP experiment, we obtained exactly the opposite result, that ramp time is preferred to freeway time. What explains these differences?</p>
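<p>The effect of such a weight is easy to see in a sketch. Using the four alternatives listed above and the 1.6 CASP estimate, ranking by weighted rather than raw time changes which option looks best. The arithmetic below is purely illustrative, not the study&#8217;s estimated model:</p>

```python
RAMP_WEIGHT = 1.6  # CASP estimate: one ramp minute felt like 1.6 freeway minutes

# (ramp delay, in-motion time) in minutes for the four alternatives above.
options = [(0, 20), (2, 15), (4, 12), (6, 10)]

raw      = [r + m for r, m in options]
weighted = [round(RAMP_WEIGHT * r + m, 1) for r, m in options]

print(raw)       # [20, 17, 16, 16] -- the fourth option (tied with the third) minimizes total time
print(weighted)  # [20.0, 18.2, 18.4, 19.6] -- the second looks best once ramp time is penalized
```

<p>Penalizing ramp minutes shifts the best choice away from the option with the longest ramp wait, which is consistent with the CASP subjects avoiding the free-flow alternative.</p>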
<p>The first explanation is the contrast between simultaneity and sequencing. Under CASP, our subjects were looking at all four options at the same time, taking about 15 minutes to read the screen for multiple presentations of similar questions. As they assessed the options, they recalled their previous experience of travel conditions. The VESP takes longer, about 90 minutes, because we gave the subjects those four presentations in sequence. They were “sitting” at the ramp meter and then “driving” through congested traffic before we asked them questions. So they had a more “real-world” sense of the total time taken.</p>
<p>The experiment also showed that people tend to remember things that happened most recently. Under VESP, what you remember at the end of the trip is that you have just been through stop-and-go traffic. And you think to yourself, “I don’t like stop and go traffic,” whereas the ramp meter was a long time ago. Furthermore, the stop-and-go traffic in the simulator may be more (or less) intense than how any particular traveler experiences traffic. Under CASP, subjects remember their actual work trips, but they are removed from the immediate sensation of driving. Finally, we found that it matters whether travelers have a real “goal” in the experiment. Under VESP, travelers are in a driving simulator because the researcher tells them so, not because they have any real goal at the end of it. This makes them focus more on pure time.</p>
<p><a href="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/09/Levinson-chart-02.jpg" rel="prettyphoto[13587]"><img class="aligncenter size-full wp-image-13590" alt="ramp wait times" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/09/Levinson-chart-02.jpg" width="792" height="612" /></a></p>
<p>In a <a title="link to third Levinson experiment" href="http://nexus.umn.edu/Papers/DeterminantsOfRouteChoice.pdf" target="_blank">third experiment</a>, we examined the value of information and route choice &#8212; this time under real-world conditions. We asked people to drive from the McNamara Alumni Center on the University of Minnesota-Minneapolis campus to the Cathedral in St. Paul by different routes. Each person was asked to go down one route and come back on another route; they then had to go down a third route and back on a fourth route. After each trip, they had to rate those routes. The routes were I-94, a depressed (and, to most observers, a depressing) freeway connecting Minneapolis and St. Paul; University Avenue, a retail-industrial street that had seen better days; Grand Avenue, a pedestrian-oriented retail street; Summit Avenue, a beautiful boulevard lined with some of the most expensive homes in St. Paul, including the Governor’s Mansion; and Marshall and Selby Avenues, a mix of residential and industrial areas. We had students collect data and give instructions at each end of the experiment. We also provided some drivers the &#8220;expected&#8221; travel time of the route. The data collection was led by <a title="link to Nee page" href="http://blog.bn.ee/projects" target="_blank">Brendan Nee</a>, who is now a principal at <a title="link to Blink Tag" href="http://blinktag.com" target="_blank">Blink Tag</a> and a designer of a number of traveler information tools.</p>
<p>In addition, we had a GPS unit in each car and randomized who took which route in which sequence. We estimated models that would predict which route people would prefer; we also asked what route drivers would prefer for commuting, for shopping trips, for entertainment, etc., with the idea that commuting trips would present a different set of preferences than those for discretionary trips without the rigorous time constraints. This experiment formed much of the empirical evidence of my student Lei Zhang’s <a title="link to Zhang dissertation" href="http://www.lei.umd.edu" target="_blank">dissertation</a>, which developed a “Behavioral User Equilibrium,” in contrast to the Wardropian User Equilibrium. The idea is that users are not minimizing travel time. Rather, they act on preferences for a larger set of factors, and these behavioral factors need to be discovered empirically, through a theory known as Search, Information, Learning, and Knowledge acquisition. This SILK Theory presents a different paradigm for modeling route choices than traditional User Equilibrium, since it is a positive, empirical approach that describes what people actually do instead of what we think they should do.</p>
<p>What did our research find? Drivers will switch to a new route if the difference in time is great, or if the difference in time is not so great but the pleasure obtained from the route is higher. Drivers also prefer familiar routes and consider aesthetics. People do not like stopping at stop signs, but they do like driving on routes that they are familiar with. Finally, people like to have information about how long it is going to take &#8212; even if the information has only a passing resemblance to reality.</p>
<p><em>Why misperception matters</em></p>
<p>People systematically misperceive time. Sometimes they think places are farther away than they really are, and at other times closer. Freeways seem to take less time than they really do, while local streets seem longer. In part, this has to do with “task complexity,” or the “mental transaction costs” involved in traveling. When I need to make a lot of small driving and navigation decisions, as on a signalized route with lots of turns, I need to turn my focus more often to driving. Each time I engage my conscious brain in traveling decisions, I am occupied by traveling thoughts. Altogether, these decisions make the trip seem longer.</p>
<p>Other factors include temporal relevance (is the trip important?), temporal expectancies (what do I think the travel time will be?), temporal uncertainty (how reliable is my estimate of travel time?), affective elements (what is the emotional state of the traveler?), absorption (am I paying attention to the task at hand?), and arousal (how physically activated am I, and am I on drugs?). But when I can drive on an uncongested freeway, I can avoid many such thoughts. Driving is less salient. Time passes faster. As the expression goes, “time flies when you are having fun.”</p>
<p>In <a title="link to time perception study" href="http://nexus.umn.edu/Papers/PerceivedWaitingTime.pdf" target="_blank">one study</a> we ran examining travel time perception, we wanted to compare how people perceive and value travel time while waiting at red lights compared to moving on surface streets. A graduate student, <a title="link to Wu page" href="http://www.csupomona.edu/~ce/Faculty/XWu.html" target="_blank">Xinkai Wu</a>, now a professor at Cal Poly Pomona, created a simulation where drivers would see a traffic signal ahead, and get stuck behind a car that waited for the red light. Annoyed, they keep on waiting for up to two minutes. (Imagine wanting to play a driving video game, and getting stuck at a red light.)</p>
<p>We had a set of scenarios. In one scenario, drivers would wait 120 seconds on the minor route but face no delay at two subsequent traffic signals. In another, they would wait only 30 seconds at the first light, but 60 seconds at a second traffic light and 60 seconds at a third. We found that perceived and actual waiting time were virtually identical for the first 30 seconds, but from 30 seconds up to 120 seconds, actual waiting time was higher than perceived waiting time. At 120 seconds, the trend was for perceived time to overtake actual time; however, that was the cut-off for the experiment, so perception beyond that point would require more data. We can say for sure, though, that the annoyance level after 120 seconds of waiting was more than four times higher than after 30 seconds. Furthermore, people hated stops.</p>
<p>Of course, all of this depends on how you frame the question, what you ask, and what travelers were expecting. As noted above, comparing a computer-administered stated preference survey with one conducted in a driving simulator completely flipped preferences for ramp meters.</p>
<p>We also <a title="link to network-time study" href="http://nexus.umn.edu/Papers/NetworkStructureAndTravelTimePerception.pdf" target="_blank">studied</a> a set of data that was collected for other purposes to gauge how network structure affects time perception, and how people report travel time. For instance, the network in downtown Minneapolis consists of a very tight grid of streets, so the block sizes are relatively small. In contrast, more modern suburbs like Woodbury are very circuitous and less well connected.</p>
<p>We measured the network structure along the actual route travelers pursued and compared reported times with our best estimate of measured travel times on their actual routes using GPS data. We then placed travelers into two groups, those who underestimated their travel time and those who overestimated their travel time, to see whether each experienced a difference in the network structure. We also measured network continuity – that is, how often you change routes – in the belief that if you have more discontinuity in your network, you are more likely to overestimate travel time because you spend more time thinking about it.</p>
<p>Similarly, if you run into more intersections, you are more likely to overestimate the time. Each time you have to stop, or think about stopping because there is potentially oncoming traffic, that is a mental transaction cost that increases how long you think about traveling, and thus how long you think you are traveling. However, when your shortest path is along freeways, which have fewer decision points, you are more likely to underestimate travel time. Ultimately, the accuracy of travel time perception depends on traffic signal waits, network structure, and the kind of route you are taking.</p>
<p>In sum, people misperceive the travel time on the road network all the time. We can predict general factors that help explain misperception, but we cannot predict any one person’s individual perception. On average, however, we can see that in certain conditions some people are more likely to overestimate (or underestimate) their time.</p>
<p>What these studies show is that to explain and predict the choices people make, we do not need better mathematical algorithms for finding the shortest path, but behaviorally based route choice procedures. Transportation analysts should think about route choice not only as a mathematical problem of calculating the shortest path in a network, but also in terms of what people value and what they perceive about the network, both of which affect individual decisions.</p>
<p><em>Selfishness at what price?</em></p>
<p>Wardrop developed not only his First Principle of User Equilibrium, but also a Second Principle of System Optimality: the average journey time is kept to a minimum. This requires that every traveler act in accordance with society’s best interest, which is something that no individual can calculate. The ratio between the total system travel time associated with a user equilibrium traffic pattern and that of the system optimal travel pattern has been dubbed “The Price of Anarchy” by Tim Roughgarden, who has applied the idea to computer networks. This “price” measures the inefficiency of autonomous (or selfish) control in a system, compared to a theoretically best central control.</p>
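<p>To make the Price of Anarchy concrete, here is a minimal sketch in Python of Pigou's classic two-route example &#8212; a standard textbook illustration, not a network from our research. One route has a fixed travel time; the other slows as more traffic uses it.</p>

```python
# Pigou's two-route example (textbook illustration, not the article's data).
# Route A: constant travel time of 1, regardless of flow.
# Route B: travel time equal to the fraction x of traffic using it.

def total_time(x_b):
    """Total system travel time when fraction x_b of one unit of traffic takes B."""
    x_a = 1.0 - x_b
    return x_a * 1.0 + x_b * x_b  # flow * time, summed over both routes

# User equilibrium: selfish drivers all take route B, since its time
# (at most 1) never exceeds route A's constant time of 1.
ue_time = total_time(1.0)   # 1.0

# System optimum: minimizing total time gives x_b = 1/2.
so_time = total_time(0.5)   # 0.75

price_of_anarchy = ue_time / so_time
print(price_of_anarchy)     # 1.333... (the well-known 4/3 bound for linear delays)
```

<p>In this stylized network, selfish routing costs the system a third more travel time than central direction would; on a real network like the Twin Cities, as described below, the gap turns out to be far smaller.</p>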
<p>When choosing a route, selfish users see the costs they incur but not the costs they impose on others. If we somehow persuaded travelers to make route decisions that considered the cost they impose on others &#8212; their marginal cost &#8212; we could achieve a minimal total cost for the system. In economics, the classic theoretical mechanism for this is called a Pigouvian Tax, which charges the polluter for the negative externalities imposed on others (that is, the difference between the social marginal cost and the social average cost). In this case, the externality is congestion: the travel time imposed by a vehicle on all other vehicles in excess of what would be borne in that vehicle’s absence. Travelers facing both travel times and this type of tax would choose routes such that the User Equilibrium (UE) solution equals the System Optimal (SO) one.</p>
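<p>A small numerical sketch shows how such a toll closes the gap. The network here is a stylized two-route example with illustrative cost functions of my own, not our planning model: one route has a fixed travel time of 1, and the other's travel time grows with its flow.</p>

```python
# Marginal-cost (Pigouvian) tolling on a stylized two-route network
# (illustrative cost functions, not the article's model).
# Route A: fixed travel time of 1. Route B: travel time t_B(x) = x,
# where x is the fraction of traffic on B.

def t_b(x):
    return x  # the private cost a driver on route B experiences

def pigouvian_toll(x):
    # Social marginal cost of B is d/dx [x * t_B(x)] = 2x, so the
    # externality each driver imposes on the rest is 2x - x = x.
    return 2 * x - t_b(x)

def equilibrium_flow(with_toll):
    """Bisect for the flow on B where its perceived cost equals route A's
    fixed time of 1 (Wardrop's first principle)."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        cost = t_b(mid) + (pigouvian_toll(mid) if with_toll else 0.0)
        if cost < 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(equilibrium_flow(with_toll=False), 3))  # 1.0: everyone crowds B
print(round(equilibrium_flow(with_toll=True), 3))   # 0.5: the system optimum
```

<p>Charging each driver exactly the delay imposed on others makes the selfish equilibrium coincide with the system optimum &#8212; the theoretical appeal of congestion pricing.</p>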
<p>Using traffic assignment models, we compared system-optimal and user-equilibrium flows and travel times for the Minneapolis-Saint Paul regional planning network, assuming total traffic flow between origins and destinations were unaffected by our distortion of route prices. We found the SO assignment had a 1.7% overall time savings, and a slightly higher average speed (63.2 km/h vs. 61.8 km/h). Perhaps surprisingly, it also had somewhat more total vehicle kilometers traveled (9.37M vs. 9.33M), as drivers had to take longer routes to avoid imposing congestion on others.</p>
<p>So what does this mean? The price of anarchy &#8212; letting drivers choose their own routes rather than being centrally directed &#8212; is relatively small, under 2 percent. It turns out it is much more important to get people to choose an efficient time of day to travel than to worry about micro-managing which route they select.</p>
<p>What would this look like? We could impose time-varying prices to discourage demand when it is highest, and encourage demand at off-peak periods. This is what High Occupancy/Toll lanes do, as well as transit systems that have peak and off-peak fares. This is also done on some toll facilities now. Other schemes, like the London Congestion Charge, have two prices: free or tolled, depending on time of day. This approach can be as refined as we want, with prices changing every hour, every five minutes, or even continuously. The prices might change in real time, or change according to a fixed and posted schedule.</p>
<p>Nobel-winning economist William Vickrey laid the groundwork for this approach when he developed the first version of the bottleneck model, which showed how varying prices would allow people to make trade-offs between being on time (at a higher toll) and being early or late (at a lower toll, but a higher cost in what transportation researchers call “schedule delay”).</p>
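<p>That trade-off can be sketched numerically. The toll schedule, penalty rates, and times below are hypothetical numbers of mine, not Vickrey's derivation: each traveler simply picks the arrival time that minimizes toll plus schedule-delay cost.</p>

```python
# A toy version of the schedule-delay trade-off in the bottleneck model.
# All prices and penalty rates here are hypothetical illustrations.

PREFERRED = 9 * 60   # preferred arrival: 9:00 am, in minutes after midnight

def toll(t):
    """Hypothetical toll schedule: $6 at the peak, falling 5 cents per minute."""
    return max(0.0, 6.0 - 0.05 * abs(t - PREFERRED))

def best_arrival(beta, gamma):
    """Arrival minute minimizing toll + schedule delay, where beta and gamma
    are the per-minute costs of arriving early and late, respectively."""
    def cost(t):
        early = max(0, PREFERRED - t)
        late = max(0, t - PREFERRED)
        return toll(t) + beta * early + gamma * late
    return min(range(7 * 60, 11 * 60), key=cost)

flexible = best_arrival(beta=0.03, gamma=0.40)  # does not mind being early
punctual = best_arrival(beta=0.20, gamma=0.40)  # hates schedule delay

print(flexible)  # 420: arrives at 7:00, two hours early, toll-free
print(punctual)  # 540: arrives at 9:00 on time, paying the $6 peak toll
```

<p>The varying toll sorts travelers by how much they value punctuality: those who can shift pay with time, while those who cannot pay with money.</p>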
<p>The simplest <a title="link to bottleneck experiment" href="http://nexus.umn.edu/Papers/Microfoundations.pdf" target="_blank">version</a> of this has two players. Imagine two boats racing for a canal lock. When they arrive at the same time, only one can go through first; the other has to wait. The one that makes it through imposes a schedule delay on the one that waited. But if they arrived at different times, there would be no direct schedule delay, even though one might not get into the canal at its preferred time. So if we appropriately price simultaneous arrivals, we will discourage them. When the number of players goes up &#8212; say, to 2,000 people instead of 2 &#8212; coordination is better achieved through posted price signals than through conversation and negotiation. Varying prices by time of day is what congestion pricing is about: putting a higher price on the times that are most desired, and lower prices on the less desired times.</p>
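<p>The two-boat story can be written down as a small game. The payoffs below are illustrative numbers of my own, not the design of the linked experiment: both boats prefer the peak slot, and arriving together means a coin flip over who queues through the lock.</p>

```python
# The canal-lock story as a two-player game (illustrative payoffs).

DELAY_COST = 4     # cost of queuing a lock cycle; a collision splits it 50/50
OFFPEAK_COST = 3   # cost of showing up at your less-preferred time
SLOTS = ("peak", "offpeak")

def cost(mine, other, peak_price):
    """Expected cost to one boat, given both arrival slots and the peak charge."""
    if mine == "offpeak":
        return OFFPEAK_COST
    collision = DELAY_COST / 2 if other == "peak" else 0.0
    return peak_price + collision

def nash_equilibria(peak_price):
    """Strategy pairs where neither boat gains by switching slots."""
    eq = []
    for a in SLOTS:
        for b in SLOTS:
            a_ok = all(cost(a, b, peak_price) <= cost(x, b, peak_price) for x in SLOTS)
            b_ok = all(cost(b, a, peak_price) <= cost(x, a, peak_price) for x in SLOTS)
            if a_ok and b_ok:
                eq.append((a, b))
    return eq

print(nash_equilibria(peak_price=0))  # [('peak', 'peak')]: both collide
print(nash_equilibria(peak_price=2))  # staggered arrivals are now the equilibria
```

<p>With no price, colliding at the peak is the stable outcome; a modest peak charge makes staggered arrivals the only equilibria, with no negotiation between the players required.</p>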
<p>There are also other ways to achieve this end. On most roads, it is assumed no one owns the travel time, and so we get congestion. But if the right to travel at a given time were viewed as something like a property right, we could auction it to the highest bidder and avoid congestion. This would follow the strategy of establishing property rights to avoid externalities, as suggested by the economist Ronald Coase, who just passed away. In the transportation literature, this has come to be known as “<a title="link to reservation pricing" href="http://transportationist.org/2013/06/18/pricing-with-and-without-reservations" target="_blank">reservation pricing</a>.” Just as you should not expect to be seated if you show up unexpectedly at a popular restaurant that takes reservations, you should not expect to use a high-demand bottleneck facility on the road without making arrangements in advance.</p>
<p>Of course, this kind of pricing is much more complicated with a real-time system like transportation, and it is likely that some queuing is required. This ensures there is someone waiting to take advantage of the next gap that opens. The alternative would be that the facility remains under-utilized for part of the time, which has its own costs. Even restaurants that reserve tables sometimes make you wait a little bit for their immediate convenience, so they can maximize the productivity of their staff.</p>
<p>Unfortunately, congestion pricing remains more in the realm of theory than practice. While there are a few programs &#8212; notably Singapore, London, and Stockholm &#8212; they are not wide enough in scope or variable enough in price to end congestion. If more such systems are implemented, cities may copy their peers, and pricing could become standard in all large metropolitan areas. Technically, all of these systems work well and reduce congestion compared to the alternative. But politically, they have been difficult to emulate. New York City tried and failed, and no other US city has been willing to do something quite so radical.</p>
<p>Another possible deployment path for congestion pricing is through what is called a Vehicle Mileage Tax, or a Mileage-based User Fee. Gas tax revenues, which provide a large share of road funding, have been declining for a long time in the US, due to the leveling off of demand for driving, as well as better fuel economy in cars. The simplest solution is to raise the gas tax, which solves an immediate problem, but not the longer term one. While hybrid gasoline-electric vehicles (like the Toyota Prius) still pay some gas tax, plug-in electrics (like the Tesla, Chevy Volt, or Nissan Leaf) pay almost none. Yet they still use the roads. Although they are presently a small share of the market, that share is likely to grow.</p>
<p>Some states are beginning to think about how to charge EVs for the use of roads, just as gasoline-powered vehicles are charged based on a gas tax. Once a device (basically a sophisticated odometer) is placed in cars tracking the miles traveled, it can also track when those miles are traveled (and where, with a GPS), and vary the rate by time-of-day. The State of Washington now taxes EVs $100 per year to offset the lack of gas tax revenue. Oregon is conducting a large scale test of the Vehicle Mileage Tax, allowing 5,000 volunteers to pay by the mile and have their gas tax rebated.</p>
<p>These experiments and policy shifts are a harbinger of change in how we pay for highways, but more importantly, the way we use highways. One day, our children will laugh at the foolishness of how their parents used to wait in traffic, breathe in fumes, and waste their time when they could have been doing something else.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.symposium-magazine.com/understanding-the-irrational-commuter/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Why Write the History of Capitalism?</title>
		<link>http://www.symposium-magazine.com/why-write-the-history-of-capitalism-louis-hyman/</link>
		<comments>http://www.symposium-magazine.com/why-write-the-history-of-capitalism-louis-hyman/#comments</comments>
		<pubDate>Tue, 31 Dec 2013 01:05:43 +0000</pubDate>
		<dc:creator><![CDATA[admin]]></dc:creator>
				<category><![CDATA[Back Issues]]></category>
		<category><![CDATA[July 2013 Edition]]></category>
		<category><![CDATA[RSS]]></category>
		<category><![CDATA[Symposium Magazine]]></category>
		<category><![CDATA[capitalism]]></category>
		<category><![CDATA[finance]]></category>
		<category><![CDATA[Louis Hyman]]></category>
		<category><![CDATA[U.S. history]]></category>

		<guid isPermaLink="false">http://symposium-magazine.com/symposium_magazine/?p=18</guid>
		<description><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/07/82838978_10-150x150.jpg" class="attachment-thumbnail wp-post-image" alt="bank run" style="margin-bottom: 15px;" /></div>A new generation of scholars is rewriting the story of capitalism by shaking off the old assumptions of both the Left and Right. &#160; Note: This article was originally published on July 8, 2013. Earlier this spring, I received a phone call from a reporter at The New York Times. Since I have written a couple books on the history of American personal debt, the occasional inquiry from journalists was not out of place, but usually they want to hear about the five best financial tips for success, not “real” history. This particular journalist, Jennifer Schuessler, asked me a very odd question: What does it mean to write the history of capitalism? I was dumbfounded. I paused. I asked her where she had even heard that term. She evaded the answer — “oh, it’s in the air” — but I began to tell her about where I thought the burgeoning subfield had come from, peppering my response with “agency,” “contingency” and other history jargon. She told me she could translate. As I spoke, I kept wondering why she cared. After all, The New York Times does not usually run stories on the subfields of academic disciplines, especially history. So you can imagine my surprise when I woke up the next Sunday and saw the front-page headline: “In History Departments, It’s Up With Capitalism.” For days, it was the most emailed story on the Times web site, with hundreds of people suddenly weighing in to comment on what capitalism meant. The discussion forums were, in many ways, more revealing than the article itself. Internet trolls had their say, but I was struck much more by the forums’ threads of disagreement. 
Many readers pointed out what they thought all the scholars had missed or excluded, all in an effort to determine whether we were pro-corporate apologists funded by big money (no) or communist “fifth columnists” (a more interesting charge, but again, no). For me, the ad hominem attacks were less telling than the fact that there was simply a fresh discussion of capitalism. For most of the readers who weighed in, capitalism is totally explained by either Karl Marx or Adam Smith (with the occasional John Maynard Keynes or Joseph Schumpeter tossed in). That is, capitalism is a system that can be universally explained through one theory or the other. Either you understand it or you do not. Either you read the right author or you are an ignoramus. In this view, the history of capitalism is simply the logical unfolding of a natural law, like an apple falling from a tree. As one reader put it, “a history of capitalism would be as revelatory as a ‘history of gravity.’” If only events befell us as predictably as Isaac Newton’s proverbial apple. History is not about proving a universal theory, but seeing how change occurs over time. As a scholarly practice, history is about explaining how events actually played out, with all their attendant unruliness. The essential problem is not to primly define capitalism like a schoolmarm, but to think about why capitalism, which appears to be so simple, evades easy definitions. And in the last decade, there has been a renewed interest among historians in not only challenging existing definitions, but in historicizing that very untidiness (much to the consternation of nominalists everywhere). As the United States emerges from the most severe financial crisis since the Great Depression, the sudden urgency is not difficult to understand. Booms and busts buffet us with alarming frequency. 
But it is important to note that the term “history of capitalism” began to assume a currency in the historical profession sometime in the mid-2000s, between the tech crash and the Great Recession. While the Recession has sparked renewed interest from the public, the new work preceded 2008 and marked an important shift that was not just intellectual but generational. For two generations, almost no historians who wanted to make a name for themselves worked on economic questions. New Left scholars of the 1960s and 1970s emphasized movements that fought for social change (labor, women, and African-Americans). The postmodern shift of the 1980s and 1990s pushed traditional subjects of economic history out of the field, and with it the stillborn subfield of cliometrics – a quantitative approach to economic history. If a scholar wrote about the history of business, or even worse, businessmen, he or she seemed to betray right-wing tendencies. If you wrote about actual businesses, many on the Left felt it was only to celebrate their leaders, the way that most historians wrote celebratory histories of the oppressed. Some stalwarts remained (of all political persuasions), but on the whole, they were marginalized. By contrast, for the generation of graduate students that came of age in the late 1990s and 2000s, the world looked very different. Social movements had either won &#8212; or lost &#8212; decades earlier. Radical reform, in the midst of seemingly unending economic stagnation, seemed a fantasy. Most importantly, American capitalism, as of 1989, had beaten Soviet communism. The either/or distinctions of the Cold War seemed less relevant. The questions that motivated so much of social history seemed naïve. The old question “Why is there no socialism in America?” became “Why do we even talk about socialism at all since we are in America?” We knew endless amounts about deviationist Trotskyites but nothing about hegemonic bankers. 
This gap came from the belief that there was very little to know. Alfred Chandler’s The Visible Hand was the only business history book most American graduate students of history continued to read. And it reaffirmed everything that the New Left thought about capitalism: that it was inevitable, mechanical, efficient, and boring. Capitalists operated with an inexorable logic, whereas the rest of us were “contingent agents” pursuing our free will. If pressed, few scholars would have put this assumption in these words, but it colored the questions that people asked. “Hegemony,” a term appropriated from Antonio Gramsci by cultural studies scholars in the 1970s, became diluted into...]]></description>
				<content:encoded><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/07/82838978_10-150x150.jpg" class="attachment-thumbnail wp-post-image" alt="bank run" style="margin-bottom: 15px;" /></div><p><em>A new generation of scholars is rewriting the story of capitalism by shaking off the old assumptions of both the Left and Right.<span id="more-18"></span></em></p>
<p>&nbsp;</p>
<p><em>Note: This article was originally published on July 8, 2013.</em></p>
<p>Earlier this spring, I received a phone call from a reporter at <i>The New York Times.</i> Since I have written a couple books on the history of American personal debt, the occasional inquiry from journalists was not out of place, but usually they want to hear about the five best financial tips for success, not “real” history.</p>
<p>This particular journalist, Jennifer Schuessler, asked me a very odd question: What does it mean to write the history of capitalism? I was dumbfounded. I paused. I asked her where she had even heard that term. She evaded the answer — “oh, it’s in the air” — but I began to tell her about where I thought the burgeoning subfield had come from, peppering my response with “agency,” “contingency” and other history jargon. She told me she could translate.</p>
<p>As I spoke, I kept wondering why she cared. After all, <i>The New York Times</i> does not usually run stories on the subfields of academic disciplines, especially history. So you can imagine my surprise when I woke up the next Sunday and saw the front-page headline: “In History Departments, It’s Up With Capitalism.” For days, it was the most emailed story on the <i>Times</i> web site, with hundreds of people suddenly weighing in to comment on what capitalism meant.</p>
<p>The discussion forums were, in many ways, more revealing than the article itself. Internet trolls had their say, but I was struck much more by the forums’ threads of disagreement. Many readers pointed out what they thought all the scholars had missed or excluded, all in an effort to determine whether we were pro-corporate apologists funded by big money (no) or communist “fifth columnists” (a more interesting charge, but again, no).</p>
<p>For me, the ad hominem attacks were less telling than the fact that there was simply a fresh discussion of capitalism. For most of the readers who weighed in, capitalism is totally explained by either Karl Marx or Adam Smith (with the occasional John Maynard Keynes or Joseph Schumpeter tossed in). That is, capitalism is a system that can be universally explained through one theory or the other. Either you understand it or you do not. Either you read the right author or you are an ignoramus. In this view, the history of capitalism is simply the logical unfolding of a natural law, like an apple falling from a tree. As one reader put it, “a history of capitalism would be as revelatory as a ‘history of gravity.’”</p>
<p>If only events befell us as predictably as Isaac Newton’s proverbial apple. History is not about proving a universal theory, but seeing how change occurs over time. As a scholarly practice, history is about explaining how events actually played out, with all their attendant unruliness. The essential problem is not to primly define capitalism like a schoolmarm, but to think about why capitalism, which appears to be so simple, evades easy definitions. And in the last decade, there has been a renewed interest among historians in not only challenging existing definitions, but in historicizing that very untidiness (much to the consternation of nominalists everywhere).</p>
<p>As the United States emerges from the most severe financial crisis since the Great Depression, the sudden urgency is not difficult to understand. Booms and busts buffet us with alarming frequency. But it is important to note that the term “history of capitalism” began to assume a currency in the historical profession sometime in the mid-2000s, between the tech crash and the Great Recession. While the Recession has sparked renewed interest from the public, the new work preceded 2008 and marked an important shift that was not just intellectual but generational.</p>
<p>For two generations, almost no historians who wanted to make a name for themselves worked on economic questions. New Left scholars of the 1960s and 1970s emphasized movements that fought for social change (labor, women, and African-Americans). The postmodern shift of the 1980s and 1990s pushed traditional subjects of economic history out of the field, and with it the stillborn subfield of cliometrics – a quantitative approach to economic history. If a scholar wrote about the history of business, or even worse, businessmen, he or she seemed to betray right-wing tendencies. If you wrote about actual businesses, many on the Left felt it was only to celebrate their leaders, the way that most historians wrote celebratory histories of the oppressed. Some stalwarts remained (of all political persuasions), but on the whole, they were marginalized.</p>
<p>By contrast, for the generation of graduate students that came of age in the late 1990s and 2000s, the world looked very different. Social movements had either won &#8212; or lost &#8212; decades earlier. Radical reform, in the midst of seemingly unending economic stagnation, seemed a fantasy. Most importantly, American capitalism, as of 1989, had beaten Soviet communism. The either/or distinctions of the Cold War seemed less relevant. The questions that motivated so much of social history seemed naïve. The old question “Why is there no socialism in America?” became “Why do we even talk about socialism at all since we are in America?” We knew endless amounts about deviationist Trotskyites but nothing about hegemonic bankers.</p>
<p>This gap came from the belief that there was very little to know. Alfred Chandler’s <i>The Visible Hand</i> was the only business history book most American graduate students of history continued to read. And it reaffirmed everything that the New Left thought about capitalism: that it was inevitable, mechanical, efficient, and boring. Capitalists operated with an inexorable logic, whereas the rest of us were “contingent agents” pursuing our free will. If pressed, few scholars would have put this assumption in these words, but it colored the questions that people asked. “Hegemony,” a term appropriated from Antonio Gramsci by cultural studies scholars in the 1970s, became diluted into silly analyses of advertising. In some sense, historians believed that they “got it” when they read Marx or Smith, and there was nothing much left to say.</p>
<p>My generation was shaped by all of those New Left social movement historians, taking race/gender/class as the essential lens. Business archives look very different when you are trained by reading Judith Butler. Banks look different when approached like Michel Foucault. This type of history starts by assuming that people on the margins matter, that culture is essential, and that questions of gender and racial power cannot be divorced from questions of class. Capitalism must be written from margin to center, to borrow a title from bell hooks. This history, however, must be written, even if the people we write about are not our heroes (something my generation never really had).</p>
<p>When capitalist institutions such as banks and corporations are treated as real places with real people, the stories begin to change. The imperatives of profit remain, but the choices on how to make that profit, if at all, begin to look much less inevitable. Moreover, it becomes impossible to ignore the ways in which those choices are shaped, not only by inter-firm competition, but also by culture and politics. Though important, profit becomes only one factor among many guiding the choices of executives, whose decisions matter more than perhaps anyone in determining our everyday lives, especially those on the bottom.</p>
<p>In short, scholars like me, who would become historians of capitalism, came to it backwards. As an undergraduate at Columbia, my labor history class with Joshua Freeman was standing room only in a large auditorium. By contrast, when I took a class on the history of capitalism as an undergraduate with J.W. Smit, there were only four students. He was amazing, but such courses were far outside the norm. When my undergraduate thesis advisor, Elizabeth Blackmar, told me I should stop studying labor and start studying capital (my thesis was on the radical collision of syndicalism and prohibition in the “No Beer, No Work” Movement of 1919), I looked at her as if she were an alien. She was right, but only over time, in graduate school, did I realize that to understand the history of labor, I really needed to understand the history of capital.</p>
<p>Nearly everyone I know now who identifies as a historian of capitalism had a similar awakening. Kim Phillips-Fein, a historian of business leaders, supply-siders and financial crises, trenchantly wrote that “in another generation we would all have been labor historians.” As graduate students, we felt isolated from the normal kinds of projects that excluded business and finance. We found each other haphazardly, often in archives, when we asked each other about our work. I first met Julia Ott, now my long-term collaborator, while we were waiting out a thunderstorm at the National Archives in Washington, D.C. I had not met a self-described “financial historian” before I met her, and it sounded like the most boring thing in the world. But later, as I started to write more about bond markets, I began to think of myself as one, too (and neither of us is <i>that </i>boring). Still, when I told people that I worked on the history of personal debt in the early 2000s, the response I most often received was a glassy-eyed stare of boredom. (Before the crash, no one wanted to talk about mortgage-backed securities. Trust me.)</p>
<p>Friendship begot friendship, even across generations, as people who felt isolated in the 1980s and 1990s, such as Blackmar and Richard John, now found themselves as the bridge to older historiographies of political economy that took the power of capitalist institutions seriously. Historians who had been working on these questions for years saw a surge in interest. Conferences, small ones at first, organized by graduate students, became slowly bigger, until the 2012 national American history conference had “Frontiers of Capitalism and Democracy” as its main theme.</p>
<p>Simply showing that capitalism had changed over time is in itself a major shift, as the responses in the discussion forums of <i>The New York Times</i> reminded me. Capitalism is not the end of history—as Francis Fukuyama famously put it at the end of the Cold War—it is our history. The changes in capitalism demand explanation. Even in just our lifetimes, we have seen how basic processes of capitalism, like work and investment, have been altered by policy, culture and invention. Topics such as inequality, unemployment, and debt crowd our newspapers and blogs.</p>
<p>Key to all of this was the curious divide between economists and historians, who would seem to naturally share our interest in economic history. By the 1990s, economists held enormous sway in academia, with their robust models, high salaries, and public profiles. Americans, at least in elite forums, actually listened to them. We humanists ceded the public sphere, retreating to obscure journals but confident that critical theory was still much hipper than math, even if the White House did not call us.</p>
<p>The voices of dissent from the market orthodoxy suddenly found new opportunities after the Great Recession. After years of economic stagnation in the United States, we can no longer blindly accept the hypothesis that the free market is efficient in the long run. Opinions that flourished on the margins could now acquire a currency in the middle. As historians love to observe, most economists have failed to provide an explanation that makes sense to people. Stories, in most situations, are more powerful than regressions. Historians clearly should triumph over economists; after all, Americans hate math as much as they love the History Channel.</p>
<p>Yet historians have failed in their attempt to teach this lesson to a broader public. Readers love stories, but the narratives that we have provided about capitalism have been all but ignored. Some historians are still trying to impress people with clever jargon. Others cling to the puffed-up language of Marxism, or think that to discuss how the economy works is to countenance its operations, as if we become apologists whenever we discuss anything controversial. Mostly, the problem is less one of politics than imagination. We have not fully recognized that the stakes have changed. We are living in a time of tremendous possibility to fashion new ways of explaining the economy.</p>
<p>The history of capitalism certainly uses statistics (and well it should), but what makes it compelling are its stories of real people. Policymakers decide to change regulations. Business leaders take risks in bold ventures. Workers actually manage to resist huge corporations. Economic theory, for instance, would tell us that depressions are the worst time to strike and organize. Yet the Flint Sit-Down Strike of 1936 took place in the middle of the Great Depression. A group of autoworkers took on General Motors, then the most powerful corporation in the world, and won. That reality, more than any theory, is what makes the history of capitalism different from economic history. What matters most is what cannot be entirely predicted. In this sense, the most compelling history is about entrepreneurs who challenge market equilibrium and common sense.</p>
<p>Nearly all of our economic theories about development emerge from our histories of capitalist growth over the past 500 years. Only by understanding capitalism&#8217;s development can we hope to spur development in emerging economies and steer developed economies onto a path of sustainable growth. Above all else, historians must remind us that things change, even capitalism. In some sense, this idea is more radical than any millenarian communist tract. While the basic rules of capitalism might appear fixed (excess profits ought to be invested, work needs to be organized, and private property needs protecting), the forms it can take are all but endless.</p>
<p>Even in the last two centuries, just in our country, the varieties of capitalism reveal how truly protean even simple ideas like “investment” can be. For example, the riskiest investments of the early nineteenth century were factories, while normal investment went into merchant ventures. The trip could be insured. Multiple friends (and it was always personal) could be brought together to split a ship and a cargo, and after the trip, the ship could be sold and the profits divided. How could a factory be divided? When would its “trip” end? The long time horizons just seemed too risky. If you wanted to invest in production, the safe bet was not factories, but slaves. Slaves could work. Slaves could have children. With the expanding frontier, slaves could be profitably sold. If one wanted to borrow money, slaves could be easily mortgaged, or even securitized. That factories, which we think embody capitalist investment, were in some sense the wild fringe of the 1820s and 1830s complicates everything we think we know about capitalism.</p>
<p>New Left historians knew this bit of history as well as we do. The difference is less one of fact than one of interpretation. In this sense, the “history of capitalism” is perhaps less a break with than a continuation of New Left historiography &#8212; as much as every new generation likes to overthrow the last. Agency still matters to us, but we confine it to the powerful few who shaped commerce and industry. We ask more questions about firms, which still have power today, than about movements, which do not. Agency, when we see it, is a problem to explain rather than an assumption.</p>
<p>Would we wish that modern capitalism had evolved in some other way? Of course. But the historian&#8217;s task is to confront sober reality, not fashion heroic sagas. In our reality, ordinary people can make real changes only under extraordinary circumstances. The Flint Sit-Down Strike can happen, but rather than treating it as just another case of everyday agency, we should understand it as something special so that its lessons can be applied. Luckily, archives always offer more instruction in the specificity of the past, even as they push us to question our assumptions about how capitalism works. Choices were and are made every day, if not by everyone, determining not only capitalism&#8217;s past but its future as well. The history of capitalism is not a fad, but something that we should think about, so that we can make better choices &#8212; when we have them &#8212; in the future.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.symposium-magazine.com/why-write-the-history-of-capitalism-louis-hyman/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>A Scientist Goes Rogue</title>
		<link>http://www.symposium-magazine.com/a-scientist-goes-rogue/</link>
		<comments>http://www.symposium-magazine.com/a-scientist-goes-rogue/#comments</comments>
		<pubDate>Tue, 31 Dec 2013 01:02:15 +0000</pubDate>
		<dc:creator><![CDATA[admin]]></dc:creator>
				<category><![CDATA[August 2013 Edition]]></category>
		<category><![CDATA[RSS]]></category>
		<category><![CDATA[Symposium Magazine]]></category>

		<guid isPermaLink="false">http://symposium-magazine.com/?p=6676</guid>
		<description><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/08/perlstein1-150x150.jpeg" class="attachment-thumbnail wp-post-image" alt="perlstein" style="margin-bottom: 15px;" /></div>Can social media and crowdfunding sustain independent researchers? &#160; Note: This article was originally published on August 5, 2013. Ethan Perlstein is a contradiction: an utterly modern researcher who hearkens to the 19th century tradition of the “gentleman scientist.” Perlstein, a self-dubbed evolutionary pharmacologist with a Ph.D. in molecular and cellular biology from Harvard, is one of the most vocal members of the so-called independent scientist movement. As with many trailblazers, he had no intention of starting a revolution; rather, as he puts it, “my back was to the wall.” That wall presented itself in the summer of 2012, when he was completing his fifth year as a postdoctoral fellow at Princeton. “I’d already gone through one year of the application cycle for assistant professorships and ran into a buzz-saw, because for one job opening, there are 300-400 applicants. I was preparing for a second bid,” he recalled. “I was told, ‘two years is nothing on the academic job market these days; you could be spending four years on the market. One postdoc is not enough these days; you need two postdocs.’ I just realized, I don’t want to do this. I want to do my science.” The seeds for going rogue had been planted already on Twitter, where scientists were openly and honestly kvetching in a way that only really happens on social media. Some tweets were grim, such as: “80% of PhDs in biology don’t end up on the tenure track.” (For more on this, see Perlstein’s clever blog post, “The Tenure Games.”) Until January 2011, he had no interest in Twitter, but once he created an account and started connecting with other scientists, he began to learn about alternative tracks for people in his situation. 
“People were talking about new ways to publish and review papers after publication, crowdfunding and all these alternative things, so I educated myself on these trends.” He started to study the history of independent scientists and discovered that “it goes back to the gentleman scientist tradition, like Darwin. I thought, I don’t really want to resurrect that tradition of the male-dominated, aristocratic leader class, but they did come up with huge discoveries.” As the term gentleman scientist implies, those people had money to play around with. Perlstein said, “The biggest stumbling block for someone who’s not a theorist in biology is that it’s so expensive to maintain a lab, and the supplies to use in that lab.” Perlstein’s specific area of biomedical research is particularly costly. So he made a very web 2.0 move: crowdfunding. Perlstein cited a tweet he read recently, which called crowdfunding the “gateway drug” of the independent scientist. “I think there’s a ring of truth to that,” he added. In September 2012, Perlstein decided to start a meth lab for mice to find out where radioactive amphetamines accumulate in mouse brain cells. He launched a crowdfunding campaign on the site Rockethub, a kind of Kickstarter for science for academic projects. The tag line, “Crowdfund my meth lab, yo,” was accompanied by a photo from Breaking Bad, about a teacher who runs a meth lab. The goal: to raise $25,000. It was hip. It was bold. It was youthful. It was as good as an example as any of how wide the gap is between academic scientists and independent scientists, reminiscent of Steve Jobs circa 1975 versus IBM of the same era. It is a safe bet that some of his former peers thought the move populist or unbecoming of an academic, particularly with the Breaking Bad allusions. But it worked: He raised $25,460 from over 400 people. 
And yes, as with Kickstarter, he offered little thank-you gestures to his donors, including “a 3-D printed model of methamphetamine the size of an iPhone that kind of looks like a dreidl.” He prints it himself on a 3-D printer. It is blue, a nod to Breaking Bad: “In the show they talk about blue crystal,” he explained. Trinkets aside, Perlstein is publishing the results of his research on his web site, in real time, rather than sitting on data for journal publication. One of the most controversial aspects of Perlstein’s independent scientist concept is that research transparency is key. “Crowdfunding could be one of the pillars supporting independent scientists, but it only works if you tell people what you are doing with their money.” That is where many scientists tempted to take the Perlstein route would stop short and possibly turn back. There appear to be too many risks, including the possibility of someone else stealing the idea. Perlstein laughs in the face of such fears. “Being an independent scientist is self-liberation from the constant paranoia that someone will steal [your idea]. My answer to people is that if you’re working in an area that is so faddish, you should think about working in a different area.” It’s not just chutzpah. He also thinks that stealing ideas is not as feasible as people seem to think. “We’re taking a technique in pharmacology that was developed decades ago. Someone could have done this at any time since then, but no one has. My talking about it now is not going to make someone say, ‘We’re going to do it.’ And even if they were to try to scoop us, they’re not going to do it overnight. They’re going to go through the same growing pains of getting preliminary data.” And here is the part where he starts to channel the spirit of his 19th century independent scientist forbearers: He is a purist. 
He’s out to find cures for rare diseases, among which he includes Cystic Fibrosis, Tay-Sachs, and Parkinson’s&#8211;all of which are relatively neglected by big drug companies. In the end, it is the science that matters. “Of course, someone could be doing in parallel and in stealth what we’re doing, but who cares? If we do the same experiment and independently get the same result, that’s the scientific method. Isn’t that the whole point? Getting the same result no...]]></description>
				<content:encoded><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/08/perlstein1-150x150.jpeg" class="attachment-thumbnail wp-post-image" alt="perlstein" style="margin-bottom: 15px;" /></div><p><em>Can social media and crowdfunding sustain independent researchers?</em><span id="more-6676"></span></p>
<p>&nbsp;</p>
<p><em>Note: This article was originally published on August 5, 2013.</em></p>
<p>Ethan Perlstein is a contradiction: an utterly modern researcher who hearkens to the 19th century tradition of the “gentleman scientist.”</p>
<p>Perlstein, a self-dubbed evolutionary pharmacologist with a Ph.D. in molecular and cellular biology from Harvard, is one of the most vocal members of the so-called independent scientist movement. As with many trailblazers, he had no intention of starting a revolution; rather, as he puts it, “my back was to the wall.”</p>
<p>That wall presented itself in the summer of 2012, when he was completing his fifth year as a postdoctoral fellow at Princeton.</p>
<p>“I’d already gone through one year of the application cycle for assistant professorships and ran into a buzz-saw, because for one job opening, there are 300-400 applicants. I was preparing for a second bid,” he recalled. “I was told, ‘two years is nothing on the academic job market these days; you could be spending four years on the market. One postdoc is not enough these days; you need two postdocs.’ I just realized, I don’t want to do this. I want to do my science.”</p>
<p>The seeds for going rogue had been planted already on Twitter, where scientists were openly and honestly kvetching in a way that only really happens on social media. Some tweets were grim, such as: “80% of PhDs in biology don’t end up on the tenure track.” (For more on this, see Perlstein’s clever <a title="link to &quot;tenure games&quot; blog post" href="http://www.perlsteinlab.com/blog/the-tenure-games" target="_blank">blog post</a>, “The Tenure Games.”)</p>
<p>Until January 2011, he had no interest in Twitter, but once he created an account and started connecting with other scientists, he began to learn about alternative tracks for people in his situation. “People were talking about new ways to publish and review papers after publication, crowdfunding and all these alternative things, so I educated myself on these trends.”</p>
<p>He started to study the history of independent scientists and discovered that “it goes back to the gentleman scientist tradition, like Darwin. I thought, I don’t really want to resurrect that tradition of the male-dominated, aristocratic leader class, but they did come up with huge discoveries.”</p>
<p>As the term gentleman scientist implies, those people had money to play around with. Perlstein said, “The biggest stumbling block for someone who’s not a theorist in biology is that it’s so expensive to maintain a lab, and the supplies to use in that lab.” Perlstein’s specific area of biomedical research is particularly costly. So he made a very web 2.0 move: crowdfunding.</p>
<p>Perlstein cited a tweet he read recently, which called crowdfunding the “gateway drug” of the independent scientist. “I think there’s a ring of truth to that,” he added.</p>
<p>In September 2012, Perlstein decided to start a <a title="Link to Perlstein Lab" href="http://perlsteinlab.com" target="_blank">meth lab</a> for mice to find out where radioactive amphetamines accumulate in mouse brain cells. He launched a crowdfunding campaign on the site <a title="Link to Rocket Hub" href="http://www.rockethub.com/" target="_blank">Rockethub</a>, a kind of Kickstarter for academic science projects. The tag line, “<a title="Link to Perlstein Lab Crowdfund post" href="http://www.perlsteinlab.com/round-table/crowdfund-my-meth-lab-yo" target="_blank">Crowdfund my meth lab, yo</a>,” was accompanied by a photo from <em>Breaking Bad</em>, about a teacher who runs a meth lab. The goal: to raise $25,000.</p>
<p>It was hip. It was bold. It was youthful. It was as good an example as any of how wide the gap is between academic scientists and independent scientists, reminiscent of Steve Jobs circa 1975 versus IBM of the same era. It is a safe bet that some of his former peers thought the move populist or unbecoming of an academic, particularly with the <em>Breaking Bad</em> allusions. But it worked: He raised $25,460 from over 400 people. And yes, as with Kickstarter, he offered little thank-you gestures to his donors, including “a 3-D printed model of methamphetamine the size of an iPhone that kind of looks like a dreidl.” He prints it himself on a 3-D printer. It is blue, a nod to <em>Breaking Bad</em>: “In the show they talk about blue crystal,” he explained.</p>
<p>Trinkets aside, Perlstein is publishing the results of his research on his web site, in real time, rather than sitting on data for journal publication. One of the most controversial aspects of Perlstein’s independent scientist concept is that research transparency is key. “Crowdfunding could be one of the pillars supporting independent scientists, but it only works if you tell people what you are doing with their money.”</p>
<p>That is where many scientists tempted to take the Perlstein route would stop short and possibly turn back. There appear to be too many risks, including the possibility of someone else stealing the idea. Perlstein laughs in the face of such fears. “Being an independent scientist is self-liberation from the constant paranoia that someone will steal [your idea]. My answer to people is that if you’re working in an area that is so faddish, you should think about working in a different area.”</p>
<p>It’s not just chutzpah. He also thinks that stealing ideas is not as feasible as people seem to think. “We’re taking a technique in pharmacology that was developed decades ago. Someone could have done this at any time since then, but no one has. My talking about it now is not going to make someone say, ‘We’re going to do it.’ And even if they were to try to scoop us, they’re not going to do it overnight. They’re going to go through the same growing pains of getting preliminary data.”</p>
<p>And here is the part where he starts to channel the spirit of his 19th century independent scientist forebears: He is a purist. He&#8217;s out to find cures for rare diseases, among which he includes cystic fibrosis, Tay-Sachs, and Parkinson&#8217;s&#8211;all of which are relatively neglected by big drug companies.</p>
<p>In the end, it is the science that matters. “Of course, someone could be doing in parallel and in stealth what we’re doing, but who cares? If we do the same experiment and independently get the same result, that’s the scientific method. Isn’t that the whole point? Getting the same result no matter how many times you repeat the experiment? I don’t care about getting a paper published. Those considerations don’t matter to me anymore.”</p>
<p>Is he a mad scientist?</p>
<p>He could be. But the evidence increasingly suggests that he will end up being on the right side of history. He points out that the independent scientist model will only become easier over time. “Technological changes inevitably accelerate the process and make it irreversible,” he said. “In my field, in biology, <a title="Link to Nature piece on DNA sequencing" href="http://www.nature.com/scitable/blog/bio2.0/high_throughput_sequencing_and_cost" target="_blank">the cost of sequencing DNA</a> has dropped a millionfold in the last six years.” He notes that this rate of acceleration is much faster than in other areas of technology. “You think Moore&#8217;s Law is incredible, [with transistor counts doubling every two years], but the cost of sequencing DNA is dropping like a rock. [There is] an app on the iPhone that can sequence genomes.”</p>
<p>The independent scientist movement is not a fad, said Perlstein, as long as problems for scientists in academia continue. “There are too many people rising up in the pyramid scheme of academia to be absorbed by positions,” he said. “Independent science is a safety valve that relieves the pressure building from all this excess human capital. The independent path could absorb them, but that&#8217;s not going to edify academia. They will keep charging along as if nothing ever happened.” This will probably change over time, said Perlstein, as independent scientists start to prove successful in areas like fundraising. But in the meantime, “academia&#8217;s not going to do anything to course-correct.”</p>
<p>And Perlstein is not waiting for that to happen in any case. Independent science is his chosen path; he is not going about this with the expectation that it will impress a university faculty into hiring him. And though he admits he has “burned bridges” with the academy, he does not think he will be considered a renegade for long. “This trend is not going to stop. It has revolutionary moments, like all movements, but this train is out of the station.”</p>
]]></content:encoded>
			<wfw:commentRss>http://www.symposium-magazine.com/a-scientist-goes-rogue/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>Why U.S. Financial Hegemony Will Endure</title>
		<link>http://www.symposium-magazine.com/why-u-s-financial-hegemony-will-endure/</link>
		<comments>http://www.symposium-magazine.com/why-u-s-financial-hegemony-will-endure/#comments</comments>
		<pubDate>Tue, 31 Dec 2013 01:00:11 +0000</pubDate>
		<dc:creator><![CDATA[admin]]></dc:creator>
				<category><![CDATA[October 2013 Edition]]></category>
		<category><![CDATA[RSS]]></category>
		<category><![CDATA[Symposium Magazine]]></category>
		<category><![CDATA[network theory]]></category>
		<category><![CDATA[political science]]></category>

		<guid isPermaLink="false">http://www.symposium-magazine.com/?p=13901</guid>
		<description><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/10/dollar-150x150.jpg" class="attachment-thumbnail wp-post-image" alt="dollar" style="margin-bottom: 15px;" /></div>The great financial crisis of 2008 convinced many in the markets and policy arena that the U.S. had reached its high-water mark of dominance and that its decline was sealed. As they saw it, American financial prominence had proven so destabilizing that other countries had to insulate themselves against “profligate” U.S. behavior. Furthermore, the crisis dramatically reduced U.S. attractiveness to global capital, weakening its financial power to such an extent that the U.S. would be severely constrained in its ability to finance government debt at home and pursue geopolitical projects abroad.

As a result, many in this camp have anticipated a restructuring of the international financial system away from New York and toward China and other emerging markets. According to the World Economic Forum, Hong Kong displaced the United States as the world’s leading financial center last year. Many would agree with the economist Arvind Subramanian, who has argued that by 2030 China will be the world’s sole superpower, and that it is already the “world’s largest banker.”

These assessments see power as a result of the internal attributes of national economies: large economies with attractive financial sectors have power, while weaker ones do not. Accordingly, the U.S. decline in the share of global trade and income, and its domestic financial instability, should diminish its influence. But this focus fails to consider the ways in which the global financial network is, in fact, a complex and adaptive system. Power within this system does not depend solely on domestic attributes, but on the distribution of financial relationships that exists globally. In other words, the most well-connected economies, not just the biggest, are the most powerful. By extension, change within this structure does not follow a linear process, and economies that are initially more advantaged will continue to grow as the system develops. ]]></description>
				<content:encoded><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/10/dollar-150x150.jpg" class="attachment-thumbnail wp-post-image" alt="dollar" style="margin-bottom: 15px;" /></div><p><em>The United States not only continues to dominate global finance but has become even more central since the 2008 crisis. How did this happen?</em><span id="more-13901"></span></p>
<p>&nbsp;</p>
<p><em>Note: This article was originally published on October 7, 2013.</em></p>
<p>The great financial crisis of 2008 convinced many in the markets and policy arena that the U.S. had reached its high-water mark of dominance and that its decline was sealed. As they saw it, American financial prominence had proven so destabilizing that other countries had to insulate themselves against “profligate” U.S. behavior. Furthermore, the crisis dramatically reduced U.S. attractiveness to global capital, weakening its financial power to such an extent that the U.S. would be severely constrained in its ability to finance government debt at home and pursue geopolitical projects abroad.</p>
<p>As a result, many in this camp have anticipated a restructuring of the international financial system away from New York and toward China and other emerging markets. According to the World Economic Forum, Hong Kong displaced the United States as the world’s leading financial center last year. Many would agree with the economist Arvind Subramanian, who has argued that by 2030 China will be the world’s sole superpower, and that it is already the “world’s largest banker.”</p>
<p>These assessments see power as a result of the internal attributes of national economies: large economies with attractive financial sectors have power, while weaker ones do not. Accordingly, the U.S. decline in the share of global trade and income, and its domestic financial instability, should diminish its influence. But this focus fails to consider the ways in which the global financial network is, in fact, a complex and adaptive system. Power within this system does not depend solely on domestic attributes, but on the distribution of financial relationships that exists globally. In other words, the most well-connected economies, not just the biggest, are the most powerful. By extension, change within this structure does not follow a linear process, and economies that are initially more advantaged will continue to grow as the system develops.</p>
<p>The difference between these two approaches is significant. When we conceptualize the international financial system as a network, we see that the U.S. has become more central since 2007, not less. Rather than shifting from West to East, global financial actors have responded to the crisis by reorganizing around American capital to a remarkable extent. This is partially due to proactive responses to the crisis by policymakers such as the Federal Reserve, but it is also the result of factors outside the U.S. Above all, American capital markets remain attractive because complex networks contain strong path dependencies, which reinforce the core position of prominent countries while keeping potential challengers in the periphery. That is to say, policymakers and market players were limited in the decisions they could take because of factors that had already been locked in. As a result, the structure of the global financial system keeps the U.S. at the core and will continue to do so unless the entire network is fragmented, as it was during the 1930s, when Great Britain lost its dominance.</p>
<p>Some who do see continuing U.S. financial resiliency contend that American power works to the disadvantage of smaller countries. Indeed, they are correct that when a crisis occurs in the core &#8212; where the U.S. remains &#8212; the effects are felt throughout the system. But they miss the fact that American prominence also provides important stabilization mechanisms that can contain crises. To explain this, we need to look at what network scientists call “topology,” which refers to the organization of the components of a network, whether we are looking at a computer system or a financial system.</p>
<p>Once we view the international financial system in this context, we see that it is robust when facing crises in peripheral countries, but fragile when facing crises occurring in the core. This explains why the U.S. subprime crisis destabilized the global economy, while upheavals such as the 1990s East Asian crisis did not. Even the euro zone crisis has remained localized, to this point. A network perspective also explains how policy interventions by the U.S. prevented the collapse of the global system, thus ensuring that U.S. centrality persists. Finally, a network model should make us more cautious about promoting policies meant to erode U.S. financial hegemony. In fact, American centrality kept crises in peripheral countries from spreading globally, and the U.S. government demonstrated both the capacity and the willingness to pursue monetary and fiscal policies to moderate crises emanating from its own banking system. Returning to a world in which the structure of global financial relationships devolves outside the U.S. would therefore reintroduce a type of systemic risk not seen since the 1930s.</p>
<p><em>Describing the global banking network</em></p>
<p>What does the international banking network look like? First, the U.S. is strongly central, with over 70 percent of all countries placing a substantial amount of their overseas portfolio assets in the U.S., according to the Bank for International Settlements. After that, the distribution of international holdings is widely dispersed. The U.K. is the next most central, with about 35 percent of all countries significantly tied to its banking system. But most countries are only weakly tied there, even those with large financial sectors. Moving to Asia, Hong Kong &#8212; which supposedly passed New York and London as the world’s preeminent financial center &#8212; attracts a large amount of finance from fewer than 5 percent of the world’s economies. Mainland China barely exists in these networks, because the yuan is not convertible and foreign investment is tightly regulated.</p>
<p>Moreover, this network topology has reinforced itself over time, as ties to the U.S. have become increasingly strong. That progression paused briefly as a result of the 2007-08 shock, but quickly resumed. Perhaps most astonishingly, the U.S. has actually become more central in the aftermath of the crisis. The current international banking system is what we would call “hierarchical” – with the U.S. at its core and most other countries’ banking systems in the periphery – and displays dynamics of preferential attachment that reinforce this kind of “system hierarchy” through time.</p>
<p><em>When U.S. centrality helps, and when it hurts</em></p>
<p>What are the implications of a hierarchical financial network for international financial stability, and what does it mean for policy responses to crises? U.S. centrality, on the whole, is positive for system stability because the U.S. – with its deep, liquid banking system – is best poised to absorb and manage banking losses. When a peripheral country experiences a banking crisis, it transmits its losses to foreign banks with which it has cross-border obligations, mainly U.S. financial institutions. In such cases, peripheral-country losses represent only a relatively small percentage of global banking assets, so U.S. banks are generally able to assume losses without collapsing. In this way, U.S. centrality acts as an important buffer to most interstate banking crisis contagion.</p>
<p>Consequently, a U.S.-centered global banking system is robust enough to withstand crises emanating from peripheral countries. This claim may seem counter-intuitive as well as controversial. But if we understand network dynamics, we can see the benefit of an international financial system centered on the U.S. And given preferential attachment, the persistence of U.S. centrality speaks to its positive effect: if U.S. centrality were in fact destabilizing, this network configuration would have quickly disintegrated after its initial formation.</p>
<p>That said, under certain conditions, U.S. centrality could be quite destructive to the stability of the global financial system. Because U.S. banking obligations represent a sizable percentage of bank balance sheets in most countries, most economies are vulnerable to crises that originate in America. Unlike the U.S., these peripheral countries are unable to absorb losses because U.S. obligations represent a large portion of their portfolios. Thus, a U.S.-centered global banking system is fragile to crises emanating from the core.</p>
<p><em>What this means for crisis response</em></p>
<p>This leads us to the next question: Why did the U.S. subprime crisis, which rippled throughout the international banking system, not ultimately destroy it? Indeed, as our analysis shows, the U.S. actually became more central to the system in the aftermath of the crisis. Ultimately, the answer is a combination of three factors: the lack of better alternatives, policy responses in the core, and the actions that banks took to protect themselves against risk. All of these responses strengthened ties between the U.S. and peripheral national banking systems.</p>
<p>First, while the subprime crisis illustrated the vulnerabilities in the U.S. banking system, financial actors continue to face limited alternatives when determining where to place assets other than in the U.S. The euro zone, deeply embroiled in its own debt crises, is not an attractive alternative. China, despite its economic growth, is an unviable center because its currency is not internationalized and its banking system is closed. Hong Kong and Switzerland are too small to provide sufficient liquidity or offer the kind of policy capacity to provide stability in the face of crisis. Other large emerging economies such as Brazil, Russia, and India have not developed large, internationally integrated financial markets. Given a lack of fit alternatives, the U.S. remains central.</p>
<p>Second, once the crisis unfolded, the U.S. pursued domestic and international policies that helped to stem financial losses. The Federal Reserve took swift action to unfreeze credit markets and injected massive amounts of liquidity into the global system. In an unprecedented move, the Fed also lent billions of dollars to distressed foreign firms during the crisis: 10 of the top 20 borrowers from Fed emergency lending programs were European banks; an 11th was Japanese. Fed Chairman Ben Bernanke, an economist who understands how important global cooperation is in the face of systemic crisis, opened swap lines with almost every significant central bank in the world. Unlike Britain in the 1930s, the U.S. was able to pursue policies to reinforce the structure of the system because of its central position within that structure; it was willing to do so because it knew staying at the center confers economic and geopolitical advantages.</p>
<p>Finally, banks actually increased their exposure to the U.S. banking system in the aftermath of the crisis. They did so because, despite the crisis, American banks are still considered safer than others. The U.S. banking system is deep and liquid. As discussed above, the U.S. has the capacity to moderate crises by supporting systemically important banks, and its government demonstrated its willingness to pursue such stabilization policies in the face of crisis. This increased reliance on the U.S. illustrates an important feature of a network characterized by preferential attachment: once a hierarchical network is created, destruction of this system would obliterate substantial wealth, so banks have incentives to shore up the core and re-constitute the network. This dynamic is nicely demonstrated by market responses to the 2011 downgrade of the U.S. sovereign debt rating. While any other country would have seen interest rates spike after a downgrade, interest rates on U.S. government bonds actually fell.</p>
<p><em>Revising lessons learned</em></p>
<p>Many pundits claim the lesson of the Great Recession is that American financial dominance is both dangerous and waning. Our conceptualization of the international financial system as a complex network leads us to a wholly different set of conclusions.</p>
<p>First, the optimal policy response to crises depends on where within the network the crisis occurs. A crisis in the core requires quite different actions than does a crisis in the periphery: the latter has very little chance of generating systemic crisis and can be managed by institutions like the International Monetary Fund. While the hardship of people suffering in Greece and other countries hit by crises is very real, these peripheral calamities do not pose significant threats to the system as a whole. Crises in the U.S., by contrast, can quickly destabilize the entire financial system. Accordingly, the policy response to these crises must be designed to support the structure of the system, not change it. This requires significant global policy coordination designed to maintain liquidity and prevent systemic collapse.</p>
<p>Second, the actions that peripheral countries take to insulate themselves against crisis often paradoxically reinforce U.S. centrality. This dynamic is generally a net positive for network stability so long as macroeconomic imbalances are managed carefully. In Asia, for example, countries amassed foreign exchange reserves as self-insurance against negative current account shocks – a move that further entrenched U.S. centrality, since the dollar is the primary global reserve currency. As discussed above, the dollar’s primacy allows the U.S. to respond to negative shocks with large injections of liquidity into the global financial system. U.S. financial centrality, along with its demonstrated willingness to respond to shocks with the policies necessary to keep markets liquid, raises investors’ confidence in its financial system while tying their financial health to the strength of this network. This leads investors to continue to buy U.S. debt, further allowing the U.S. to borrow cheaply even after ratings agencies downgrade or threaten to downgrade U.S. debt. Indeed, our argument complements economist Barry Eichengreen’s conclusion in his book <em>Exorbitant Privilege</em> that the U.S. dollar will persist as the main global reserve currency due to lack of a fit alternative.</p>
<p>Third, our argument about “preferential attachment” suggests that the emergence of a better alternative is a rare phenomenon, since positive feedback perpetually structures the network around the center. Moreover, even if such an alternative did exist, we would not see a change to the network structure unless a shock emanating from the core was large enough to destroy the topology of international relationships. Even when shocks are large enough to destroy the system, core countries may pursue policies that prevent network collapse. Thus, preferential attachment makes the emergence of a new financial hegemon highly unlikely. At the same time, because so many countries are highly connected to the center, hierarchical networks are susceptible to hugely disruptive tail events with systemic effects. One important implication of our research is that it may be incredibly difficult to predict when we might experience a destabilizing crisis &#8212; but we do know that such a crisis must originate in the U.S.</p>
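The "preferential attachment" dynamic the authors borrow from network science can be made concrete with a toy growth simulation. This is an illustrative sketch only, not the authors' actual model; the function name and all parameters (number of countries, links per entrant) are invented for demonstration. Each new national banking system links to existing systems with probability proportional to how connected they already are, and a dominant hub emerges:

```python
import random

def preferential_attachment(n_countries=200, links_per_entrant=2, seed=42):
    """Toy Barabasi-Albert-style growth: each entrant attaches to
    existing nodes with probability proportional to their degree
    ("the rich get richer"). Returns a degree count per node."""
    rng = random.Random(seed)
    # Start from a small fully connected core of 3 nodes (3 edges).
    degree = {0: 2, 1: 2, 2: 2}
    # Each node appears in `targets` once per link it holds, so a
    # uniform draw from this list implements preferential attachment.
    targets = [0, 0, 1, 1, 2, 2]
    for new in range(3, n_countries):
        chosen = set()
        while len(chosen) < links_per_entrant:
            chosen.add(rng.choice(targets))
        degree[new] = 0
        for t in chosen:
            degree[t] += 1
            degree[new] += 1
            targets.extend([t, new])
    return degree

degrees = preferential_attachment()
hub = max(degrees, key=degrees.get)
avg = sum(degrees.values()) / len(degrees)
print(f"hub node {hub}: {degrees[hub]} links vs. average {avg:.1f}")
```

Running the sketch shows the positive-feedback loop described above: early, well-connected nodes accumulate links far faster than the average node, and the resulting core-periphery hierarchy persists as the network grows.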
<p>Finally, our network model describes the necessary and sufficient conditions for the persistence and change of systemic orders. It does not mean the U.S. will forever be the center of the international financial system. But for the U.S. to be dethroned, three things need to happen jointly: a severe crisis in the U.S., the inability or unwillingness of the U.S. government to pursue policies necessary to stabilize the system, and the emergence of an alternative that is ready to move into a position at the center. Network dynamics suggest that under these three conditions, a network transformation could happen quite rapidly. Historically, such transitions have been accompanied by political instability and major power conflicts. The most recent transition – from British to American financial centrality – occurred within the context of two World Wars and a Great Depression. It appears that none of these conditions exists at present. Because of the persistence created by preferential attachment, policies aimed at eroding U.S. centrality probably will not work. But if they did, world order would be threatened.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.symposium-magazine.com/why-u-s-financial-hegemony-will-endure/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>History Versus Hagiography</title>
		<link>http://www.symposium-magazine.com/history-versus-hagiography/</link>
		<comments>http://www.symposium-magazine.com/history-versus-hagiography/#comments</comments>
		<pubDate>Mon, 04 Nov 2013 00:22:12 +0000</pubDate>
		<dc:creator><![CDATA[admin]]></dc:creator>
				<category><![CDATA[November 2013 Edition]]></category>
		<category><![CDATA[RSS]]></category>

		<guid isPermaLink="false">http://www.symposium-magazine.com/?p=14065</guid>
		<description><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/11/ventresca-150x150.jpeg" class="attachment-thumbnail wp-post-image" alt="palatucci award" style="margin-bottom: 15px;" /></div>Among the many dilemmas a professional historian will face in the course of his or her work, few are as vexing as the question of how, or even if, moral judgement fits with historical interpretation. This is especially true of inescapably controversial figures or episodes in the past that seem to demand of the historian some moral and ethical insight or conclusion, especially if the past is still alive in contemporary memory. That is, they demand some lesson about the deeper or ultimate meaning of the historical question under study beyond a mere empirical narration of facts or the logical explanation of cause and effect. I will talk about a case &#8212; the Vatican’s role during the Holocaust &#8212; to make some broader points about moral judgement. I do not believe historians ought to avoid it per se, as if historical interpretation were somehow amoral. But we need to understand why it is so important to differentiate the stages of scholarly inquiry, and especially how a full and proper historical interpretation can inform moral judgement about past events and their meanings. We must let history do its essential task of showing what happened and why, so that we can then conduct a reasonable, informed analysis about what might have been, and what ought to be. Few questions are thornier than the issue of papal intervention, or lack thereof, on behalf of persecuted Jews during the Holocaust. Arguably the most contentious claims reflect competing narratives about the presumed role of the pope and the Vatican in rescue and relief initiatives on behalf of Jews, especially in Italy, and Rome in particular. 
Narratives of papal rescue and relief often blur the lines between wartime experiences and their framing in postwar memory. Nowhere is this more evident than in the self-congratulatory narrative attributing to Pius XII a decisive role as “rescuer” – a narrative that the Vatican itself crafted before the war had even ended. Sensitive to charges of papal inaction on behalf of persecuted Jews, senior papal diplomats offered specific examples of the thousands of Jews in Rome &#8212; up to 6,000 &#8212; who had been given “refuge and succor” by the Vatican during German occupation of the city, primarily in the form of material aid, asylum, and safe passage. This narrative also came from Pius XII himself, who utilized self-ascribed claims of rescue and relief to justify his policy of impartiality and cautious public diplomacy. It was also useful in deflecting the constant entreaties reaching the pope during the war, very often from other ecclesiastical authorities, for the Vatican to do more for persecuted European Jews. Immediately after the war, the pope and senior advisors saw diplomatic advantage in publicizing the many public expressions of Jewish gratitude. This, in turn, set the stage for a similar response in the 1960s to Rolf Hochhuth’s The Deputy, a drama about the pope’s role during the Holocaust, first performed in Berlin in February 1963, five years after Pius’ death. Although it was a fictionalized historical account, Hochhuth’s play sparked a dramatic rethinking of Pius XII’s wartime role. More than any single work of sound historical interpretation, Hochhuth’s work cast the indelible image of the wartime pope as a moral coward and political failure whose cautious diplomatic approach played into Hitler’s murderous hands. To this day, Pius apologists are still wrestling with the ghosts stirred by Hochhuth’s Deputy. Typically, they point to the many Jews after the war who expressed gratitude for papal rescue and relief during the Holocaust. 
What these apologists present us with is a selective arrangement of historical fragments, which they construe as persuasive vindication of the wartime pontiff’s decision-making. In this respect, the apologists’ account represents more mythology and hagiography than critical history. The problem permeates scholarship in the field. Indeed, one is struck by how often in the literature on Pius XII we find a juxtaposition of “supporters” and “defenders” pitted against “critics” and “skeptics.” The former make untenable claims that the pope and the Vatican played a decisive role in saving several hundred thousand Jews during the Holocaust. The most exaggerated of these – which even some respectable scholars and the Vatican repeat – have achieved the status of established fact in apologetic circles, all the more because they come from Jewish sources. This camp would have it that upwards of 800,000 Jews were saved during the Holocaust by means of direct or indirect papal intervention. That said, few scholars lend serious credence to this claim, given the specious method by which it was derived, not to mention the apologetic-polemical end to which that inflated figure has been used. However, other longstanding claims of papal assistance are more credible and warrant sustained, critical scholarly attention, if only to place them in a proper context. As I argue in my book, Soldier of Christ, there is ample evidence to show that the pope and his advisors did authorize or tacitly allow papal representatives and ecclesiastical entities around the world to mobilize their resources to help those facing persecution. This was hardly tantamount to a policy or a directive of Jewish rescue and relief, and it certainly does not stand as evidence of an intentional scheme to furtively mobilize church resources on a massive scale to help persecuted Jews. Still, it was a measure of decisive assistance just the same. 
The challenge is finding a framework for calibrating that assistance in quantitative and qualitative ways. As an example of just how complex this question is, we can look to the controversy this past summer over the wartime record of Giovanni Palatucci, an Italian police official long regarded as a righteous rescuer but now implicated by new research as a possible collaborator in the Holocaust. In the span of a few short months, an established version of history was called into question as mythology. On one side we have the established public memory of Palatucci – an ordinary Catholic rescuer who did extraordinary things to save Jewish lives during the Holocaust in...]]></description>
				<content:encoded><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/11/ventresca-150x150.jpeg" class="attachment-thumbnail wp-post-image" alt="palatucci award" style="margin-bottom: 15px;" /></div><p>Among the many dilemmas a professional historian will face in the course of his or her work, few are as vexing as the question of how, or even if, moral judgement fits with historical interpretation. This is especially true of inescapably controversial figures or episodes in the past that seem to demand of the historian some moral and ethical insight or conclusion, especially if the past is still alive in contemporary memory. That is, they demand some lesson about the deeper or ultimate meaning of the historical question under study beyond a mere empirical narration of facts or the logical explanation of cause and effect.</p>
<p>I will talk about a case &#8212; the Vatican’s role during the Holocaust &#8212; to make some broader points about moral judgement. I do not believe historians ought to avoid it per se, as if historical interpretation were somehow amoral. But we need to understand why it is so important to differentiate the stages of scholarly inquiry, and especially how a full and proper historical interpretation can inform moral judgement about past events and their meanings. We must let history do its essential task of showing what happened and why, so that we can then conduct a reasonable, informed analysis about what might have been, and what ought to be.</p>
<p>Few questions are thornier than the issue of papal intervention, or lack thereof, on behalf of persecuted Jews during the Holocaust. Arguably the most contentious claims reflect competing narratives about the presumed role of the pope and the Vatican in rescue and relief initiatives on behalf of Jews, especially in Italy, and Rome in particular. Narratives of papal rescue and relief often blur the lines between wartime experiences and their framing in postwar memory. Nowhere is this more evident than in the self-congratulatory narrative attributing to Pius XII a decisive role as “rescuer” – a narrative that the Vatican itself crafted before the war had even ended.</p>
<p>Sensitive to charges of papal inaction on behalf of persecuted Jews, senior papal diplomats offered specific examples of the thousands of Jews in Rome &#8212; up to 6,000 &#8212; who had been given “refuge and succor” by the Vatican during German occupation of the city, primarily in the form of material aid, asylum, and safe passage. This narrative also came from Pius XII himself, who utilized self-ascribed claims of rescue and relief to justify his policy of impartiality and cautious public diplomacy. It was also useful in deflecting the constant entreaties reaching the pope during the war, very often from other ecclesiastical authorities, for the Vatican to do more for persecuted European Jews.</p>
<p>Immediately after the war, the pope and senior advisors saw diplomatic advantage in publicizing the many public expressions of Jewish gratitude. This, in turn, set the stage for a similar response in the 1960s to Rolf Hochhuth’s <em>The Deputy</em>, a drama about the pope’s role during the Holocaust, first performed in Berlin in February 1963, five years after Pius’ death. Although it was a fictionalized historical account, Hochhuth’s play sparked a dramatic rethinking of Pius XII’s wartime role. More than any single work of sound historical interpretation, Hochhuth’s work cast the indelible image of the wartime pope as a moral coward and political failure whose cautious diplomatic approach played into Hitler’s murderous hands.</p>
<p>To this day, Pius apologists are still wrestling with the ghosts stirred by Hochhuth’s <em>Deputy</em>. Typically, they point to the many Jews after the war who expressed gratitude for papal rescue and relief during the Holocaust. What these apologists present us with is a selective arrangement of historical fragments, which they construe as persuasive vindication of the wartime pontiff’s decision-making. In this respect, the apologists’ account represents more mythology and hagiography than critical history.</p>
<p>The problem permeates scholarship in the field. Indeed, one is struck by how often in the literature on Pius XII we find a juxtaposition of “supporters” and “defenders” pitted against “critics” and “skeptics.” The former make untenable claims that the pope and the Vatican played a decisive role in saving several hundred thousand Jews during the Holocaust. The most exaggerated of these – which even some respectable scholars and the Vatican repeat – have achieved the status of established fact in apologetic circles, all the more because they come from Jewish sources. This camp would have it that upwards of 800,000 Jews were saved during the Holocaust by means of direct or indirect papal intervention.</p>
<p>That said, few scholars lend serious credence to this claim, given the specious method by which it was derived, not to mention the apologetic-polemical end to which that inflated figure has been used. However, other longstanding claims of papal assistance are more credible and warrant sustained, critical scholarly attention, if only to place them in a proper context. As I argue in my book, <em>Soldier of Christ</em>, there is ample evidence to show that the pope and his advisors did authorize or tacitly allow papal representatives and ecclesiastical entities around the world to mobilize their resources to help those facing persecution. This was hardly tantamount to a policy or a directive of Jewish rescue and relief, and it certainly does not stand as evidence of an intentional scheme to furtively mobilize church resources on a massive scale to help persecuted Jews. Still, it was a measure of decisive assistance just the same. The challenge is finding a framework for calibrating that assistance in quantitative and qualitative ways.</p>
<p>As an example of just how complex this question is, we can look to the controversy this past summer over the wartime record of Giovanni Palatucci, an Italian police official long regarded as a righteous rescuer but <a title="link to NYT/Palatucci" href="http://www.nytimes.com/2013/06/20/arts/an-italian-saint-in-the-making-or-a-collaborator-with-nazis.html" target="_blank">now implicated</a> by new research as a possible collaborator in the Holocaust. In the span of a few short months, an established version of history was called into question as mythology. On one side we have the established public memory of Palatucci – an ordinary Catholic rescuer who did extraordinary things to save Jewish lives during the Holocaust in the town of Fiume, now Rijeka, Croatia. On the other, we have the counter-memory of an unassuming functionary who dutifully carried out his administrative tasks on behalf of murderous fascist regimes, to deadly ends.</p>
<p>The Palatucci case illustrates just how the intertwining threads of “the true, the false, and the fictional,” to borrow from the Italian historian Carlo Ginzburg, are brought together to form diverse, even competing, iconographies that selectively represent an otherwise complex historical picture. Consider the story of how Palatucci first came to be recognized formally by Yad Vashem and other authoritative bodies as a “Righteous Among the Nations” for his role in helping Jews in Fiume survive, reportedly through such activities as issuing forged residence and transit documents. It is said that he even hid one couple in his office attic. Palatucci was eventually arrested and tortured by the Gestapo and then imprisoned in Dachau for treason. He died there in February 1945, at the age of 35. It is unclear whether he was executed or died from malnutrition or related illnesses.</p>
<p>Public praise for Palatucci’s role surfaced immediately following the war. Formal ways of memorializing his efforts followed suit. Some Jewish refugees who fled Europe to Palestine in 1939 via Fiume credited Palatucci for their survival. In 1953, they named a street and a park in the Israeli city of Ramat Gan in his honor. In 1955, the Union of the Italian Jewish Communities, a national umbrella organization of Jewish groups in Italy, posthumously awarded Palatucci its gold medal for his efforts. By 1990, the testimonial from at least one survivor, together with other evidence that “hundreds” of Jews were helped directly or indirectly by Palatucci, was enough for Yad Vashem to declare him a <a title="link to yad vashem" href="http://www.yadvashem.org/yv/en/righteous/program.asp" target="_blank">Righteous Among the Nations</a>. This title is dedicated to those “non-Jews who risked their lives to save Jews during the Holocaust.”</p>
<p>Beyond the Jewish community, both secular and ecclesiastical authorities in Italy have offered authoritative endorsements of Palatucci’s wartime role as rescuer. In 1995, the Italian government posthumously awarded Palatucci its Gold Medal for Civil Merit. And, in 2000, Pope John Paul II honored Palatucci as one of the “martyrs” of the twentieth century for his reputed role in Jewish rescue, and for having died a prisoner of the Nazis while holding fast to the virtues of his Christian faith. In 2004, his <a title="link to Palatucci site/primo levi" href="http://www.primolevicenter.org/Essays%26Interviews/Entries/2010/1/8_Giovanni_Palatucci_Between_History_and_Hagiography.html" target="_blank">cause passed the initial diocesan stage of canonization</a>, which means that Palatucci also now bears the solemn honor of “Servant of God” and is a candidate for sainthood.</p>
<p>Despite these honors, doubts about Palatucci’s status as righteous rescuer had been circulating for several years. But the debate took a serious shift earlier this year when researchers affiliated with various reputable institutes claimed that an investigation of the relevant documentation painted a very different picture of Palatucci: Far from blocking the implementation of Italian racial laws in defence of Jews, Palatucci was notoriously diligent in tracking Jewish residents and refugees in and around Fiume, and in enforcing existing racial laws. To substantiate the claim, several scholars point to, among other things, the fact that an estimated 80 percent of Fiume’s 500 Jews in 1943 were deported to Auschwitz, a higher percentage than in any other Italian city. (For more, read Alessandra Farkas, “<a title="link to Farkas story" href="http://www.jta.org/2013/06/17/news-opinion/world/shadows-cast-on-the-heroism-of-italian-schindler" target="_blank">Shadows cast on the heroism of Italian Schindler</a>,” originally in <em>Corriere della Sera</em> but reproduced in English in <em>The Times of Israel</em>, June 14, 2013.)</p>
<p>Amid rising doubt, the Union of the Italian Jewish Communities has asked the Center for Jewish Contemporary Documentation in Milan to set up a research commission to work with established institutions and researchers to sift through a wide range of evidence from various sources to reach some definitive conclusions on the Palatucci case. As historian Michele Sarfatti of the Center for Jewish Contemporary Documentation in Milan observed recently, the problem here is that the public praise, honors, and “memorials” have by and large “preceded historical research.” Accordingly, Yad Vashem, the United States Holocaust Memorial Museum, and even the Vatican have <a title="link to committee on fiume" href="http://www.primolevicenter.org/Essays%26Interviews/Entries/2013/8/27_The_Center_for_Contemporary_Jewish_Documentation_in_Milan_to_Lead_the_Research_Committee_on_Fiume_1938-1945_and_Palatucci.html" target="_blank">pledged to review </a>the “new documentation” to set the record straight, to the extent this is even possible.</p>
<p>Given the inherent complexity involved in any story of rescue during the Holocaust, and the sensitivities at hand when the narrative under scrutiny involves a candidate for sainthood, it is little wonder that the controversy over Palatucci’s place in history and memory, like that of Pope Pius XII, is polarizing. It is telling how the legitimate historical investigations into Palatucci’s role during the Holocaust have quickly turned into a predictably polemical debate involving Pius XII. A case in point is a <a title="link to Foa article" href="http://www.osservatoreromano.va/portal/dt?JSPTabContainer.setSelected=JSPTabContainer%2FDetail&amp;last=false=&amp;path=/news/cultura/2013/143q13-Il-caso-Palatucci--da-GiUSto-delle-Nazioni-.html&amp;title=To%20strike%20the%20Church%20of%20Pius%20XII&amp;locale=fr,%20accessed%20August%2021,%202013" target="_blank">recent piece</a> in the Vatican’s newspaper<em> L’Osservatore Romano</em> by the Italian historian Anna Foa. She wrote that it is understandable in the course of historical study to continue subjecting what she calls “hagiographic interpretations” of Palatucci’s case to heretofore “scarce” historical research. Yet she maintained that this case is really being revisited to “mar” the Church of Pius XII: “[I]n targeting Palatucci, the intention was essentially to hit a Catholic involved in rescuing Jews, a champion of the idea that the Church spared no effort to help Jews.” This, Foa concludes, “is ideology, not history.”</p>
<p>Sadly, as Foa’s comments illustrate, historical study of Palatucci’s role during the Holocaust, like the study of the role of Pius XII and the Catholic Church writ large, will continue to get caught up in the polemical vortex of the so-called “Pius war.” Consequently, serious students of the subject find themselves working within — and sometimes perpetuating — an adversarial-polemical mode of discourse and analysis. Even worse, despite their best intentions, professional historians are susceptible to proffering ideology or a form of advocacy as opposed to historical interpretation. This is a blurring of the lines between moral judgement and historical evaluation, engaging in speculative, counter-factual discussions of what might have been or what ought to have been instead of what was, and why.</p>
<p style="text-align: center;">***</p>
<p>This debate goes to the heart of the question of why we need history. The historian Eric Hobsbawm put it well when he wrote in <a title="link to On History" href="http://www.amazon.com/On-History-Eric-Hobsbawm/dp/1565844688" target="_blank"><em>On History</em></a>, “We swim in the past as fish do in water, and cannot escape from it.” As he saw it, we need history so that we can gain an appreciable “sense of the past” as we make our way in contemporary society. As part of that, understanding change over time frequently demands that we “dismantle” the various historical mythologies, or “mythic history,” that define collective memory of the past.</p>
<p>I am borrowing the term “mythic history” from the historian Philip Jenkins, who argued in <a title="lin to Jenkins book" href="http://www.amazon.com/The-New-Anti-Catholicism-Acceptable-Prejudice/dp/0195176049" target="_blank"><em>The New Anti-Catholicism: The Last Acceptable Prejudice</em></a> that much contemporary criticism of Catholicism – what he termed “attacks” – draws upon history and forms of “mythic history” that frame popular understanding of the Catholic Church’s place in major historical events and processes. The claim warrants serious, critical examination, but before doing so, some definition is in order. Most dictionaries define “myth” as a “figment,” as a belief that may be widely-held yet is essentially “untrue.” The adjective “mythic” can be used to describe something that is “fictitious, untrue.” We might also consider the term in the classical sense of <em>mythos</em>: narratives, stories, and legends that are grounded in concrete historical realities but include elements that are partly untrue or unknowable, or exaggerated and intentionally selective so as to impart some deeper lesson. Taken this way, we can conceive of mythic history as a version of the past that is partly or largely untrue and yet also widely-held and deeply engrained in popular understanding. As Jenkins noted, “there are some historical facts that everyone knows, that are simply too obvious to need explanation.”</p>
<p>What Jenkins means to say is that people think they know the facts; they think that their assumptions about even immensely complex historical realities are complete and accurate no matter how superficial, selective, or even mistaken those assumptions may be. So by virtue of their widespread currency, versions of mythic history have unmistakable influence. This is all the more so when mythic history is produced, transmitted, and sanctioned by established purveyors of cultural authority, be they in government, academe, or so-called media of record. Mythic history finds its most widespread and influential expression in keywords and phrases. Richard Slotkin has described these in his book <a title="link to gunfighter nation" href="http://www.amazon.com/Gunfighter-Nation-Frontier-Twentieth-Century-America/dp/0806130318" target="_blank"><em>Gunfighter Nation</em></a> as “mythic icons” &#8212; a single word or phrase that evokes what in reality is “a complex system of historical associations.”</p>
<p>These icons assume a practical function as rhetorical devices in written or verbal discourse. They come in the form of a single word, short phrase, or simplified interpretation whose primary function is not to explain the past as it essentially was, but to offer a generalized and selective picture of the past to impart some lesson of how things got to be the way they are. This has the effect, obviously, of reducing complex historical realities to simplistic motifs, which then can be used to practical effect in informing and shaping socio-cultural values and discourse.</p>
<p>It might be tempting for professional historians to avoid trading in mythic history and to dismiss these versions as “popular,” as opposed to “academic” versions of the past, and therefore irrelevant to the academy. But such an attitude reflects a kind of tacit (or indeed avowed) elitism that reinforces the cult of specialization that has made much academic history so inaccessible to the broader public. The historian Wilfred McClay was on to something when he said that much academic history written today exhibits a misplaced privileging of “tedious professional jargon” as the measure of credibility and sophistication.</p>
<p>This tendency, together with excessive specialization, has emptied much of our work of what <a title="link to First Things/McClay" href="http://www.firstthings.com/article/2007/01/tradition-history-and-sequoias-35" target="_blank">McClay describes</a> as “an appreciative sense of the past.” In other words, professional historians have given up on the “founding myth” of academic history, the ideal of objectivity, and as a result they can scarcely perceive for themselves an objective, intelligible, meaningful truth about the past, let alone convey such understanding to a popular audience. (I also recommend Peter Novick&#8217;s <a title="link to Novick book/objectivity" href="http://www.cambridge.org/us/academic/subjects/history/history-ideas-and-intellectual-history/noble-dream-objectivity-question-and-american-historical-profession" target="_blank"><em>That Noble Dream: The ‘Objectivity Question’ and the American Historical Profession</em></a>.)</p>
<p>And yet surely it is one of the most basic tasks of the historian’s craft to offer as accurate a picture of the past as possible. As Carlo Ginzburg <a title="link to ginzberg book" href="http://www.ucpress.edu/book.php?isbn=9780520274488" target="_blank">observed</a>, history is an exercise that pertains to everyday life, “untangling the strands of the true, the false, and the fictional which are the substance of our being in the world.” Historians need to rediscover the elemental roots of their craft and contribute to contemporary debates by distinguishing fact from fiction; by dealing in what was, why, and to what effect precisely so as to have something meaningful to contribute to moral and ethical debates.</p>
<p>Another task of the historian is to convey an appreciable sense of the complexity of the past. Historians are fond of saying that the essence of our craft, the real thrill of historical study, lies not merely in uncovering “facts” about the past and stringing them together in narrative interpretations, but in thinking critically and flexibly about the complexity of historical experience. We tell our students to “embrace the complexity,” to embrace even the confusion that may result from starkly contradictory historical interpretations. After all, historical reality, like life, can be messy, filled with ambiguities, uncertainties, and contradictions. (A useful essay on this is Thomas Andrews and Flannery Burke, “<a title="link to Perspectives on History" href="http://www.historians.org/publications-and-directories/perspectives-on-history/what-does-it-mean-to-think-historically" target="_blank">What Does It Mean to Think Historically?</a>” in <em>Perspectives on History</em>, January 2007.)</p>
<p>As the controversies over Pius XII and Giovanni Palatucci show, this is precisely why mythic icons leave out too much, and intentionally so, to be meaningful and properly critical representations of complex historical realities. Clearly, that is not what mythic icons are intended to be. But, then, a kind of <em>caveat emptor</em> for the reading public is in order: beware of mythology or hagiography masquerading as history.</p>
<p>I was asked once by an interviewer where the truth lay in the starkly contradictory interpretations of Pius XII as “Hitler’s Pope” and “Righteous Gentile.” My answer was that the historical reality, the truth as it were, lay somewhere in between. In retrospect, that answer was sorely imprecise and evasive, perhaps even unintentionally misleading. I now would say simply that such labels &#8212; mythic icons &#8212; have no real value as historical categories. If anything, they hinder historical judgment and, with it, the possibility for informed reflection on the ethical and moral dimensions of historical understanding.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.symposium-magazine.com/history-versus-hagiography/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Can Corporations Be Good Citizens?</title>
		<link>http://www.symposium-magazine.com/can-corporations-be-good-citizens/</link>
		<comments>http://www.symposium-magazine.com/can-corporations-be-good-citizens/#comments</comments>
		<pubDate>Sun, 03 Nov 2013 07:05:12 +0000</pubDate>
		<dc:creator><![CDATA[admin]]></dc:creator>
				<category><![CDATA[November 2013 Edition]]></category>
		<category><![CDATA[RSS]]></category>

		<guid isPermaLink="false">http://www.symposium-magazine.com/?p=14058</guid>
		<description><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/11/protest-150x150.jpeg" class="attachment-thumbnail wp-post-image" alt="campaign finance reform rally" style="margin-bottom: 15px;" /></div>The reputation of big business has taken blow after blow in the last few years. The global financial crisis revealed the risks to the economy of Wall Street excess. The Deepwater Horizon oil spill showed the dangers to the environment of corporate decisions that externalized the possibility of serious harm. The explosion of corporate expenditures in the 2012 election cycle indicated that corporations were attempting to exert their influence over our democratic life. Americans are terribly skeptical of big business, and probably increasingly so. According to a 2012 Gallup poll, Americans’ satisfaction with the size and influence of big business is near record lows, and has fallen by 40 percent in the last decade. This skepticism is feeding a lively debate — largely between two camps on the ideological left — about how to take advantage of this moment to rein in corporate power. Although both camps distrust corporations, they are fundamentally at odds over not only possible remedies, but the nature of the problem. The crucial difference is over what might be called corporate “citizenship.” One camp sees corporate power as something that can be used constructively; the other sees it as the evil to be corrected. For decades, there has been a vocal minority of corporate law scholars (including myself) who have challenged the American corporation to broaden its role in society and enlarge the obligations it owes beyond the bottom line. 
These scholars have assailed the norm of shareholder primacy and called on corporations to recognize and act on the interests of all stakeholders &#8212; a view sometimes called “stakeholder theory.” These critics, in effect, call on corporations to act as if they were players not only in the private sphere but in the public one as well. To act, one might say, as citizens. To call on corporations to act as “good corporate citizens” means that they should act as if they have broader obligations to the polity and society that cannot be entirely satisfied by reference to their financial statements. Meanwhile, a separate camp of corporate critics — less academic and more activist — challenges the corporation to stay within a narrow economic sphere. Corporate activity in politics and the public sphere is viewed skeptically, even hatefully. The most pertinent example of these beliefs is the current effort to amend the Constitution to take away corporate “personhood.” The thought of corporations acting as “citizens” — whether for progressive ends or not — is seen as, at best, nonsensical and, at worst, destructive to democracy. This camp also strenuously argues against the 2010 Supreme Court ruling in Citizens United v Federal Election Commission, which unleashed corporate political expenditures, and it has pushed for tougher campaign-finance legislation so that corporate political influence is circumscribed. It has gone unnoticed until now that the work of the pro-corporate citizenship scholars often directly conflicts with the work of the anti-corporate personhood activists. The arguments of those opposing corporate constitutional rights contradict and undermine the efforts of those who call on corporations to take a more active role in society to protect the interests of all corporate stakeholders, and vice versa. 
For the anti-personhood activists, the remedy is to keep corporations within a narrow purview; for the corporate citizenship scholars, the remedy is to ask the corporation to acknowledge and accept a broader range of obligations. The core tenets of the progressive corporate law movement include the principles that shareholders are not supreme, and that corporations should be judged by more than economic measures. Anti-corporate personhood activists, meanwhile, often argue for limiting corporate rights by pointing out that the shareholder owners should be protected from managerial misuse of their funds, and that corporations should not themselves engage politically because they have only economic natures. This latter view surfaced in the Citizens United ruling itself, in which Justice John Paul Stevens penned the lead dissent and argued that corporate speech should be limited to protect shareholders’ investments. He saw shareholders as owners, as “those who pay for an electioneering communication,” and who “invested in the business corporation for purely economic reasons.” Moreover, Stevens argued that corporate political speech did not merit protection because “the structure of a business corporation … draws a line between the corporation’s economic interests and the political preferences of the individuals associated with the corporation; the corporation must engage the electoral process with the aim to enhance the profitability of the company, no matter how persuasive the arguments for a broader … set of priorities.” Stevens even quoted the controversial American Law Institute Principles of Corporate Governance: “[A] corporation … should have as its objective the conduct of business activities with a view to enhancing corporate profit and shareholder gain.” It looks like the opponents of Citizens United are so convinced of the dangers of corporate political activity that they are ready to throw stakeholder theory under the bus as part of their broader fight. 
But the difficulties run the other way as well. Case in point: the work of stakeholder theorists is now being cited to bolster the arguments of those seeking broader constitutional protections for corporations. The best current example is in the context of the recent suits brought by certain corporations to challenge the portion of the Affordable Care Act that requires employers to provide employee health insurance that covers contraceptive care. As many as 60 lawsuits are now pending across the country, and two — one from the Tenth Circuit and one from the Third — have already made it to the certiorari stage at the Supreme Court. These cases turn on the question of whether corporations may assert religion-based conscientious objections to the contraceptive mandate. That question depends in part on whether the corporations can have purposes and obligations that extend beyond the economic sphere. The irony in these cases is that the corporations, asserting an ideologically conservative argument to be free of government regulation, are using arguments often made by progressive stakeholder theorists. For example, in the Tenth Circuit decision upholding...]]></description>
				<content:encoded><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/11/protest-150x150.jpeg" class="attachment-thumbnail wp-post-image" alt="campaign finance reform rally" style="margin-bottom: 15px;" /></div><p>The reputation of big business has taken blow after blow in the last few years. The global financial crisis revealed the risks to the economy of Wall Street excess. The Deepwater Horizon oil spill showed the dangers to the environment of corporate decisions that externalized the possibility of serious harm. The explosion of corporate expenditures in the 2012 election cycle indicated that corporations were attempting to exert their influence over our democratic life.</p>
<p>Americans are terribly skeptical of big business, and probably increasingly so. According to a <a title="link to Gallup poll on big biz" href="http://www.gallup.com/poll/152096/Americans-Anti-Big-Business-Big-Gov.aspx" target="_blank">2012 Gallup poll</a>, Americans’ satisfaction with the size and influence of big business is near record lows, and has fallen by 40 percent in the last decade. This skepticism is feeding a lively debate — largely between two camps on the ideological left — about how to take advantage of this moment to rein in corporate power. Although both camps distrust corporations, they are fundamentally at odds over not only possible remedies, but the nature of the problem. The crucial difference is over what might be called corporate “citizenship.” One camp sees corporate power as something that can be used constructively; the other sees it as the evil to be corrected.</p>
<p>For decades, there has been a vocal minority of corporate law scholars (including myself) who have challenged the American corporation to broaden its role in society and enlarge the obligations it owes beyond the bottom line. These scholars have assailed the norm of shareholder primacy and called on corporations to recognize and act on the interests of all stakeholders &#8212; a view sometimes called “stakeholder theory.” These critics, in effect, call on corporations to act as if they were players not only in the private sphere but in the public one as well. To act, one might say, as citizens. To call on corporations to act as “good corporate citizens” means that they should act as if they have broader obligations to the polity and society that cannot be entirely satisfied by reference to their financial statements.</p>
<p>Meanwhile, a separate camp of corporate critics — less academic and more activist — challenges the corporation to stay within a narrow economic sphere. Corporate activity in politics and the public sphere is viewed skeptically, even hatefully. The most pertinent example of these beliefs is the <a title="link to free speech for people site" href="http://www.freespeechforpeople.org/" target="_blank">current effort</a> to amend the Constitution to take away corporate “personhood.” The thought of corporations acting as “citizens” — whether for progressive ends or not — is seen as, at best, nonsensical and, at worst, destructive to democracy. This camp also strenuously argues against the 2010 Supreme Court ruling in <em>Citizens United v Federal Election Commission</em>, which unleashed corporate political expenditures, and it has pushed for tougher campaign-finance legislation so that corporate political influence is circumscribed.</p>
<p>It has gone unnoticed until now that the work of the pro-corporate citizenship scholars often directly conflicts with the work of the anti-corporate personhood activists. The arguments of those opposing corporate constitutional rights contradict and undermine the efforts of those who call on corporations to take a more active role in society to protect the interests of all corporate stakeholders, and vice versa. For the anti-personhood activists, the remedy is to keep corporations within a narrow purview; for the corporate citizenship scholars, the remedy is to ask the corporation to acknowledge and accept a broader range of obligations. The core tenets of the progressive corporate law movement include the principles that shareholders are not supreme, and that corporations should be judged by more than economic measures. Anti-corporate personhood activists, meanwhile, often argue for limiting corporate rights by pointing out that the shareholder owners should be protected from managerial misuse of their funds, and that corporations should not themselves engage politically because they have only economic natures.</p>
<p>This latter view surfaced in the <a title="link to Citizens United ruling" href="https://www.google.com/search?q=citizens+united&amp;ie=utf-8&amp;oe=utf-8&amp;aq=t&amp;rls=org.mozilla:en-US:official&amp;client=firefox-a#q=citizens+united+opinion&amp;revid=777751116&amp;rls=org.mozilla:en-US:official" target="_blank"><em>Citizens United</em> ruling</a> itself, in which Justice John Paul Stevens penned the lead dissent and argued that corporate speech should be limited to protect shareholders’ investments. He saw shareholders as owners, as “those who pay for an electioneering communication,” and who “invested in the business corporation for purely economic reasons.” Moreover, Stevens argued that corporate political speech did not merit protection because “the structure of a business corporation … draws a line between the corporation’s economic interests and the political preferences of the individuals associated with the corporation; the corporation must engage the electoral process with the aim to enhance the profitability of the company, no matter how persuasive the arguments for a broader … set of priorities.” Stevens even quoted the controversial American Law Institute Principles of Corporate Governance: “[A] corporation … should have as its objective the conduct of business activities with a view to enhancing corporate profit and shareholder gain.”</p>
<p>It looks like the opponents of <em>Citizens United</em> are so convinced of the dangers of corporate political activity that they are ready to throw stakeholder theory under the bus as part of their broader fight. But the difficulties run the other way as well. Case in point: the work of stakeholder theorists is now being cited to bolster the arguments of those seeking broader constitutional protections for corporations. The best current example is in the context of the <a title="link to MYT piece on ACA lawsuits" href="http://www.nytimes.com/2013/09/30/opinion/birth-control-and-a-bosss-religious-views.html?_r=0" target="_blank">recent suits</a> brought by certain corporations to challenge the portion of the Affordable Care Act that requires employers to provide employee health insurance that covers contraceptive care. As many as 60 lawsuits are now pending across the country, and two — one from the Tenth Circuit and one from the Third — have already made it to the certiorari stage at the Supreme Court. These cases turn on the question of whether corporations may assert religion-based conscientious objections to the contraceptive mandate. That question depends in part on whether the corporations can have purposes and obligations that extend beyond the economic sphere.</p>
<p>The irony in these cases is that the corporations, asserting an ideologically conservative argument to be free of government regulation, are using arguments often made by progressive stakeholder theorists. For example, in the <a title="link to 10th circuit decision" href="http://www.ca10.uscourts.gov/opinions/12/12-6294.pdf" target="_blank">Tenth Circuit decision</a> upholding the corporation’s right to be exempted from the mandate, the court noted the existence of “benefit corporations” (a business set up to provide material benefit to society) as evidence that corporations need not be limited to solely economic purposes. A concurring judge, like Stevens, used the ALI Principles as a source of insight, but depended on a different reading: “But no law requires a strict focus on the bottom line, and it is not uncommon for corporate executives to insist that corporations can and should advance values beyond the balance sheet and income statement.”</p>
<p>Meanwhile, the <a title="link to 3rd circuit decision" href="http://www2.ca3.uscourts.gov/opinarch/131144p.pdf" target="_blank">Third Circuit held against the corporation</a>, arguing that an entity that was “created to make money” could not “exercise such an inherently ‘human’ right.” A dissenting judge, however, used stakeholder theory to bolster his point: “It is commonplace for corporations to have mission statements and credos that go beyond profit maximization. When people speak of ‘good corporate citizens’ they are typically referring to community support and involvement, among other things. Beyond that, recent developments in corporate law … undermine the narrow view that all for-profit corporations are concerned with profit maximization alone.”</p>
<p>In short, the efforts of anti-personhood activists not only conflict with stakeholder theory on the conceptual level. In the political arena, too, a tension exists if only because the potential for reform is a finite resource. If we are indeed situated in a moment in which we can question the very framework of how our society views corporations and their obligations, we might make headway on changing the obligations of corporations within corporate governance law, or we might make headway challenging their role in politics. It is difficult to imagine that we could do both. This is especially true, of course, when the arguments of one conflict with the other.</p>
<p>In my view, the anti-corporate personhood movement understates the importance of corporations asserting constitutional rights, at least in some situations. Corporations are not people, to be sure. But neither are unions, churches, Planned Parenthood, the NAACP, Boston College, Random House, MSNBC, or <em>The New York Times</em>. Under several of the amendments proposed, all of these groups — because they are all organized as corporations — would lose constitutional rights. And it is worth emphasizing that more than free speech rights would be lost. They would also lose the right to be free from warrantless searches and from the seizure of property without due process or compensation, as well as the right to a jury trial. Congress could pass a law saying that <em>The New York Times</em> could not pay its reporters, or that Boston College could not teach a course on Islamic law. Planned Parenthood could be raided without a warrant.</p>
<p>This is not to say I support the decision or reasoning in <em>Citizens United</em> &#8212; on the contrary. The decision was ham-fisted in applying First Amendment doctrine; activist in reaching out to decide questions not necessary to the case; and ignorant of the realities of corporate governance. But one does not burn down a house to rid it of termites.</p>
<p>Instead, stakeholder theory offers the best potential remedy to the harms of <em>Citizens United</em> — and indeed, the other risks of corporate power we have witnessed in the last few years. The key flaw of American corporations is that they have become a vehicle for the voices and interests of an exceedingly small managerial and financial elite — the notorious 1 percent. That corporations speak is less a concern than for whom they speak and what they say. The cure for this is more democracy within businesses — more participation in corporate governance by workers, communities, shareholders, and consumers. If corporations were themselves more democratic, their participation in the nation’s political debate would be of little concern. The cure, in other words, is not to fear corporate citizenship but to embrace it.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.symposium-magazine.com/can-corporations-be-good-citizens/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>An Academic Meets Public Life</title>
		<link>http://www.symposium-magazine.com/an-academic-meets-public-life/</link>
		<comments>http://www.symposium-magazine.com/an-academic-meets-public-life/#comments</comments>
		<pubDate>Sun, 03 Nov 2013 07:04:42 +0000</pubDate>
		<dc:creator><![CDATA[admin]]></dc:creator>
				<category><![CDATA[November 2013 Edition]]></category>
		<category><![CDATA[RSS]]></category>

		<guid isPermaLink="false">http://www.symposium-magazine.com/?p=14046</guid>
		<description><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/10/Holt1-150x150.jpeg" class="attachment-thumbnail wp-post-image" alt="Holt" style="margin-bottom: 15px;" /></div>Rep. Rush Holt (D-NJ) has represented the Garden State’s 12th District since 1999. One of only two physicists in Congress, he began his career in academia. After teaching at Swarthmore College and working on arms control at the State Department, he became Assistant Director of the Princeton Plasma Physics Laboratory in 1989. He is perhaps most famous for beating the IBM computer Watson in “Jeopardy” in a demonstration match in 2011. Symposium Magazine interviewed him in October. &#160; How well did academia prepare you for politics? Academia was far more useful than one might think. There are so many topics in public policy that involve some science, and this is what helped me most before coming to the Hill. Often, there are cases where science is “embedded” in an important policy issue and people don’t even know it. Take election reform or voting rights – most people don’t think of these as technical issues, but they definitely have scientific components. If you look at hearings on the Hill on any given day, at least half will have something related to “embedded” science. But chances are, these will have no witnesses from the scientific community. Science is often avoided. This is one of the biggest gaps in understanding policy we have today. Do you have examples beyond science, where your background helped? Absolutely. Even though I taught physics, I also once co-taught a course on arms control with a religion professor, and we both talked about just-war theory and arms. Another course I co-taught – with a professor of math and a professor of psychology – addressed the question of how to make decisions in the face of uncertainty. We never got that one quite right, but I think some students enjoyed it. 
I also often held small seminars with students at my house, where we went into far-reaching policy talks. Beyond science, there are many disciplines that have policy relevance and that get overlooked here on the Hill. Take the recent shutdown, and the subject of history. How much did we talk about past examples of shutdowns? Or more broadly, what does history show us about a minority party trying to wield power beyond its numbers? It’s not as if people were bringing up, say, what the Federalist Papers said on this topic. But it would have been useful to add to the debate. What about academic life more broadly? Can that lifestyle prepare you for politics? I’ll have to say up front: Nothing in academia can really prepare you for politics. I think being a Representative is much harder – intellectually, physically, and psychologically – than being a professor. Intellectually, you have to constantly learn about a whole range of subjects that you may not know anything about, and then make policy decisions. You don’t just learn “more and more about less and less,” as the saying goes. You need to know something about everything. Physically, the demands of campaigning are grueling. And imagine what it’s like trying to stay in touch with 730,000 constituents, rather than several hundred students. Nothing in academia is like that. But the psychological part is perhaps the toughest. You don’t just have competitors in your department or field – you have people trying to undo you. It’s not just a race to get a paper out or come up with a clever new idea. You always have to be on guard. What would you tell fellow lawmakers on how they can have a useful relationship with academic research? And what should academics do to emphasize the broader relevance of their work? Members of Congress would be better off if there was more quantitative understanding of policy. 
There is very little of that, and we have such complicated issues we need to understand. But of course, this goes both ways. I think almost any academic work has some public implications, and academics should understand that. Even in the humanities and classics, you can do that &#8212; although it is, of course, harder. For every policy issue out there, there are academic studies that would illuminate it. The work is there, and lawmakers just need to read it. And I would tell academics who want to make a bigger difference: just do it. Use whatever time, whatever tools you have to get your research out. Or even run for office!]]></description>
				<content:encoded><![CDATA[<div><img width="150" height="150" src="http://www.symposium-magazine.com/symposium_magazine/wp-content/uploads/2013/10/Holt1-150x150.jpeg" class="attachment-thumbnail wp-post-image" alt="Holt" style="margin-bottom: 15px;" /></div><p><a title="link to Holt page" href="http://holt.house.gov/" target="_blank">Rep. Rush Holt</a> (D-NJ) has represented the Garden State’s 12<sup>th</sup> District since 1999. One of only two physicists in Congress, he began his career in academia. After teaching at Swarthmore College and working on arms control at the State Department, he became Assistant Director of the <a title="link to PPPL" href="http://www.pppl.gov/" target="_blank">Princeton Plasma Physics Laboratory</a> in 1989. He is perhaps most famous for beating the <a title="link to Watson" href="http://www-03.ibm.com/innovation/us/watson/" target="_blank">IBM computer Watson</a> in “Jeopardy” in a demonstration match in 2011. Symposium Magazine interviewed him in October.</p>
<p>&nbsp;</p>
<p><i>How well did academia prepare you for politics?</i></p>
<p>Academia was far more useful than one might think. There are so many topics in public policy that involve some science, and this is what helped me most before coming to the Hill. Often, there are cases where science is “embedded” in an important policy issue and people don’t even know it. Take election reform or voting rights – most people don’t think of these as technical issues, but they definitely have scientific components.</p>
<p>If you look at hearings on the Hill on any given day, at least half will have something related to “embedded” science. But chances are, these will have no witnesses from the scientific community. Science is often avoided. This is one of the biggest gaps in understanding policy we have today.</p>
<p><i>Do you have examples beyond science, where your background helped?</i></p>
<p>Absolutely. Even though I taught physics, I also once co-taught a course on arms control with a religion professor, and we both talked about just-war theory and arms. Another course I co-taught – with a professor of math and a professor of psychology – addressed the question of how to make decisions in the face of uncertainty. We never got that one quite right, but I think some students enjoyed it. I also often held small seminars with students at my house, where we went into far-reaching policy talks.</p>
<p>Beyond science, there are many disciplines that have policy relevance and that get overlooked here on the Hill. Take the recent shutdown, and the subject of history. How much did we talk about past examples of shutdowns? Or, more broadly, what examples does history give us of a minority party trying to wield power beyond its numbers? It’s not as if people were bringing up, say, what the Federalist Papers said on this topic. But it would have been useful to add to the debate.</p>
<p><i>What about academic life more broadly? Can that lifestyle prepare you for politics?</i></p>
<p>I’ll have to say up front: Nothing in academia can really prepare you for politics. I think being a Representative is much harder – intellectually, physically, and psychologically – than being a professor. Intellectually, you have to constantly learn about a whole range of subjects that you may not know anything about, and then make policy decisions. You don’t just learn “more and more about less and less,” as the saying goes. You need to know something about everything. Physically, the demands of campaigning are grueling. And imagine what it’s like trying to stay in touch with 730,000 constituents, rather than several hundred students. Nothing in academia is like that.</p>
<p>But the psychological part is perhaps the toughest. You don’t just have competitors in your department or field – you have people trying to undo you. It’s not just a race to get a paper out or come up with a clever new idea. You always have to be on guard.</p>
<p><i>What would you tell fellow lawmakers on how they can have a useful relationship with academic research? And what should academics do to emphasize the broader relevance of their work?</i></p>
<p>Members of Congress would be better off if there were more quantitative understanding of policy. There is very little of that, and we have such complicated issues we need to understand. But of course, this goes both ways. I think almost any academic work has some public implications, and academics should understand that. Even in the humanities and classics, you can do that – although it is, of course, harder. For every policy issue out there, there are academic studies that would illuminate it. The work is there, and lawmakers just need to read it. And I would tell academics who want to make a bigger difference: just do it. Use whatever time, whatever tools you have to get your research out. Or even run for office!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.symposium-magazine.com/an-academic-meets-public-life/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
