The post The deconstruction of paradoxes in epidemiology appeared first on OUPblog.

I think a methodological “revolution” is probably going on in the science of epidemiology, but I’m not totally sure. Of course, in science not being sure is part of our normal state. And we mostly like it. Many times, while reading scientific articles for example, I have had the feeling that a revolution was under way in epidemiology. And I saw what I think are clear signs of it when reading the latest draft of the forthcoming book *Causal Inference* by M.A. Hernán and J.M. Robins from Harvard (Chapman & Hall/CRC, 2015). I think the “revolution” (or should we just call it a “renewal”?) is deeply changing how epidemiological and clinical research is conceived, how causal inferences are made, and how we assess the validity and relevance of epidemiological findings. I suspect it may be having an immense impact on the production of scientific evidence in the health, life, and social sciences. If this were so, then the impact would also be large on most policies, programs, services, and products in which such evidence is used. And it would be affecting thousands of institutions, organizations, and companies, and millions of people.

One example: at present, in clinical and epidemiological research, “paradoxes” are being deconstructed every week. Apparent paradoxes that have long been observed, and whose causal interpretation was at best dubious, are now shown to have little or no causal significance. For example, while obesity is a well-established risk factor for type 2 diabetes (T2D), among people who have already developed T2D the obese fare better than T2D individuals with normal weight. Obese diabetics appear to survive longer and to have a milder clinical course than non-obese diabetics. But it is now being shown that this observation lacks causal significance. (Yes, indeed, an observation may be real and yet lack causal meaning.) The demonstration comes from physicians, epidemiologists, and mathematicians like Robins, Hernán, and colleagues as diverse as S. Greenland, J. Pearl, A. Wilcox, C. Weinberg, S. Hernández-Díaz, N. Pearce, C. Poole, T. Lash, J. Ioannidis, P. Rosenbaum, D. Lawlor, J. Vandenbroucke, G. Davey Smith, T. VanderWeele, or E. Tchetgen, among others. They are building methodological knowledge upon concepts and methods generated by graph theory, computer science, and artificial intelligence. Perhaps the simplest way to explain why observations such as the “obesity paradox” lack causal significance is that “conditioning on a collider” (in our example, restricting the analysis to individuals who developed T2D) creates a spurious association between obesity and survival.
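
The collider mechanism is easy to see in a toy simulation. In the sketch below (purely illustrative; the variable names and all the numbers are invented, not taken from any real study), obesity has no effect whatsoever on survival, yet an “obesity advantage” appears as soon as we restrict attention to diabetics:

```python
import random
random.seed(0)

n = 100_000
surv_obese_t2d, surv_lean_t2d = [], []  # survival among diabetics only

for _ in range(n):
    obese = random.random() < 0.3
    # A hypothetical unmeasured factor that raises T2D risk and
    # shortens survival, independently of obesity.
    harmful = random.random() < 0.2
    # T2D (the collider) is caused by BOTH obesity and the unmeasured factor.
    p_t2d = 0.05 + (0.25 if obese else 0.0) + (0.40 if harmful else 0.0)
    t2d = random.random() < p_t2d
    # Survival depends only on the unmeasured factor; obesity has NO effect.
    survival = 80.0 - (15.0 if harmful else 0.0) + random.gauss(0.0, 5.0)
    if t2d:  # "conditioning on the collider": keep diabetics only
        (surv_obese_t2d if obese else surv_lean_t2d).append(survival)

def mean(xs):
    return sum(xs) / len(xs)

print(f"obese diabetics, mean survival: {mean(surv_obese_t2d):.1f} years")
print(f"lean  diabetics, mean survival: {mean(surv_lean_t2d):.1f} years")
```

Because lean people who nevertheless develop T2D are disproportionately likely to carry the unmeasured harmful factor, restricting to T2D cases induces a spurious positive association between obesity and survival.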

The “revolution” is partly founded on complex mathematics and concepts such as “counterfactuals,” as well as on attractive “causal diagrams” like Directed Acyclic Graphs (DAGs). Causal diagrams are a simple way to encode our subject-matter knowledge, and our assumptions, about the qualitative causal structure of a problem. Causal diagrams also encode information about potential associations between the variables in the causal network. DAGs must be drawn following much stricter rules than the informal, heuristic graphs that we all use intuitively. Amazingly, but not surprisingly, the new approaches provide insights that are beyond most methods in current use. In particular, the new methods go far deeper than, and beyond, the methods of “modern epidemiology,” a methodological, conceptual, and partly ideological current that mainly emerged in the 1980s, led by statisticians and epidemiologists such as O. Miettinen, B. MacMahon, K. Rothman, S. Greenland, S. Lemeshow, D. Hosmer, P. Armitage, J. Fleiss, D. Clayton, M. Susser, D. Rubin, G. Guyatt, D. Altman, J. Kalbfleisch, R. Prentice, N. Breslow, N. Day, D. Kleinbaum, and others.
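
For readers who like to see the “strict rules” concretely: a DAG is a directed graph with no cycles, and a collider is a node where two arrows meet head-to-head. A minimal sketch (the diagram itself is a hypothetical rendering of the obesity example, not one drawn from any textbook):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical causal diagram for the obesity example:
# node -> set of its direct causes (parents).
dag = {
    "T2D":      {"obesity", "severity"},  # two arrows collide here
    "survival": {"severity"},
    "obesity":  set(),
    "severity": set(),
}

# Rule 1: the graph must be acyclic. TopologicalSorter raises CycleError
# if it is not, and otherwise orders causes before their effects.
order = list(TopologicalSorter(dag).static_order())
print("a cause-before-effect ordering:", order)

# A node with two or more parents is a collider; conditioning on it
# can open a spurious path between its parents.
colliders = [node for node, parents in dag.items() if len(parents) >= 2]
print("colliders:", colliders)
```

Here T2D is the collider: restricting an analysis to diabetics opens the biasing path between obesity and the severity factor.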

These are exciting days of paradox deconstruction. It is probably part of a wider cultural phenomenon, if you think of the “deconstruction of the Spanish omelette” created by Ferran Adrià when he was the world-famous chef at the elBulli restaurant. Yes, just kidding.

Right now I cannot find a better or easier way to document the possible “revolution” in epidemiological and clinical research. Worse, I cannot find a firm way to assess whether my impressions are true. No doubt this is partly due to my ignorance in the social sciences. Actually, I don’t know much about social studies of science, epistemic communities, or knowledge construction. Maybe this is why I claimed that a sociology of epidemiology is much needed. A sociology of epidemiology would apply the scientific principles and methods of sociology to the science, discipline, and profession of epidemiology in order to improve understanding of the wider social causes and consequences of epidemiologists’ professional and scientific organization, patterns of practice, ideas, knowledge, and cultures (e.g., institutional arrangements, academic norms, scientific discourses, defense of identity, and epistemic authority). It could also address the patterns of interaction of epidemiologists with other branches of science and professions (e.g., clinical medicine, public health, the other health, life, and social sciences), and with social agents, organizations, and systems (e.g., the economic, political, and legal systems). I believe the tradition of sociology in epidemiology is rich, while the sociology of epidemiology is virtually uncharted (in the sense of neither mapped nor surveyed) and unchartered (i.e., not furnished with a charter or constitution).

Another way to look at what may be happening with clinical and epidemiological research methods is to read the changes that we are witnessing in the definitions of basic concepts such as risk, rate, risk ratio, attributable fraction, bias, selection bias, confounding, residual confounding, interaction, cumulative and density sampling, open population, test hypothesis, null hypothesis, causal null, causal inference, Berkson’s bias, Simpson’s paradox, frequentist statistics, generalizability, representativeness, missing data, standardization, or overadjustment. The possible existence of a “revolution” might also be assessed through recent and new terms such as collider, M-bias, causal diagram, backdoor (biasing path), instrumental variable, negative controls, inverse probability weighting, identifiability, transportability, positivity, ignorability, collapsibility, exchangeable, g-estimation, marginal structural models, risk set, immortal time bias, Mendelian randomization, nonmonotonic, counterfactual outcome, potential outcome, sample space, or false discovery rate.

You may say: “And what about textbooks? Are they changing dramatically? Has one changed the rules?” Well, the new generation of textbooks is just emerging, and very few people have yet read them. Two good examples are the already mentioned text by Hernán and Robins, and the soon-to-be-published book by T. VanderWeele, *Explanation in Causal Inference: Methods for Mediation and Interaction* (Oxford University Press, 2015). Clues can also be found in widely used textbooks by K. Rothman et al. (*Modern Epidemiology*, Lippincott-Raven, 2008), M. Szklo and J. Nieto (*Epidemiology: Beyond the Basics*, Jones & Bartlett, 2014), or L. Gordis (*Epidemiology*, Elsevier, 2009). I also wish to do justice to a true giant, Judea Pearl, whose book absolutely deserves to be mentioned. I thought of him when he sent me a very nice comment about the blog, but he did not ask for his book to be mentioned. Above all, the foundations of the current revolution can be seen in *Causality: Models, Reasoning and Inference* by Judea Pearl (2nd edition, Cambridge University Press, 2009).

Finally, another good way to assess what might be changing is to read what gets published in top journals such as *Epidemiology*, the *International Journal of Epidemiology*, the *American Journal of Epidemiology*, or the *Journal of Clinical Epidemiology*. Pick up any issue of the main epidemiological journals and you will find several examples of what I suspect is going on. If you feel like it, look for the DAGs. I recently saw a tweet saying “A DAG in The Lancet!” It was a surprise: major clinical journals are lagging behind. But they will soon follow and adopt the new methods: their clinical relevance is huge. Or is it not such a big deal? If no “revolution” is going on, how are we to know?

*Feature image credit: Test tubes, by PublicDomainPictures. Public Domain via Pixabay.*


The post Recurring decimals, proof, and ice floes appeared first on OUPblog.

Partly, of course, so they develop thinking skills to use on questions whose truth-status they won’t know in advance. Another part, however, concerns the dialogue nature of proof: a proof must be not only correct but also persuasive; and persuasiveness is not objective and absolute, it’s a two-body problem. Not only to tango does one need two.

The statements — (1) ice floats on water, (2) ice is less dense than water — are widely acknowledged as facts and, usually, as interchangeable facts. But although rooted in everyday experience, they are not that experience. We have first represented the stuff of experience by sounds that English speakers use to stand for it, then represented these sounds by word-processor symbols that, by common agreement, stand for them. Two steps away from reality already! This is what humans do: we invent symbols for perceived realities and, eventually, evolve procedures for manipulating them in ways that mirror how their real-world origins behave. Virtually no communication between two persons, and possibly not much internal dialogue within one mind, can proceed without this. Man is a symbol-using animal.

Statement (1) counts as fact because folk living in cooler climates have directly observed it throughout history (and conflicting evidence is lacking). Statement (2) is factual in a significantly different sense, arising by further abstraction from (1) and from a million similar experiential observations. Partly to explain (1) and its many cousins, we have conceived ideas like mass, volume, ratio of mass to volume, and explored for generations towards the conclusion that mass-to-volume works out the same for similar materials under similar conditions, and that the comparison of mass-to-volume ratios predicts which materials will float upon others.
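
The comparison of mass-to-volume ratios really is a rule operating purely on symbols, far from any pond. A tiny illustrative sketch (the density figures are approximate textbook values, used only for demonstration):

```python
# Approximate textbook densities in kg per cubic metre (illustrative values).
density = {"ice": 917.0, "water": 1000.0, "steel": 7850.0}

def floats_on(material, liquid):
    """Statement (2) as a symbolic rule: a material floats on a liquid
    when its mass-to-volume ratio is the smaller one."""
    return density[material] < density[liquid]

print("ice floats on water:", floats_on("ice", "water"))      # True
print("steel floats on water:", floats_on("steel", "water"))  # False
```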

Statement (3): 19 is a prime number. In what sense is this a fact? Its roots are deep in direct experience: the hunter-gatherer wishing to share nineteen apples equally with his two brothers or his three sons or his five children must have discovered that he couldn’t without extending his circle of acquaintance so far that each got only one, long before he had a name for what we call ‘nineteen’. But (3) is many steps away from the experience where it is grounded. It involves conceptualisation of numerical measurements of sets one encounters, and millennia of thought to acquire symbols for these and codify procedures for manipulating them in ways that mirror how reality functions. We’ve done this so successfully that it’s easy to forget how far from the tangibles of experience they stand.

Statement (4): √2 is not exactly the ratio of two whole numbers. Most first-year mathematics students know this. But by this stage of abstraction, separating its fact-ness from its demonstration is impossible: the property of being exactly a fraction is not detectable by physical experience. It is a property of how we abstracted and systematised the numbers that proved useful in modelling reality, not of our hands-on experience of reality. The reason we regard √2’s irrationality as factual is precisely because we can give a demonstration within an accepted logical framework.
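
The demonstration in question is short enough to reproduce here; it is the classical argument by contradiction:

```latex
\textbf{Claim.} $\sqrt{2}$ is not exactly the ratio of two whole numbers.

\textbf{Proof.} Suppose $\sqrt{2} = p/q$ with whole numbers $p, q$ sharing no
common factor. Squaring gives $p^2 = 2q^2$, so $p^2$ is even; hence $p$ is
even, say $p = 2r$. Substituting, $4r^2 = 2q^2$, i.e. $q^2 = 2r^2$, so $q$ is
even as well. Thus $p$ and $q$ share the factor $2$, contradicting our
assumption; no such $p/q$ exists. $\blacksquare$
```

Note that every step is a move within the symbolic system, not an appeal to physical experience.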

What then about recurring decimals? For persuasive argument, first ascertain the distance from reality at which the question arises: not, in this case, the rarefied atmosphere of undergraduate mathematics but the primary school classroom. Once a child has learned rituals for dividing whole numbers and the convenience of decimal notation, she will try to divide, say, 2 by 3 and will hit a problem. The decimal representation of the answer does not cease to spew out digits of lesser and lesser significance no matter how long she keeps turning the handle. What should we reply when she asks whether zero point infinitely many 6s is or is not two thirds, or even — as a thoughtful child should — whether zero point infinitely many 6s is a legitimate symbol at all?

The answer must be tailored to the questioner’s needs, but the natural way forward — though it took us centuries to make it logically watertight! — is the nineteenth-century definition of the sum of an infinite series. For the primary school kid it may suffice to say that, by writing down enough 6s, we’d get as close to 2/3 as we’d need for any practical purpose. For differential calculus we’d need something better, and for model-theoretic discourse involving infinitesimals something better again. Yet the underpinning mathematics for equalities like 0.6666… = 2/3 where the question arises is the nineteenth-century one. Its fact-ness therefore resembles that of ice being less dense than water, of 19 being prime or of √2 being irrational. It can be demonstrated within a logical framework that systematises our observations of real-world experiences. So it is a fact not about reality but about the models we build to explain reality. Demonstration is the only tool available for establishing its truth.
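
The nineteenth-century definition can be made tangible with exact arithmetic: each extra 6 shrinks the gap to 2/3 by a factor of ten, so the partial sums get as close as we please. A small sketch using exact fractions:

```python
from fractions import Fraction

two_thirds = Fraction(2, 3)
x = Fraction(0)
for n in range(1, 21):
    x += Fraction(6, 10**n)   # 0.6, 0.66, 0.666, ... as exact fractions
    if n <= 4 or n == 20:
        gap = two_thirds - x  # always exactly 2 / (3 * 10**n)
        print(f"{n:2d} digits: gap to 2/3 = {gap} ≈ {float(gap):.1e}")
```

The gap after n digits is exactly 2/(3·10ⁿ): never zero for any finite n, but smaller than any tolerance you care to name, which is precisely what the definition of the sum of the series captures.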

Mathematics without proof is not like an omelette without salt and pepper; it is like an omelette without egg.

*Headline image credit: Floating ice sheets in Antarctica. CC0 via Pixabay. *


The post Why study paradoxes? appeared first on OUPblog.

In 2002 I was attending a conference on self-reference in Copenhagen, Denmark. During one of the breaks I got a chance to chat with Raymond Smullyan, who is, amongst other things, an accomplished magician, a distinguished mathematical logician, and perhaps the best-known popularizer of ‘Knight and Knave’ (K&K) puzzles.

K&K puzzles involve an imaginary island populated by two tribes: the Knights and the Knaves. Knights always tell the truth, and Knaves always lie (further, members of both tribes are forbidden to engage in activities that might lead to paradoxes or situations that break these rules). Other than their linguistic behavior, there is nothing that distinguishes Knights from Knaves.

Typically, K&K puzzles involve trying to answer questions based on assertions made by, or questions answered by, an inhabitant of the island. For example, a classic K&K puzzle involves meeting an islander at a fork in the road, where one path leads to riches and success and the other leads to pain and ruin. You are allowed to ask the islander one question, after which you must pick a path. Not knowing to which tribe the islander belongs, and hence whether she will lie or tell the truth, what question should you ask?

(Answer: You should ask “Which path would someone from the other tribe say was the one leading to riches and success?”, and then take the path *not* indicated by the islander).
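
The classic answer can be checked mechanically by enumerating all four cases (which tribe the islander belongs to, and which path is the good one). A small sketch, with the two paths labelled ‘left’ and ‘right’ purely for concreteness:

```python
def reply(tribe, good_path):
    """The islander's reply to: 'Which path would someone from the
    OTHER tribe say leads to riches and success?'"""
    paths = {"left", "right"}
    bad_path = (paths - {good_path}).pop()
    other = "knave" if tribe == "knight" else "knight"
    # What the other-tribe member would actually answer if asked directly:
    others_claim = good_path if other == "knight" else bad_path
    # A Knight reports that claim truthfully; a Knave lies about it.
    return others_claim if tribe == "knight" else (paths - {others_claim}).pop()

# In all four situations the islander points at the BAD path,
# so taking the other path always works.
for tribe in ("knight", "knave"):
    for good in ("left", "right"):
        assert reply(tribe, good) != good
        print(f"{tribe}, good path {good}: islander points {reply(tribe, good)}")
```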

Back to Copenhagen in 2002: Seizing my chance, I challenged Smullyan with the following K&K puzzle, of my own devising:

There is a nightclub on the island of Knights and Knaves, known as the Prime Club. The Prime Club has one strict rule: the number of occupants in the club must be a prime number at all times.

The Prime Club also has strict bouncers (who stand outside the doors and do not count as occupants) enforcing this rule. In addition, a strange tradition has become customary at the Prime Club: Every so often the occupants form a conga line, and sing a song. The first lyric of the song is:

“At least one of us in the club is a Knave.”

and is sung by the first person in the line. The second lyric of the song is:

“At least two of us in the club are Knaves.”

and is sung by the second person in the line. The third person (if there is one) sings:

“At least three of us in the club are Knaves.”

And so on down the line, until everyone has sung a verse.

One day you walk by the club, and hear the song being sung. How many people are in the club?

Smullyan’s immediate response to this puzzle was something like “That can’t be solved – there isn’t enough information”. But he then stood alone in the corner of the reception area for about five minutes, thinking, before returning to confidently (and correctly, of course) answer “Two!”

I won’t spoil things by giving away the solution – I’ll leave that mystery for interested readers to solve on their own. (Hint: if the song is sung with any other prime number of islanders in the club, a paradox results!) I will note that the song is equivalent to a more formal construction involving a list of sentences of the form:

At least one of sentences S_{1} – S_{n} is false.

At least two of sentences S_{1} – S_{n} are false.

⋮

At least n of sentences S_{1} – S_{n} are false.
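
Readers who would rather let a few lines of code do the bookkeeping can experiment with the construction directly. The sketch below supposes that exactly f of the n sentences are false and checks when that supposition is consistent; combining its output with the club’s prime-number rule pins down the answer (run it only if you don’t mind the spoiler):

```python
def consistent_counts(n):
    """For the list S_1..S_n with S_k = 'at least k of S_1..S_n are false',
    return every number of false sentences that yields no contradiction."""
    solutions = []
    for f in range(n + 1):        # suppose exactly f sentences are false
        # S_k is then true precisely when f >= k, so the false sentences
        # are S_(f+1)..S_n; consistency demands their count equal f.
        if n - f == f:
            solutions.append(f)
    return solutions

for n in range(1, 8):
    sols = consistent_counts(n)
    verdict = "paradox!" if not sols else f"consistent, {sols[0]} false"
    print(f"{n} singer(s): {verdict}")
```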

The point of this story isn’t to brag about having stumped a famous logician (even for a mere five minutes), although I admit that this episode (not only stumping Smullyan, but meeting him in the first place) is still one of the highlights of my academic career.

Instead, the story, and the puzzle at the center of it, illustrates the reasons why I find paradoxes so fascinating and worthy of serious intellectual effort. The standard story regarding why paradoxes are so important is that, although they are sometimes silly in-and-of-themselves, paradoxes indicate that there is something deeply flawed in our understanding of some basic philosophical notion (truth, in the case of the semantic paradoxes linked to K&K puzzles).

Another reason for their popularity is that they are a lot of fun. Both of these are really good reasons for thinking deeply about paradoxes. But neither is the real reason why I find them so fascinating. The real reason I find paradoxes so captivating is that they are much more mathematically complicated, and as a result much more mathematically interesting, than standard accounts (which typically equate paradoxes with the presence of some sort of circularity) might have you believe.

The Prime Club puzzle demonstrates that whether a particular collection of sentences is or is not paradoxical can depend on all sorts of surprising mathematical properties, such as whether there is an even or odd number of sentences in the collection, or whether the number of sentences in the collection is prime or composite, or all sorts of even weirder and more surprising conditions.

Other examples demonstrate that whether a construction (or, equivalently, a K&K story) is paradoxical can depend on whether the referential relation involved in the construction (i.e. the relation that holds between two sentences if one refers to the other) is symmetric, or is transitive.

The paradoxicality of still another type of construction, involving infinitely many sentences, depends on whether cofinitely many of the sentences each refer to cofinitely many of the other sentences in the construction (a set is cofinite if its complement is finite). And this only scratches the surface!

The more I think about and work on paradoxes, the more I marvel at how complicated the mathematical conditions for generating paradoxes are: it takes a lot more than the mere presence of circularity to generate a mathematical or semantic paradox, and stating exactly what is minimally required is still too difficult a question to answer precisely. And that’s why I work on paradoxes: their surprising mathematical complexity and mathematical beauty. Fortunately for me, there is still a lot of work that remains to be done, and a lot of complexity and beauty remaining to be discovered.


The post Special events and the dynamical statistics of Twitter appeared first on OUPblog.

One main idea to derive warning signs is to monitor the fluctuations of the dynamical process by calculating the variance of a suitable monitoring variable. When the tipping point is approached via a slowly-drifting parameter, the stabilizing effects of the system slowly diminish and the noisy fluctuations increase via certain well-defined scaling laws.

Based upon these observations, it is natural to ask whether these scaling laws are also present in human social networks and could allow us to make predictions about future events. This is an exciting open problem, to which at present only highly speculative answers can be given. It is indeed extremely difficult to predict *a priori* unknown events in a social system. Therefore, as an initial step, we try to reduce the problem to a much simpler one, to understand whether the same mechanisms that have been observed in the natural sciences and engineering could also be present in sociological domains.

In our work, we provide a very first step towards tackling a substantially simpler question by focusing on *a priori* known events. We analyse a social media data set with a focus on classical variance and autocorrelation scaling-law warning signs. In particular, we consider a few events that are known to occur at a specific time of the year, e.g., Christmas, Halloween, and Thanksgiving. Then we consider time series of the frequency of Twitter hashtags related to these events a few weeks before the actual event, but excluding the event date itself and some time period before it.

Now suppose we do not know that a dramatic spike in the number of Twitter hashtags, such as #xmas or #thanksgiving, will occur on the actual event date. Are there signs of the same stochastic scaling laws observed in other dynamical systems visible some time before the event? The more fundamental question is: Are there similarities to known warning signs from other areas also present in social media data?

We answer this question affirmatively, as we find that the *a priori* known events mentioned above are preceded by variance and autocorrelation growth (see Figure). Nevertheless, we are still very far from actually using social networks to predict the occurrence of many other drastic events. For example, it can also be shown that many spikes in Twitter activity are not predictable through variance and autocorrelation growth. Hence, a lot more research is needed to distinguish the different dynamical processes that lead to large outbursts of activity on social media.
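
The two warning signs themselves are easy to compute on any time series. The sketch below is purely illustrative (a synthetic noisy process whose stability slowly erodes stands in for the hashtag-frequency data, which is not reproduced here): it measures variance and lag-1 autocorrelation in an early and a late window and shows both growing as the “tipping point” nears.

```python
import random
random.seed(1)

# Synthetic activity series: an autoregressive process whose restoring
# force slowly weakens, mimicking the approach to a tipping point.
T = 2000
x, series = 0.0, []
for t in range(T):
    a = 0.2 + 0.79 * t / T        # memory parameter drifts towards 1
    x = a * x + random.gauss(0.0, 1.0)
    series.append(x)

def window_stats(xs):
    """Variance and lag-1 autocorrelation of one window."""
    m = sum(xs) / len(xs)
    var = sum((v - m) ** 2 for v in xs) / len(xs)
    cov = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1)) / len(xs)
    return var, cov / var

early = window_stats(series[:400])
late = window_stats(series[-400:])
print(f"early window: variance {early[0]:.2f}, autocorrelation {early[1]:.2f}")
print(f"late  window: variance {late[0]:.2f}, autocorrelation {late[1]:.2f}")
```

In real data the monitoring variable would be the hashtag frequency, with windows slid forward in time rather than compared in one pair.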

The findings suggest that further investigations of dynamical processes in social media would be worthwhile. Currently, a main focus in the research on social networks lies on structural questions, such as: Who connects to whom? How many connections do we have on average? Who are the hubs in social media? However, if one takes dynamical processes on the network, as well as the changing dynamics of the network topology, into account, one may obtain a much clearer picture of how social systems compare and relate to classical problems in physics, chemistry, biology, and engineering.


The post A Fields Medal reading list appeared first on OUPblog.

This year sees the first ever female recipient of the Fields Medal, Maryam Mirzakhani, recognised for her highly original contributions to geometry and dynamical systems. Her work bridges several mathematical disciplines – hyperbolic geometry, complex analysis, topology, and dynamics – and influences them in return.

We’re absolutely delighted for Professor Mirzakhani, who serves on the editorial board for *International Mathematics Research Notices*. To celebrate the achievements of all of the winners, we’ve put together a reading list of free materials relating to their work and to fellow speakers at the International Congress of Mathematicians.

**“Ergodic Theory of the Earthquake Flow” by Maryam Mirzakhani, published in International Mathematics Research Notices**

Noted by the International Mathematical Union as work contributing to Mirzakhani’s achievement, this paper investigates the dynamics of the earthquake flow defined by Thurston on the bundle *PMg* of geodesic measured laminations.

**“Ergodic Theory of the Space of Measured Laminations” by Elon Lindenstrauss and Maryam Mirzakhani, published in International Mathematics Research Notices**

A classification of locally finite invariant measures and orbit closure for the action of the mapping class group on the space of measured laminations on a surface.

**“Mass Formulae for Extensions of Local Fields, and Conjectures on the Density of Number Field Discriminants” by Manjul Bhargava, published in International Mathematics Research Notices**

Manjul Bhargava joins Maryam Mirzakhani amongst this year’s winners of the Fields Medal. Here he uses Serre’s mass formula for totally ramified extensions to derive a mass formula that counts all étale algebra extensions of a local field *F* having a given degree *n*.

**“Model theory of operator algebras” by Ilijas Farah, Bradd Hart, and David Sherman, published in International Mathematics Research Notices**

Several authors, some of whom are speaking at the International Congress of Mathematicians, have considered whether the ultrapower and the relative commutant of a C*-algebra or II₁ factor depend on the choice of the ultrafilter.

**“Small gaps between products of two primes” by D. A. Goldston, S. W. Graham, J. Pintz, and C. Y. Yildirim, published in Proceedings of the London Mathematical Society**

Speaking on the subject at the International Congress, Dan Goldston and colleagues prove several results relating to the representation of numbers with exactly two prime factors by linear forms.

**“On Waring’s problem: some consequences of Golubeva’s method” by Trevor D. Wooley, published in the Journal of the London Mathematical Society**

Wooley’s paper, as well as his talk at the congress, investigates sums of mixed powers involving two squares, two cubes, and various higher powers concentrating on situations inaccessible to the Hardy-Littlewood method.

*Image credit: (1) Inner life of human mind and maths, © agsandrew, via iStock Photo. (2) Maryam Mirzakhani 2014. Photo by International Mathematical Union. Public Domain via Wikimedia Commons.*



The post Rebooting Philosophy appeared first on OUPblog.


When we use a computer, its performance seems to degrade progressively. This is not a mere impression. An old version of Firefox, the free Web browser, was infamous for its “memory leaks”: it would consume increasing amounts of memory to the detriment of other programs. Bugs in the software actually do slow down the system. We all know what the solution is: reboot. We restart the computer, the memory is reset, and the performance is restored, until the bugs slow it down again.

Philosophy is a bit like a computer with a memory leak. It starts well, dealing with significant and serious issues that matter to anyone. Yet, in time, its very success slows it down. Philosophy begins to care more about philosophers’ questions than philosophical ones, consuming increasing amounts of intellectual attention. Scholasticism is the ultimate freezing of the system, the equivalent of Windows’ “blue screen of death”; so many resources are devoted to internal issues that no external input can be processed anymore, and the system stops. The world may be undergoing a revolution, but the philosophical discourse remains detached and utterly oblivious. Time to reboot the system.

Philosophical “rebooting” moments are rare. They are usually prompted by major transformations in the surrounding reality. Since the nineties, I have been arguing that we are witnessing one of those moments. It now seems obvious, even to the most conservative person, that we are experiencing a turning point in our history. The information revolution is profoundly changing every aspect of our lives, quickly and relentlessly. The list is known but worth recalling: education and entertainment, communication and commerce, love and hate, politics and conflicts, culture and health, … feel free to add your preferred topics; they are all transformed by technologies that have the recording and processing of information as their core functions. Meanwhile, philosophy is degrading into self-referential discussions on irrelevancies.

The result of a philosophical rebooting today can only be beneficial. Digital technologies are not just tools merely modifying how we deal with the world, like the wheel or the engine. They are above all formatting systems, which increasingly affect how we understand the world, how we relate to it, how we see ourselves, and how we interact with each other.

The ‘Fourth Revolution’ betrays what I believe to be one of the topics that deserves our full intellectual attention today. The idea is quite simple. Three scientific revolutions have had great impact on how we see ourselves. In changing our understanding of the external world they also modified our self-understanding. After the Copernican revolution, the heliocentric cosmology displaced the Earth and hence humanity from the centre of the universe. The Darwinian revolution showed that all species of life have evolved over time from common ancestors through natural selection, thus displacing humanity from the centre of the biological kingdom. And following Freud, we acknowledge nowadays that the mind is also unconscious. So we are not immobile, at the centre of the universe, we are not unnaturally separate and diverse from the rest of the animal kingdom, and we are very far from being minds entirely transparent to ourselves. One may easily question the value of this classic picture. After all, Freud was the first to interpret these three revolutions as part of a single process of reassessment of human nature and his perspective was blatantly self-serving. But replace Freud with cognitive science or neuroscience, and we can still find the framework useful to explain our strong impression that something very significant and profound has recently happened to our self-understanding.

Since the fifties, computer science and digital technologies have been changing our conception of who we are. In many respects, we are discovering that we are not standalone entities, but rather interconnected informational agents, sharing with other biological agents and engineered artefacts a global environment ultimately made of information, the infosphere. If we need a champion for the fourth revolution this should definitely be Alan Turing.

The fourth revolution offers a historical opportunity to rethink our exceptionalism in at least two ways. Our intelligent behaviour is confronted by the smart behaviour of engineered artefacts, which can be adaptively more successful in the infosphere. Our free behaviour is confronted by the predictability and manipulability of our choices, and by the development of artificial autonomy. Digital technologies sometimes seem to know more about our wishes than we do. We need philosophy to make sense of the radical changes brought about by the information revolution. And we need it to be at its best, for the difficulties we are facing are challenging. Clearly, we need to reboot philosophy now.

Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford, Senior Research Fellow at the Oxford Internet Institute, and Fellow of St Cross College, Oxford. He was recently appointed as ethics advisor to Google. His most recent book is

The Fourth Revolution: How the Infosphere is Reshaping Human Reality.

Subscribe to the OUPblog via email or RSS.

Subscribe to only philosophy articles on the OUPblog via email or RSS.

*Image credit: Alan Turing Statue at Bletchley Park. By Ian Petticrew. CC-BY-SA-2.0 via Wikimedia Commons.*

The post Rebooting Philosophy appeared first on OUPblog.

]]>Suppose you are watching a tennis match between Novak Djokovic and Rafael Nadal. The commentator says: “Djokovic serves first in the set, so he has an advantage.” Why would this be the case? Perhaps because he is then ‘always’ one game ahead, thus serving under less pressure. But does it actually influence him and, if so, how?

The post Does the “serving-first advantage” actually exist? appeared first on OUPblog.

]]>

Suppose you are watching a tennis match between Novak Djokovic and Rafael Nadal. The commentator says: “Djokovic serves first in the set, so he has an advantage.” Why would this be the case? Perhaps because he is then ‘always’ one game ahead, thus serving under less pressure. But does it actually influence him and, if so, how?

Now we come to the seventh game, which some consider to be the most important game in the set. But is it? Nadal serves an ace at break point down (30-40). Of course! Real champions win the big points, but they win most points on service anyway. At first, it may appear that real champions outperform on big points, but in fact it is the weaker players who underperform on them, which merely makes it seem that the champions outperform. And Nadal goes on to win three consecutive games. He is in a winning mood; the momentum is on his side. But does a ‘winning mood’ actually exist in tennis? (*Spoiler*: It does, but it is smaller than many expect.)

To figure out whether the “serving-first advantage” actually exists, we can use data on more than a thousand sets played at Wimbledon to calculate how often the player who served first also won the set. This statistic shows that for the men there is a slight advantage in the first set, but not in the other sets.

On the contrary, in the other sets, there is actually a disadvantage: the player who serves first in the set is more likely to lose the set than to win it. This is surprising. Perhaps it is different for the women? But no, the same pattern occurs in the women’s singles.

It so happens that the player who serves first in a set (if it is not the first set) is usually the weaker player. This is so, because (a) the stronger player is more likely to win the previous set, and (b) the previous set is more likely won by serving the set out rather than by breaking serve. Therefore, the stronger player typically wins the previous set on service, so that the weaker player serves first in the next set. The weaker player is more likely to lose the current set as well, not because of a service (dis)advantage, but because he or she is the weaker player.

This example shows that we must be careful when we try to draw conclusions based on simple statistics. The fact that the player who serves first in the second and subsequent sets often loses the set is true, but this primarily concerns weaker players, while the original hypothesis includes all players. Therefore, we must control for quality differences, and statistical models enable us to do that properly. It then becomes clear that there is no advantage or disadvantage for the player who serves first in the second or subsequent sets; but it does matter in the first set, so it is wise to elect to serve after winning the toss.
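The selection effect described above can be illustrated with a toy simulation (my own sketch, not the authors’ model; the two probabilities are made-up assumptions). We build a world in which serving first has, by construction, no effect at all on winning the second set, and yet the raw statistic still shows the first server losing more often than winning, purely because the weaker player usually serves first:

```python
import random

random.seed(0)

n = 100_000
p_str = 0.65   # assumed chance the stronger player wins any given set
p_hold = 0.8   # assumed chance a set ends with the winner serving it out

first_server_wins = 0
for _ in range(n):
    # Set 1: the stronger player usually wins, usually on serve,
    # so the loser (usually the weaker player) serves first in set 2.
    strong_won_set1 = random.random() < p_str
    winner_served_out = random.random() < p_hold
    strong_serves_first_set2 = (strong_won_set1 and not winner_served_out) or \
                               (not strong_won_set1 and winner_served_out)
    # Set 2: by construction, serving first has NO causal effect here.
    strong_won_set2 = random.random() < p_str
    if strong_serves_first_set2 == strong_won_set2:
        first_server_wins += 1

rate = first_server_wins / n
print(f"first server wins set 2 in {rate:.1%} of matches")  # below 50%
```

In this toy world the true causal effect is zero, yet the first server appears to be at a disadvantage; controlling for player strength, as the statistical models mentioned above do, would restore the 50% figure.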

Franc Klaassen is Professor of International Economics at the University of Amsterdam. Jan R. Magnus is Emeritus Professor at Tilburg University and Visiting Professor of Econometrics at the Vrije Universiteit Amsterdam. They are the co-authors of *Analyzing Wimbledon: The Power of Statistics*.

Subscribe to the OUPblog via email or RSS.

Subscribe to only business and economics articles on the OUPblog via email or RSS.

*Image Credit: “Wimbledon Centre Court Panoramic: Rafael Nadal vs Del Potro” (2011) by Rian (Ree) Saunders. CC BY 2.0 via Flickr*


]]>Nowadays it appears impossible to open a newspaper or switch on the television without hearing about “big data”. Big data, it sometimes seems, will provide answers to all the world’s problems. Management consulting company McKinsey, for example, promises “a tremendous wave of innovation, productivity, and growth … all driven by big data”.

The post Statistics and big data appeared first on OUPblog.

]]>


An alien observer visiting the Earth might think it represents a major scientific breakthrough. Google Trends shows references to the phrase bobbing along at about one per week until 2011, at which point there began a dramatic, steep, and almost linear increase in references to the phrase. It’s as if no one had thought of it until 2011. Which is odd because data mining, the technology of extracting valuable, useful, or interesting information from large data sets, has been around for some 20 years. And statistics, which lies at the heart of all of this, has been around as a formal discipline for a century or more.

Or perhaps it’s not so odd. If you look back to the beginning of data mining, you find a very similar media enthusiasm for the advances it was going to bring, the breakthroughs in understanding, the sudden discoveries, the deep insights. In fact it almost looks as if we have been here before. All of this leads one to suspect that there’s less to the big data enthusiasm than meets the eye. That it’s not so much a sudden change in our technical abilities as a sudden media recognition of what data scientists, and especially statisticians, are capable of.

Of course, I’m not saying that the increasing size of data sets does not lead to promising new opportunities – though I would question whether it’s the “large” that really matters as much as the novelty of the data sets. The tremendous economic impact of GPS data (estimated to be $150-270bn per year), retail transaction data, or genomic and bioinformatics data arises not from the size of these data sets, but from the fact that they provide new kinds of information. And while it’s true that a massive mountain of data needed to be explored to detect the Higgs boson, the core aspect was the nature of the data rather than its amount.

Moreover, if I’m honest, I also have to admit that it’s not solely statistics which leads to the extraction of value from these massive data sets. Often it’s a combination of statistical inferential methods (e.g. determining an accurate geographical location from satellite signals) along with data manipulation algorithms for search, matching, sorting, and so on. How these two aspects are balanced depends on the particular application. Locating a shop which stocks that out-of-print book is less of an inferential statistical problem and more of a search issue. Determining the riskiness of a company seeking a loan owes little to search but much to statistics.

Some time after the phrase “data mining” hit the media, it suffered a backlash. Predictably enough, much of this was based around privacy concerns. A paradigmatic illustration was the *Total Information Awareness* project in the United States. Its basic aim was to search for suspicious behaviour patterns within vast amounts of personal data, to identify individuals likely to commit crimes, especially terrorist offences. It included data on web browsing, credit card transactions, driving licences, court records, passport details, and so on. After concerns were raised, it was suspended in 2003 (though it is claimed that the software continued to be used by various agencies). As will be evident from recent events, concerns about the security agencies’ monitoring of the public continue.

The key question is whether proponents of the huge potential of big data and its allied notion of open data are learning from the past. Recent media concern in the UK about the merging of family doctor records with hospital records, which led to a six-month delay in the project’s launch, illustrates the danger. Properly informed debate about the promise and the risks is vital.

Technology is amoral — neither intrinsically moral nor immoral. Morality lies in the hands of those who wield it. This is as true of big data technology as it is of nuclear technology and biotechnology. It is abundantly clear — if only from the examples we have already seen — that massive data sets do hold substantial promise for enhancing the well-being of mankind, but we must be aware of the risks. A suitable balance must be struck.

It’s also important to note that the mere existence of huge data files is of itself of no benefit to anyone. For these data sets to be beneficial, it’s necessary to be able to use the data to build models, to estimate effect sizes, to determine if an observed effect should be regarded as mere chance variation, to be sure it’s not a data quality issue, and so on. That is, statistical skills are critical to making use of the big data resources. In just the same way that vast underground oil reserves were useless without the technology to turn them into motive power, so the vast collections of data are useless without the technology to analyse them. Or, as I sometimes put it, *people don’t want data, what they want are answers*. And statistics provides the tools for finding those answers.

David J. Hand is Professor of Statistics at Imperial College London and author of Statistics: A Very Short Introduction

The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with OUPblog and the VSI series every Friday and like Very Short Introductions on Facebook. Subscribe to Very Short Introductions articles on the OUPblog via email or RSS.

Subscribe to the OUPblog via email or RSS.

Subscribe to only mathematics articles on the OUPblog via email or RSS

*Image credit: Diagram of Total Information Awareness system designed by the Information Awareness Office. Public domain via Wikimedia Commons*


]]>Politically, socially, and culturally, the 1960s were tumultuous times. But tucked away amidst the folds of the Cold War, civil rights activism, anti-war demonstrations, the feminist movement, revolts of students and workers, flower power, sit-ins, Marxist and Maoist revolutions — almost unnoticed — a new science was born in university campuses across North America, Britain, Europe, and even, albeit tentatively, certain non-Western parts of the world.

The post The genesis of computer science appeared first on OUPblog.

]]>

This new science acquired a name of its own: *computer science* (or some variation thereof: ‘computing science’, ‘informatique’, ‘informatik’).

At the heart of this new science was the process by which symbols, representing information, could be automatically (or with minimal human intervention) transformed into other symbols (representing other kinds or new information). This process was called, variously, *automatic computation*, *information processing*, or *symbol processing*. The agent of this process was the artifact named, generically, *computer*.

The computer is an *automaton*. In the past, this word, ‘automaton’ (coined in the 17th century), was used to mean an artifact which, largely driven by its own source of motive power, performs certain repetitive patterns of movement and action without any external influence. Often, these actions imitated those of humans and animals. Ingenious mechanical automata had been invented since antiquity, largely for the amusement of the wealthy, though some were of a more utilitarian nature (such as the automata and water-driven devices described in the 1st century CE by the engineer/inventor Hero of Alexandria).

So mechanical automata that carry out physical actions of one sort or another form a venerable tradition. But the automatic electronic digital computer marked the birth of a whole new genus of automata, for this artifact was designed or intended to imitate human thinking; and, indeed, to extend or even replace humans in some of their highest cognitive capacities. Such was the power and scope of this artifact that it became the fount of a socio-technological revolution now commonly referred to as the Information Revolution, and a brand new science, computer science.

But computer science is not a *natural* science. It is not of the same kind as, say, physics, chemistry, biology, or astronomy. The gazes of these sciences are directed toward the natural world, inorganic and organic. The domain of computer science is the artificial world, the world of made objects, of artifacts — in particular, *computational artifacts*. Computer science is a *science of the artificial*, to use a term coined by Nobel laureate polymath scientist Herbert Simon.

A fundamental difference between a natural science like physics and an artificial science such as computer science relates to the age-old philosophical distinction between *is* and *ought*. The natural scientist is concerned with the world *as it is*; she is not in the business of deliberately changing the natural world. Thus, the astronomer peering at the cosmos does not desire to change it but to understand it; the paleontologist examining rock layers in search of fossils is doing this to learn more about the history of life on earth, not to change the earth (or life) itself. For the natural scientist, understanding the natural world is an end in itself.

The scientist of the artificial also wishes to understand, not nature but artifacts. However, that desire is a means to an end, for the scientist of the artificial ultimately wishes to *alter* the world in some respect. Thus the computer scientist wants to alter some aspect of the world by creating computational artifacts as improvements on existing ones, or by creating new computational artifacts that have never existed before. If the natural scientist is concerned with the world *as it is*, the computer scientist obsesses over the world as she thinks *it ought to be*. For computer scientists, like other scientists of the artificial (such as engineering scientists), their domain comprises artifacts that are intended to serve some purpose. An astronomer does not ask what a particular galaxy or planet is *for*; it just *is*. A computer scientist, striving to understand a particular computational artifact, begins with the purpose for which it was created. Artifacts are imbued with purpose, reflecting the purposes or goals imagined for them by their human creators.

So how was this science of the artificial called computer science born? Where, when, and how did it begin? Who were its creators? What kinds of purposes drove the birth of this science? What were its seminal ideas? What makes it distinct from other, more venerable, sciences of the artificial? Was the genesis of computer science evolutionary or revolutionary? A ‘big bang’ or a ‘steady state’ birth? These are the kinds of questions that interest historians of science peering into the origins of what is one of the youngest artificial sciences of the 20th century.

Subrata Dasgupta is the Computer Science Trust Fund Eminent Scholar Chair in the School of Computing & Informatics at the University of Louisiana at Lafayette, where he is also a professor in the Department of History. Dasgupta has written fourteen books, most recently

It Began with Babbage: The Genesis of Computer Science.

Subscribe to the OUPblog via email or RSS.

Subscribe to only technology articles on the OUPblog via email or RSS.

*Image Credit: A reflection of a man typing on a laptop computer. Photo by Matthew Roth. CC-BY-SA-3.0 via Wikimedia Commons.*


]]>Fractal shapes, as visualizations of mathematical equations, are astounding to look at. But fractals look even more amazing in their natural element—and that happens to be in more places than you might think.

The post Fractal shapes and the natural world appeared first on OUPblog.

]]>


Kenneth Falconer is a mathematician who specializes in Fractal Geometry and related topics. He is Professor of Pure Mathematics at the University of St. Andrews and a member of the Analysis Research Group of the School of Mathematics and Statistics. Kenneth’s main research interests are in fractal and multifractal geometry, geometric measure theory and related areas. He has published over 100 papers in mathematical journals. He is author of

Fractals: A Very Short Introduction.

Subscribe to the OUPblog via email or RSS.

Subscribe to only mathematics articles on the OUPblog via email or RSS.

*Image credits:*


With the arrival of the new year, you can be certain that the annual extravaganza known as the Joint Mathematics Meetings cannot be far behind. This year’s conference is taking place in Baltimore, Maryland. It is perhaps more accurate to say that it is a conference of conferences, since much of the business to be transacted will take place in smaller sessions devoted to this or that branch of mathematics.

The post The *real* unsolved problems of mathematics appeared first on OUPblog.

In these sessions, researchers at the cutting edge of the discipline will discuss the most recent developments on the biggest open problems in the business. It will all be terribly clever and largely impenetrable. You can be certain, however, that the real open questions of mathematics will barely be addressed.

It is hardly a secret that large conferences like this are as much about socialization as they are about research. This presents some problems, since the Joint Meetings can be a minefield of social awkwardness and ambiguous etiquette.

For example, imagine that you are walking across the lobby and you notice someone you know slightly coming the other way. Should you stop and chat? Or is a nod of acknowledgement sufficient? If you do stop, what sort of greeting is appropriate? A handshake? A hug? And how do you exit the conversation once the idle chit chat runs out? Sometimes you stop and chat, and then someone friendlier with the other person arrives to interrupt. One minute you’re making small talk about your recent job histories, and the next you’re just standing there watching your conversation partner make dinner plans with someone who just appeared. Now what do you do? Usually your only course is to mutter something about being late for a talk and then slink off with whatever dignity you can muster.

The exhibition center presents its own problems. How long can you stand in one place perusing a book before it becomes rude? Quite a while, apparently, if we are to judge from some of the stingier characters we inevitably meet. If the book is that interesting, just buy it and be done with it. Come to think of it, when you are standing there looking through books, what is the maximum allowable angle to which you can separate the covers? Cracking the spine is definitely frowned upon. How many Hershey’s miniatures can you reasonably pilfer from the MAA booth? Which book should you buy to burn up your AMS points? Let me suggest that the answer to that one depends on which book will look best on your shelf, since you know full well you are never going to read it.

Actually presenting a talk brings with it some challenges of its own. Perhaps you are giving a contributed talk, and you get the first slot after lunch. So it’s just you, the person speaking after you, and whoever drew the short straw for moderation duty. Do you acknowledge the lack of an audience? Or do you go through the motions like you’re keynoting? After giving your talk, is it acceptable simply to leave? Or are you ethically obligated to stay for the talk right after yours? What do you do if you notice an error in someone else’s talk? Should you expose it to the world during the question period, or just discuss it privately with the speaker afterward?

Perhaps we need a special session to discuss these questions. That, at least, would be a session where everyone could understand what was being said. On the other hand, given the occasionally strained relationship between mathematicians and social graces, perhaps I should not be so cavalier about that.

Jason Rosenhouse is Associate Professor of Mathematics at James Madison University. He is the author of Taking Sudoku Seriously: The Math Behind the World’s Most Popular Pencil Puzzle with Laura Taalman; The Monty Hall Problem: The Remarkable Story of Math’s Most Contentious Brain Teaser; and Among The Creationists: Dispatches from the Anti-Evolutionist Front Lines. Read Jason Rosenhouse’s previous blog articles.

Subscribe to the OUPblog via email or RSS.

Subscribe to only mathematics articles on the OUPblog via email or RSS.

*Image credit: Complex formulae on a blackboard. © JordiDelgado via iStockphoto. *


Almost exactly twenty years ago, on 19 October 1993, the US House of Representatives voted 264 to 159 to reject further financing for the Superconducting Super Collider (SSC), the particle accelerator being built under Texas. $2bn had already been spent on the Collider, and its estimated total cost had grown from $4.4bn to $11bn; a budget saving of $9bn beckoned. Later that month President Clinton signed the bill officially terminating the project.

The post The legacy of the Superconducting Super Collider appeared first on OUPblog.

]]>


This was not good news for two of my Harvard roommates, PhD students in theoretical physics. Seeing the academic job market for physicists collapsing around them, they both found employment at a large investment bank in New York in the nascent field of quantitative finance. It was their assertion that derivative markets, whatever in fact they were, seemed mathematically challenging that catalyzed my own move to Wall Street from an academic career.

The cohort of PhDs in science, technology, engineering, and mathematics that moved to finance from academia in the early 1990s (a cohort I have called the “SSC generation”) sparked a remarkable growth in the sophistication and complexity of financial markets. They built models which enabled banks and hedge funds to price and trade complex financial instruments called derivatives, contracts whose value derives from the levels of other financial variables, such as the price of the Japanese Yen or a collection of mortgages on apartments in Florida. They created a new subject, known as financial engineering or quantitative finance, and a brand new career path, that of quantitative analyst (“quant”), a vocation that became so popular — for its monetary rewards certainly, but also for its dynamism and innovation — that by June 2008, 28% of graduating Harvard seniors going into full time employment were heading to finance.

However, just as some investors in 2007-2008 were questioning the inexorable rise in house prices and the potential for a market bubble, so too were many students questioning their own career choices, sensing the possibility of a career bubble. As Harvard University President Drew Faust said in her first address to the senior class in June 2008, “You repeatedly asked me: Why are so many of us going to Wall Street?”

Three months later, both market and career bubbles collapsed as Lehman Brothers filed for bankruptcy. In the midst of the financial crisis, on 3 October 2008, the House of Representatives voted 263 to 171 to pass the Emergency Economic Stabilization Act, authorizing the Treasury secretary to spend $700bn — roughly 65 Super Colliders — to purchase distressed assets.

What went wrong? While the causes of the financial crisis have been widely debated, it is clear that many financial engineers were caught in what I have termed the “quant delusion,” an over-confidence in and over-reliance on mathematical models. The edifice of quantitative finance built over 15 years by the SSC generation was dramatically rocked by the events of 2008. Fundamental logical arguments that practitioners had taken for granted were shown not to hold. Decades of modeling advances were revealed to be invalid or thrown into question.

It is hard to prove a direct causal link between the cancellation of the SSC, the rise of financial engineering, and the chaos of 2008. However, if some roots of the financial crisis can be traced, however distantly, to October 1993, might one consequence of the financial crisis itself be a healthy reassessment of career choices amongst graduates?

I encounter evolving attitudes among students in the class that I teach at Harvard, Statistics 123, “Applied Quantitative Finance”. Many still plan a future on Wall Street, and are motivated by the mathematical challenges and dynamic environment ahead of them. Some are interested in the elegant mathematical and probabilistic theory that underlies derivatives markets, and are keen to understand the way of thinking that exists on Wall Street. Others appreciate that they have a broad range of equally compelling career options, whether in technology, life sciences, climate science, or fundamental research, and take my course simply because they have enjoyed their introduction to probability and want to experience one of its most compelling applications.

Stephen Blyth is Professor of the Practice of Statistics at Harvard University, and Managing Director at the Harvard Management Company. His book, An Introduction to Quantitative Finance, was published by Oxford University Press in November 2013.

Subscribe to the OUPblog via email or RSS.

Subscribe to only mathematics articles on the OUPblog via email or RSS.

*Image credit: graphs and charts. © studiocasper via iStockphoto. *


]]>“This is not maths – maths is about doing calculations, not proving theorems!” So wrote a disaffected student at the end of my recent pure maths lecture course. Theorems, along with their proofs, have gotten a bad name.

The post Let them eat theorems! appeared first on OUPblog.

]]>


The first (and often only) theorem most people encounter is Pythagoras’ Theorem, discovered over 2,500 years ago: that if you square the lengths of the two perpendicular sides of a right-angled triangle and add these numbers together then you get the square of the length of the third side. To many, the name Pythagoras conjures up memories of eccentric maths teachers enthusing over spiders’ webs of lines. Yet, if the writers of the software underlying your computer had not known their Pythagoras and other such theorems, you would not now be viewing this neatly aligned text or navigating around your screen at the touch of a mouse.
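In symbols, with the usual labels: if the two perpendicular sides have lengths *a* and *b*, and the third side (the hypotenuse) has length *c*, the theorem says

```latex
a^2 + b^2 = c^2
```

so, for example, a triangle with perpendicular sides 3 and 4 has a third side of length 5, since 9 + 16 = 25.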

A theorem is the name for an incontrovertible mathematical fact, a statement that is an unavoidable consequence of precisely defined terms or facts that have already been established. Pythagoras’ Theorem follows inexorably from the notions of a straight line, a right-angle and length. A couple of hundred years later, Euclid formulated his theorems or ‘Propositions’ of geometry which became the foundation of western mathematical education for the next 2000 years. My favourite is the Intersecting Chord Theorem: if you draw two intersecting straight lines across a circle and multiply together the lengths of the parts of the chords on either side of the intersection point then you get the same answer for both chords (see diagram). This is a remarkable statement: there seems no obvious reason why it should be so. Yet it is an inevitable consequence of the definition of a circle. Sadly, learning the formal propositions of Euclid by rote, as they were often taught in the past, may have hidden their substance and elegance and turned off many budding mathematicians.
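The Intersecting Chord Theorem is easy to check numerically. The sketch below (my own illustration, not part of the original post) fixes a point inside a unit circle, draws chords through it in two random directions, and compares the products of the segment lengths on either side of the point:

```python
import math
import random

random.seed(1)

def chord_product(P, theta, r=1.0):
    """Product of the two chord segments on either side of interior
    point P, for the chord through P in direction theta, in a circle
    of radius r centred at the origin."""
    d = (math.cos(theta), math.sin(theta))
    # Points on the chord are P + t*d; |P + t*d| = r gives a quadratic:
    #   t^2 + 2*t*(P . d) + (|P|^2 - r^2) = 0
    b = P[0] * d[0] + P[1] * d[1]
    c = P[0] ** 2 + P[1] ** 2 - r ** 2
    t1 = -b + math.sqrt(b * b - c)   # signed distances from P
    t2 = -b - math.sqrt(b * b - c)   # to the two chord ends
    return abs(t1) * abs(t2)

P = (0.3, -0.4)  # any point inside the circle
p1 = chord_product(P, random.uniform(0, math.pi))
p2 = chord_product(P, random.uniform(0, math.pi))
print(p1, p2)  # equal, as the theorem promises
```

Both products come out to r² − |P|² (here 0.75), which is why the answer cannot depend on the direction of the chord: it is an inevitable consequence of the definition of a circle, just as the post says.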

Many further geometrical theorems have been established since Euclid’s days, some with evocative names. The Ham Sandwich Theorem says that given three objects there is always a plane that simultaneously divides each object into two parts of equal volume; thus a sandwich can always be divided by a straight slice so that the bread, butter, and ham are all equally divided between the two portions. Then, according to the Hairy Ball Theorem, it is impossible to comb a sphere covered with hair or fur in such a way that the hairs lie down smoothly everywhere on the sphere. One consequence, perhaps reassuring at times of extreme weather, is that at any instant there is somewhere on the earth’s surface where there is no wind.

The Mandelbrot set has become an icon recognised by many with little or no mathematical knowledge but who have been fascinated by its intriguing beauty. The Fundamental Theorem of the Mandelbrot Set, as it is sometimes called, relates geometrical aspects of this extraordinarily complicated object to the simple formula *z*^{2} + *c*. The theorem was contained in the writings of Pierre Fatou and Gaston Julia back in 1919, but was virtually forgotten until in the mid-1970s Mandelbrot’s computer images revealed the set’s intricate detail. A picture can bring a theorem to life!
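That simple formula is all one needs for a practical membership test. Here is a minimal escape-time sketch of my own (using the standard facts that |z| > 2 guarantees escape, and truncating the iteration at a finite depth, so points very near the boundary may be misclassified):

```python
def in_mandelbrot(c, max_iter=100):
    """c is in the Mandelbrot set if z -> z*z + c, started from z = 0,
    never escapes to infinity; |z| > 2 guarantees escape, so we iterate
    up to max_iter and treat survival as membership."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0))    # True:  0 -> 0 -> 0 -> ...
print(in_mandelbrot(-1))   # True:  0 -> -1 -> 0 -> -1 -> ...
print(in_mandelbrot(1))    # False: 0 -> 1 -> 2 -> 5 -> 26 -> ...
```

Colouring each point of the complex plane by how quickly it escapes is what produces the intricate images Mandelbrot computed in the mid-1970s.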

Of course, not all theorems are about geometry. Some concern properties of numbers; perhaps the most famous is Fermat’s Last Theorem, that the equation *x*^{n} + *y*^{n} = *z*^{n} has no solutions in positive whole numbers when *n* is an integer greater than 2. Stated by Fermat around 1637, it resisted proof until Andrew Wiles completed one in the 1990s.

Theorems are the pillars of mathematics. New theorems, often building on the foundations of earlier ones, are continually being proved. Yes, some may be esoteric, but others have been fundamental in the development of things that we take for granted, such as Stokes’ Theorem for electronic communication and fluid flow. And, though I obviously failed to convince my student, they are the basis for many of the calculations undertaken daily by scientists and engineers.

Kenneth Falconer is author of Fractals: A Very Short Introduction and Fractal Geometry: Mathematical Foundations and Applications (Wiley, 2014). He has been Professor of Pure Mathematics at the University of St Andrews since 1993.

The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with OUPblog and the VSI series every Friday and like Very Short Introductions on Facebook.

Subscribe to the OUPblog via email or RSS.

Subscribe to Very Short Introductions articles on the OUPblog via email or RSS.

Subscribe to only mathematics articles on the OUPblog via email or RSS.

*Image credits: 1) Figure drawn by author; 2) Image computed by Ben Falconer*

The post Let them eat theorems! appeared first on OUPblog.

]]>

The post Making sense with data visualisation appeared first on OUPblog.

]]>

Statistics to me has always been about trying to make the best sense of incomplete information and having some feeling for how good that ‘best sense’ is. At a very crude level if you have a firm employing 235 people and you randomly sample 200 of these on some topic, I would feel my information was pretty good (even though it is incomplete). If the information I have is based on a sample of five people or I have asked all the people in one office, then I would know my information was nothing like as good as in the former case.
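That crude intuition can be made quantitative. A minimal Python sketch (my illustration, not the author's, assuming a 50% population proportion) uses the finite population correction, which shrinks the standard error of a sample estimate as the sample approaches the whole firm:

```python
import math

def se_proportion(n, N, p=0.5):
    """Standard error of a sample proportion from a simple random sample
    of n out of N people, with the finite population correction."""
    fpc = math.sqrt((N - n) / (N - 1))   # shrinks toward 0 as n approaches N
    return math.sqrt(p * (1 - p) / n) * fpc

N = 235  # the firm in the example
for n in (5, 50, 200):
    print(n, round(se_proportion(n, N), 4))
```

Sampling 200 of the 235 gives a standard error of roughly 0.014, against roughly 0.22 for a sample of five: 'pretty good' information versus 'nothing like as good', exactly as the text says.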

More than ever, in the current International Year of Statistics, there is an acceptance that understanding quantitative information is a necessary skill in almost any academic discipline and in almost all professional jobs (and very many jobs at lower levels). Statistics is used in a wide range of contexts, such as the physical, life and social sciences, sports, marketing, finance, geography, and psychology. In fact it’s used anywhere there is interesting data, and with supporting visual explanations of what is happening in various statistical techniques, it need not be an intimidating area to be involved in.

I am currently doing some work at Durham on data visualisation, including on education performance data, the 2011 UK riots, and health. For example, interactive data resources show the proportions of pupils gaining five good GCSEs (with and without a requirement to include English and Maths), disaggregated by sex, ethnicity and whether they are eligible for free school meals. The first screen shot shows boys’ performance rates for various ethnic groups and how eligibility for free school meals varies across ethnic groups. You can see it’s very dramatic in both white and mixed groups, and much more modest for asian, black and other groups, and almost non-existent for the Chinese. The second screen shot shows how the display changes if the bottom slider is moved to change the performance measure to remove the requirement for English and Maths. The position of the variables can be moved (just drag and drop) to different positions to allow other comparisons to be made directly, and to develop a real sense of the stories in the data.

It would be much more logical if social scientists wanting to put forward theoretical explanations for inequalities in health, in education, in crime etc., were able to explore the data actively in an interface like this – to develop a rich picture of the relationships between factors, which are important and which less so, where particular combinations of factors give unexpected outcomes – and then to try to provide theory which is consistent with the observed patterns of behaviour.

Additionally, I have just started work on a new project on visualisations of 2011 UK Census data, and with the Imperial College Reach Out Lab on supporting data sharing in science. Essentially there is a Practice Transfer Partnership of HE Reach Out Labs in which we are trying to develop experiments with more variables that different institutions will collaborate on, to build a large multivariate data set which students and teachers would then be able to access through our visualisation tools. The ambition is to tie more mathematics in with authentic scientific enquiry, so the collaboration between Science and Mathematics has real potential to make mathematics and statistics more directly and obviously relevant to students.

James Nicholson is the author of Statistics S1 and Statistics S2 in the A Level Mathematics for Edexcel course published by Oxford University Press. He is also Principal Research Fellow at the SMART Centre at Durham University.

*Image credit: Graphs created by James Nicholson. Used with permission. All rights reserved. *

The post Making sense with data visualisation appeared first on OUPblog.

]]>The post Why launch a new journal? appeared first on OUPblog.

]]>**Why have you decided to launch a new journal of survey research?**

Well, we thought the field of survey research needed a flagship journal and, fortunately for us, the two largest professional organizations for survey researchers — the American Association for Public Opinion Research (AAPOR) and the American Statistical Association (ASA) — shared our view. These organizations have agreed to sponsor the new journal. AAPOR will make the journal available to its more than 2,000 members as part of their annual dues — that is, at no added cost to them. And ASA will offer a similar deal to the 1,000+ members of its Survey Research Methods Section.

**Isn’t there a danger of journal overload? How did you weigh such considerations?**

Articles on survey statistics and methodology have traditionally been scattered across journals that focus primarily on statistics, sociology, political science, communications, epidemiology, demography, and a range of other disciplines. We thought it was time to have a journal that would focus only on survey statistics and methodology. Of course, there are now journals devoted mainly to survey topics, such as the *Journal of Official Statistics* and *Survey Methodology*. However, as valuable as these journals are, they are sponsored by government agencies and we believe that the flagship journal for the field should have the backing of the largest, most prestigious professional organizations for survey researchers. Hence, the new journal.

**How has the field changed in the last 25 years?**

The field has grown up. In the United States, three programs — at the University of Maryland, the University of Michigan, and the University of Nebraska — now offer doctoral degrees in survey methodology. There are also academic programs in survey methodology in the United Kingdom and elsewhere in Europe. In the United States alone, more than forty doctorates in survey methodology have been awarded. There are now textbooks covering every aspect of survey statistics and methodology. Survey statistics and methodology has become a fully-fledged discipline and we believe the time is ripe for it to have a journal that reflects that status.

**What are some of the latest developments in survey research?**

This may be a pivotal time for surveys. Survey costs are spiraling upward, response rates are falling, and many of the government agencies that sponsor surveys are likely to face serious budget cuts in the coming years. Moreover, partly in response to these problems, some researchers are giving up on probability sampling, a mainstay for survey research for the last sixty years. At the same time, everyone seems to want estimates based on survey data, often for ever-smaller areas or subgroups, and to make policy decisions based on these estimates.

Despite all these worrisome developments, surveys still seem to give accurate results. Whatever their problems, the polls were able to forecast the outcome of the 2012 elections with almost uncanny accuracy. Similarly, according to Census Bureau evaluations, the 2010 census may have been the most accurate census ever done.

**What do you hope to see in the coming years from both the field and the journal?**

We hope that authors will surprise us with articles describing good work in areas we had not anticipated and we promise to be open to such work. Most of all, we hope that the journal becomes a fount of high-quality research in all areas of survey statistics and methodology.

Joseph Sedransk is Professor Emeritus of Statistics at Case Western Reserve University. Roger Tourangeau is a Vice President at Westat. Before going to Westat, he headed the Joint Program in Survey Methodology at the University of Maryland for nearly 10 years; during this time, he was also a Research Professor in the University of Michigan’s Survey Research Center. Joseph Sedransk is the editor for statistical papers and Roger Tourangeau the editor for methodological papers for the new Journal of Survey Statistics and Methodology.

The Journal of Survey Statistics and Methodology, sponsored by AAPOR and the American Statistical Association, will begin publishing in 2013. Its objective is to publish cutting edge scholarly articles on statistical and methodological issues for sample surveys, censuses, administrative record systems, and other related data. It aims to be the flagship journal for research on survey statistics and methodology.

Subscribe to only social sciences articles on the OUPblog via email or RSS.

*Image credit: Check mark. 3D composition showing a concept of selection. Image by ricardoinfante, iStockphoto.*

The post Why launch a new journal? appeared first on OUPblog.

]]>

The post Tragedy of the science-communication commons appeared first on OUPblog.

]]>

There’s a prevailing notion that communicating science is difficult, and it is therefore difficult to engage the general public. People can be fazed by statistics in particular, so how can we convey the importance of this science effectively?

I’ve earlier written that science is science communication — that is, the act of communicating scientific ideas and findings to ourselves and others is itself a central part of science. My point was to push against a conventional separation between the act of science and the act of communication, the idea that science is done by scientists and communication is done by communicators. It’s a rare bit of science that does not include communication as part of it. As a scientist and science communicator myself, I’m particularly sensitive to devaluing of communication. (For example, Bayesian Data Analysis is full of original research that was done in order to communicate; or, to put it another way, we often think we understand a scientific idea, but once we try to communicate it, we recognize gaps in our understanding that motivate further research.)

I once saw the following on one of those inspirational-sayings-for-every-day desk calendars: “To have ideas is to gather flowers. To think is to weave them into garlands.” Similarly, writing — more generally, communication to oneself or others — forces logic and structure, which are central to science.

Dan Kahan saw what I wrote and responded by flipping it around: He pointed out that there is a science of science communication. As scientists, we should move beyond the naive view of communication as the direct imparting of facts and ideas. We should think more systematically about how communications are produced and how they are understood by their immediate and secondary recipients.

The science of science communication is still in its early stages, and I’m glad that people such as Kahan are working on it. Here’s something he wrote recently explicating his theory of cultural cognition:

The motivation behind this research has been to understand the science communication problem. The “science communication problem” (as I use this phrase) refers to the failure of valid, compelling, widely available science to quiet public controversy over risk and other policy relevant facts to which it directly speaks. The climate change debate is a conspicuous example, but there are many others, including (historically) the conflict over nuclear power safety, the continuing debate over the risks of HPV vaccine, and the never-ending dispute over the efficacy of gun control…. The research I will describe reflects the premise that making sense of these peculiar packages of types of people and sets of factual beliefs is the key to understanding—and solving—the science communication problem. The cultural cognition thesis posits that people’s group commitments are integral to the mental processes through which they apprehend risk…

I think of Kahan as part of a loose network of constructive skeptics, along with various people including Thomas Basbøll, John Ioannidis, the guys at Retraction Watch, bloggers such as Felix Salmon, and a whole bunch of psychology researchers such as Wicherts, Wagenmakers, Simonsohn, Nosek, etc. This doesn’t represent a complete list but rather is intended to give a sense of the different aspects of this movement-without-a-name. Ten or twenty or thirty years ago, I don’t think such a movement existed. There were concerns about individual studies or research programs, but not such a sense of a statistics-centered crisis in science as a whole.

Andrew Gelman is a Professor in the Department of Statistics at Columbia University. He is the co-author of Teaching Statistics: A Bag of Tricks with Deborah Nolan. Read his blog Statistical Modeling, Causal Inference, and Social Science.

The post Tragedy of the science-communication commons appeared first on OUPblog.

]]>

The post Symmetry is transformation appeared first on OUPblog.

]]>

Symmetry has been recognised in art for millennia as a form of visual harmony and balance, but it has now become one of the great unifying principles of mathematics. A precise mathematical concept of symmetry emerged in the nineteenth century, as an unexpected side-effect of research into algebraic equations. Since then it has developed into a huge area of mathematics, with applications throughout the sciences.

Today we usually think of symmetry as a regularity of visual pattern—the sixfold symmetry of a snowflake, the circular symmetry of ripples on a pond, the spherical symmetry of a droplet of water or a planet. Here the role of symmetry is mainly descriptive. But there is a sense in which a natural *process* can also be symmetric, and the mathematics of symmetry can predict the results of that process, helping us to understand how nature’s patterns arise.

The key step towards a rigorous notion of symmetry arose not in geometry, but in algebra: attempts to solve quintic equations. The ancient Babylonians knew how to solve quadratic equations, and Renaissance Italian mathematicians discovered how to solve cubic and quartic equations, but here, everyone got stuck. Eventually, it turned out that no solution of the required kind exists for the general quintic equation.

The deep reason for this impossibility lies in the symmetries of the equation, which are the possible ways to permute its solutions while preserving all algebraic relations among them. When an equation has ‘the wrong kind of symmetry’ it can’t be solved by a formula of the traditional type. And equations of the fifth degree have the wrong kind of symmetry.

Mathematicians realised that symmetry is not a thing, but a *transformation*: a way to move or otherwise disturb something while—paradoxically—leaving it unchanged. For example, to a good approximation a human figure viewed in a mirror looks just like the original. Mixing up the roots of an equation doesn’t change suitable formulas in which they appear. Rotating a sphere through some angles produces an identical sphere.

The collection of all such transformations is called the symmetry group of the object; the structure of this group provides a powerful way to find out how the object behaves. The upshot of this discovery was a new, abstract branch of algebra: group theory.
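As a concrete illustration (my own, not from the post), the four rotations of a square form a small symmetry group. A few lines of Python can represent each rotation as a permutation of the corners and verify the group properties by composition:

```python
from itertools import product

# The four rotations of a square, written as permutations of its corners:
# rot[i] is where corner i ends up after the rotation.
e  = (0, 1, 2, 3)          # identity: rotate by 0 degrees
r  = (1, 2, 3, 0)          # rotate by 90 degrees
r2 = (2, 3, 0, 1)          # rotate by 180 degrees
r3 = (3, 0, 1, 2)          # rotate by 270 degrees
group = [e, r, r2, r3]

def compose(p, q):
    """Apply q first, then p (composition of transformations)."""
    return tuple(p[q[i]] for i in range(4))

# Closure: composing any two rotations gives another rotation in the set.
assert all(compose(p, q) in group for p, q in product(group, repeat=2))
# Identity and inverses: every rotation can be undone by another one.
assert all(any(compose(p, q) == e for q in group) for p in group)
print(compose(r, r3))      # a 90 + 270 degree turn is the identity: (0, 1, 2, 3)
```

The same bookkeeping, scaled up, is what group theory does for crystals, molecules, and equations: the object changes, but the algebra of its transformations is studied in exactly this way.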

Groups turned out to be fundamental to the study of crystals; the form and behaviour of a crystal depends on the symmetry group of its atomic lattice. Groups are also vital to chemistry: the way a molecule vibrates depends on its symmetries. The symmetries of a uniformly flat desert determine the possible patterns of sand dunes when the flat pattern becomes unstable. The symmetries of biological tissue determine the possible patterns of animal markings, such as stripes and spots. The symmetries of a cloud of gas determine the spiral form of a galaxy. The symmetries of space and time underpin Einstein’s theories of special and general relativity. The symmetries of fundamental particles constrain quantum field theory and affect the possibilities for unifying it with relativity.

Symmetry is such a huge idea, with so many diverse ramifications, that only an encyclopaedia could really do it justice. But it is possible to sketch its origins, give some idea of how the formal theory works out, sample its applications, and witness its diversity and generality. Moreover, the subject has great visual beauty and appeal: here, for once, mathematics can be a spectator sport, and audience participation is not mandatory.

I have spent much of my research career working on connections between symmetries and nature’s patterns, in fluid flow, animal movement, visual perception, and evolutionary biology—and I am just one of many. The well is nowhere near running dry. New applications are constantly being found. Symmetry is one of the truly deep concepts, possessing both visual and logical beauty. Its effects can be seen everywhere, if you know how to look.

Ian Stewart is Emeritus Professor of Mathematics at Warwick University. He is a well-established communicator of mathematics, and the author of over 80 books, including several on the subject of symmetry, such as Symmetry: A Very Short Introduction. His summary of the problems of mathematics, From Here to Infinity, and collections of his columns from Scientific American (How to Cut a Cake, Cows in the Maze), have been very successful, and his recent book Professor Stewart’s Cabinet of Mathematical Curiosities has been a bestseller.

*Image credits: Symmetrical landscape, By Johann Jaritz (Own work), Creative Commons Licence via Wikimedia Commons*

The post Symmetry is transformation appeared first on OUPblog.

]]>

The post Memories of undergraduate mathematics appeared first on OUPblog.

]]>

Two contrasting experiences stick in mind from my first year at university.

First, I spent a lot of time in lectures that I did not understand. I don’t mean lectures in which I got the general gist but didn’t quite follow the technical details. I mean lectures in which I understood not one thing from the beginning to the end. I still went to all the lectures and wrote everything down – I was a dutiful sort of student – but this was hardly the ideal learning experience.

Second, at the end of the year, I was awarded first class marks. The best thing about this was that later that evening, a friend came up to me in the bar and said, “Hey Lara, I hear you got a first!” and I was rapidly surrounded by other friends offering enthusiastic congratulations. This was a revelation. I had attended the kind of school at which students who did well were derided rather than congratulated. I was delighted to find myself in a place where success was celebrated.

Looking back, I think that the interesting thing about these two experiences is the relationship between the two. How could I have done so well when I understood so little of so many lectures?

I don’t think that there was a problem with me. I didn’t come out at the very top, but obviously I had the ability and dedication to get to grips with the mathematics. Nor do I think that there was a problem with the lecturers. Like the vast majority of the mathematicians I have met since, my lecturers cared about their courses and put considerable effort into giving a logically coherent presentation. Not all were natural entertainers, but there was nothing fundamentally wrong with their teaching.

I now think that the problems were more subtle, and related to two issues in particular.

First, there was a communication gap: the lecturers and I did not understand mathematics in the same way. Mathematicians understand mathematics as a network of axioms, definitions, examples, algorithms, theorems, proofs, and applications. They present and explain these, hoping that students will appreciate the logic of the ideas and will think about the ways in which they can be combined. I didn’t really know how to learn effectively from lectures on abstract material, and research indicates that I was pretty typical in this respect.

Students arrive at university with a set of expectations about what it means to ‘do mathematics’ – about what kind of information teachers will provide and about what students are supposed to do with it. Some of these expectations work well at school but not at university. Many students need to learn, for instance, to treat definitions as stipulative rather than descriptive, to generate and check their own examples, to interpret logical language in a strict, mathematical way rather than a more flexible, context-influenced way, and to infer logical relationships within and across mathematical proofs. These things are expected, but often they are not explicitly taught.

My second problem was that I didn’t have very good study skills. I wasn’t terrible – I wasn’t lazy, or arrogant, or easily distracted, or unwilling to put in the hours. But I wasn’t very effective in deciding how to spend my study time. In fact, I don’t remember making many conscious decisions about it at all. I would try a question, find it difficult, stare out of the window, become worried, attempt to study some section of my lecture notes instead, fail at that too, and end up discouraged. Again, many students are like this. I have met a few who probably should have postponed university until they were ready to exercise some self-discipline, but most do want to learn.

What they lack is a set of strategies for managing their learning – for deciding how to distribute their time when no-one is checking what they’ve done from one class to the next, and for maintaining momentum when things get difficult. Many could improve their effectiveness by doing simple things like systematically prioritizing study tasks, and developing a routine in which they study particular subjects in particular gaps between lectures. Again, the responsibility for learning these skills lies primarily with the student.

Personally, I never got to a point where I understood every lecture. But I learned how to make sense of abstract material, I developed strategies for studying effectively, and I maintained my first class marks. What I would now say to current students is this: take charge. Find out what lecturers and tutors are expecting, and take opportunities to learn about good study habits. Students who do that should find, like I did, that undergraduate mathematics is challenging, but a pleasure to learn.

Lara Alcock is a Senior Lecturer in the Mathematics Education Centre at Loughborough University. She has taught both mathematics and mathematics education to undergraduates and postgraduates in the UK and the US. She conducts research on the ways in which undergraduates and mathematicians learn and think about mathematics, and she was recently awarded the Selden Prize for Research in Undergraduate Mathematics Education. She is the author of How to Study for a Mathematics Degree (2012, UK) and How to Study as a Mathematics Major (2013, US).

Subscribe to only education articles on the OUPblog via email or RSS.

*Image credit: Screenshot of Oxford English Dictionary definition of mathematics, n., via OED Online. All rights reserved.*

The post Memories of undergraduate mathematics appeared first on OUPblog.

]]>

The post The map she carried appeared first on OUPblog.

]]>

In the heyday of the British Empire, Britain’s second most-widely-read book, after the Bible, was: (a) *Richard III* (b) *Robinson Crusoe* (c) *The Elements* (d) *Beowulf* ? Why do I ask?

“Since late medieval or early modern time,” Michael Walzer writes in *Exodus and Revolution*, “there has existed in the West a characteristic way of thinking about political change, a pattern that we commonly impose upon events, a story that we repeat to one another. The story has roughly this form: oppression, liberation, social contract, political struggle, new society…. Because of the centrality of the Bible in Western thought and the endless repetition of the story, the pattern has been etched deeply into our political culture. It isn’t only the case that events fall, almost naturally, into an Exodus shape; we work actively to give them that shape.”

The second-most-widely-read book plays that role in Western thought too: (c) *The Elements* by Euclid. Since late medieval or early modern time, there has existed in the West a characteristic way of organizing knowledge, a pattern that we commonly impose upon observations, concepts, and ideas, a pattern we teach our children. Because of the centrality of Euclid in Western education and the endless repetition of his axioms, definitions, theorems and proofs, the pattern has been etched deeply into our intellectual culture. It isn’t only the case that knowledge falls, almost naturally, into a Euclidean shape; we work actively to give it that shape.

Euclid was the geometry of the medieval university and the bedrock of European education for centuries. It wasn’t just about the triangles; Euclid sharpened your mind, trained your logic. His clever proofs were the very model of argument. To master Euclid was to master the world, the world around you and beyond. “Nature and Nature’s laws lay hid in night; God said, Let Newton be! and all was light.” And what did Newton’s lamp look like? See for yourself in the *Principia Mathematica*. “All human knowledge begins with intuitions,” said Kant, “proceeds from there to concepts, and ends with ideas.” Where do you think he got that? Euclideana even permeates our politics, but for this blog I’ll stick to science.

Non-Euclidean geometries put an end to that? No, they didn’t. Non-Euclidean geometries substituted one axiom for another, but they kept Euclid’s vision of organized knowledge, his faith in deductive reasoning. Non-Euclidean geometry is as Euclidean as Euclid’s! So is the new, improved axiom set David Hilbert proposed for geometry in the 19th century. (It turned out that Euclid’s wasn’t perfect.) So is the quixotic Russell-Whitehead program, in the early 20th century, to reduce mathematics to logic. Modern mathematics is consciously Euclidean to the core. In 1900, in a still-influential address, David Hilbert proposed rewriting Newton for modern physics along this vision of organized knowledge.

Born in 1894, Dorothy Wrinch grew up in a London suburb. She aced the mathematics program at Cambridge University and then studied logic with Bertrand Russell. The naturalist D’Arcy Thompson was another mentor and friend; his *Growth and Form* was her bible. Tugged by philosophy, mathematics, and biology for a decade, she cast her lot with biology, determined to unravel it through the powerful lens of logic. The model of protein architecture she came up with catalyzed protein chemists despite or because of its weaknesses. Why?

With this map to guide her, she found what she was looking for. “A number of new sciences have passed from the embryonic stage,” she wrote in 1934. “Discarding description as their ultimate purpose, they are now ready to take their places in the world state of science. The thesis which I wish now to develop is but a logical consequence of the thorough-going application of this principle.” Her protein model was one such consequence.

Biology ripe for logic? Some natives were not amused. (Or they were.) “Her idea of science is completely different from theirs,” Linus Pauling put it. You betcha!

Euclid fell from his curricular throne and the British Empire collapsed at about the same time. Quantum mechanics scotched Hilbert’s program and Gödel scotched Russell’s. Biology has resisted Euclid too. Though the structures of thousands of proteins are now known in exact detail, their inner logic remains where Dorothy left it, the brass ring on the Nobel carousel.

Marjorie Senechal is the Louise Wolff Kahn Professor Emerita in Mathematics and History of Science and Technology, Smith College, author of I Died for Beauty: Dorothy Wrinch and the Cultures of Science, and Editor-in-Chief of The Mathematical Intelligencer. At the Joint Mathematics Meetings, AMS-MAA Special Session on the History of Mathematics, II, Room 9, Upper Level, San Diego Convention Center, she is speaking at 5:00 p.m. on Saturday, 12 January, on Biogeometry, 1941.

Subscribe to the OUPblog via email or RSS.

Subscribe to only articles about mathematics on the OUPblog via email or RSS.

The post The map she carried appeared first on OUPblog.

In the last few years algorithmic thinking has become somewhat of a buzzword among computer science educators, and with some justice: the ubiquity of computers in today’s world does make algorithmic thinking a very important skill for almost any student. Although at the present time there are few colleges and universities that require non-computer science majors to take a course exposing them to important issues and methods of algorithmic problem solving, one should expect the number of such schools to grow significantly in the near future.

Algorithmic puzzles, i.e., puzzles that involve clearly defined procedures for solving problems, provide an ideal vehicle to introduce students to major ideas and methods of algorithmic problem solving:

- Algorithmic puzzles force students to think about algorithms on a more abstract level, divorced from programming and computer language minutiae. In fact, puzzles can be used to illustrate major strategies of the design and analysis of algorithms without any computer programming — an important point, especially for courses targeting non-CS majors.
- Solving puzzles helps in developing creativity and problem-solving skills — the qualities any student should strive to acquire.
- Puzzles are fun, and students are usually willing to put more effort into solving them than into doing routine exercises.
- Puzzles provide attractive topics for student research because many of them don’t require an extensive mathematical or computing background.

It’s important to stress that algorithmic puzzles are a serious topic. A few algorithmic puzzles, such as Fibonacci’s Rabbits and Königsberg’s Bridges, played an important role in the history of mathematics. Such well-known and intriguing problems as the Traveling Salesman and the Knapsack Problem, which clearly have a puzzle flavor, lie at the heart of the so-called *P* ≠ *NP* conjecture, the most important open question in modern computer science and mathematics.
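Fibonacci’s Rabbits, in fact, is itself a tiny algorithm. As a sketch (mine, in Python, not from the book): each month every mature pair breeds one new pair, and newborn pairs mature after one month.

```python
def rabbit_pairs(months):
    """Fibonacci's Rabbits: number of pairs alive after the given number of months."""
    mature, newborn = 0, 1          # month 0: a single newborn pair
    for _ in range(months):
        # all current pairs survive; each mature pair breeds one new pair
        mature, newborn = mature + newborn, mature
    return mature + newborn

print([rabbit_pairs(m) for m in range(8)])   # [1, 1, 2, 3, 5, 8, 13, 21]
```

The population counts are exactly the Fibonacci numbers, which is how the sequence entered mathematics in the first place.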

So reader, I would like to challenge you to an algorithmic puzzle, *#136, “Catching a Spy”*:

In a computer game, a spy is located on a one-dimensional line. At time 0, the spy is at location *a*. With each time interval, the spy moves *b* units to the right if *b* ≥ 0 and |*b*| units to the left if *b* < 0. *a* and *b* are fixed integers, but they are unknown to you. Your goal is to identify the spy’s location by asking at each time interval (starting at time 0) whether the spy is currently at some location of your choosing. For example, you can ask whether the spy is currently at location 19, to which you will receive a truthful yes/no answer. If the answer is “yes,” you reach your goal; if the answer is “no,” you can ask the next time whether the spy is at the same or another location of your choice. Devise an algorithm that will find the spy after a finite number of questions.

Leave the answer in the comments below.
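(Spoiler warning for those still working on it.) One standard line of attack, offered here as my own sketch rather than the book’s official solution, is to enumerate all candidate pairs (*a*, *b*) and, at time *t*, ask for the location that the *t*-th candidate pair would predict. Since every integer pair eventually comes up in the enumeration, the question asked at that moment must hit the spy. A small Python simulation:

```python
def int_from_index(n):
    """Enumerate the integers: 0, 1, -1, 2, -2, ..."""
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

def candidate(t):
    """Cantor-style diagonal enumeration of all integer pairs (a, b)."""
    d = 0
    while (d + 1) * (d + 2) // 2 <= t:
        d += 1
    i = t - d * (d + 1) // 2                 # position within diagonal d
    return int_from_index(i), int_from_index(d - i)

def catch_spy(a, b, max_time=10**6):
    """Simulate the game; return the time at which the spy is found."""
    for t in range(max_time):
        guess_a, guess_b = candidate(t)
        # Ask: "Is the spy where the pair (guess_a, guess_b) would put it now?"
        if guess_a + guess_b * t == a + b * t:   # truthful yes/no answer
            return t
    return None                                  # in theory, never reached
```

For instance, `catch_spy(0, 0)` returns 0, and any fixed integer pair is caught no later than the time at which its candidate pair comes up for testing.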

Anany Levitin is a professor of Computing Sciences at Villanova University. He is the co-author of Algorithmic Puzzles with Maria Levitin. He is the author of Introduction to the Design and Analysis of Algorithms, Third edition, a popular textbook on design and analysis of algorithms, which has been translated into Chinese, Greek, Korean, and Russian. He has also published papers on mathematical optimization theory, software engineering, data management, algorithm design, and computer science education.

*Image credit: Leonardo da Pisa, Liber abbaci, Ms. Biblioteca Nazionale di Firenze, Codice magliabechiano cs cI, 2626, fol. 124r Source: Heinz Lüneburg, Leonardi Pisani Liber Abbaci oder Lesevergnügen eines Mathematikers, 2. überarb. und erw. Ausg., Mannheim et al.: BI Wissenschaftsverlag, 1993. Public domain via Wikimedia Commons.*

The post Teaching algorithmic problem-solving with puzzles and games appeared first on OUPblog.

Writing in 1866, the British mathematician John Venn wrote, in reference to the branch of mathematics known as probability theory, “To many persons the mention of Probability suggests little else than the notion of a set of rules, very ingenious and profound rules no doubt, with which mathematicians amuse themselves by setting and solving puzzles.” I suspect many of my students would extend Venn’s quip to the entirety of mathematics. Often they seem to believe, upon entering my classroom for the first time, that a tacit agreement exists between us. They will dutifully memorize whatever rules I give them and apply them with machine-like accuracy at test-time, but to expect anything beyond that is considered a serious breach of etiquette.

I held such views myself, once upon a time. That is why my first visit to the annual Joint Mathematics Meetings, as an undergraduate student in the early nineties, was such an eye-opening experience. This is the largest mathematics conference of the year, held every January in a different city. Almost two decades later, I am still consistently amazed by the sheer variety of things that mathematicians study. Browsing through the program for this year’s edition, which is being held in San Diego, I notice that there are sessions on complex dynamics and celestial mechanics. Continued fractions get their own session, as do coverings of the integers, and frontiers in geomathematics. Financial mathematics gets a session. So does graph theory, and also the history of mathematics. If you prefer, you can go in for the real jawbreakers. They have titles like, “Advances in General Optimization and Global Optimality Conditions for Multiobjective Fractional Programming Based on Generalized Invexity.” For me, reading the program is like listening to opera. I may not understand all the words, but it sure sounds good!

This conference is called the Joint Mathematics Meetings, because it is held jointly between the two major mathematics organizations in the United States: The American Mathematical Society (AMS) and the Mathematical Association of America (MAA). The AMS generally concerns itself with the profession of mathematics and publishes several highly prestigious research journals. The MAA, by contrast, generally focuses on the educational aspects of mathematics. The sessions I listed above are directed towards researchers and are organized by the AMS. MAA sessions tend to have gentler titles. This year they are hosting a session on the beauty and power of number theory; another one on writing, talking, and sharing mathematics; still another on mathematics in industry; and, my personal favorite, a session called, “Where Have All the Zeros Gone?”

The sessions, however, are only the tip of the iceberg. There are also keynote talks featuring the alphas of our profession. In my experience, the main purpose of these talks is to remind you that, your PhD notwithstanding, there are mathematicians out there who are way smarter than you are. There is also the employment center, populated by eager job-seekers who stand out clearly from the other conference attendees, because they are well-dressed. There is also the exhibition center, in which every mathematical publisher on the planet shows off its latest books. For an impulse buyer like me, this is a dangerous place.

Which brings me back to the John Venn quote with which I started and the question at the top of this essay. Yes, I suppose we do spend a lot of time setting and solving puzzles. We dutifully apply the rules of proper inference to the abstract objects that have caught our fancy, thereby producing publishable theorems. That, however, is really a very small part of what mathematicians do.

You see, more than anything else, to be a mathematician is to be part of a community. Whatever else it is, mathematics is a social activity undertaken by human beings to further human goals and purposes. The main point of the conference is not to transact mathematical business, though that is certainly important. Rather, the point is to socialize, to renew old friendships, and to engage in casual conversations. The point is to remind you that mathematics is not about ivory tower theorizing, but about being part of a community that is united by its love for, and its belief in the importance of, mathematics. This applies whether your focus is on pure mathematics or applied mathematics. It does not matter whether you prefer teaching, research or community outreach. It includes elementary school teachers showing grade-schoolers the mechanics of basic arithmetic, high school teachers giving students their first taste of higher-level math, and graduate school professors at the frontiers of modern research. It also includes the students who will form the next generation not just of professional mathematicians, but of mathematically informed lay people as well.

All are part of the same community, and all are essential to the continued health of our discipline.

Jason Rosenhouse is Associate Professor of Mathematics at James Madison University. His most recent book is Among The Creationists: Dispatches from the Anti-Evolutionist Front Lines. He is also the author of Taking Sudoku Seriously: The Math Behind the World’s Most Popular Pencil Puzzle with Laura Taalman and The Monty Hall Problem: The Remarkable Story of Math’s Most Contentious Brain Teaser. Read Jason Rosenhouse’s previous blog articles.

*Image credit: John Venn. Public domain via Wikimedia Commons.*

The post What do mathematicians do? appeared first on OUPblog.

This year, 2012, marks the 325th anniversary of the first publication of the legendary *Principia* (*Mathematical Principles of Natural Philosophy*), the 500-page book in which Sir Isaac Newton presented the world with his theory of gravity. It was the first comprehensive scientific theory in history, and it’s withstood the test of time over the past three centuries.

Unfortunately, this superb legacy is often overshadowed, not just by Einstein’s achievement but also by Newton’s own secret obsession with Biblical prophecies and alchemy. Given these preoccupations, it’s reasonable to wonder if he was quite the modern scientific guru his legend suggests, but personally I’m all for celebrating him as one of the greatest geniuses ever. Although his private obsessions were excessive even for the seventeenth century, he was well aware that in eschewing metaphysical, alchemical, and mystical speculation in his *Principia*, he was creating a new way of thinking about the fundamental principles underlying the natural world. To paraphrase Newton himself, he changed the emphasis from metaphysics and mechanism to experiment and mathematical analogy. His method has proved astonishingly fruitful, but initially it was quite controversial.

He had developed his theory of gravity to explain the cause of the mysterious motion of the planets through the sky: in a nutshell, he derived a formula for the force needed to keep a planet moving in its observed elliptical orbit, and he connected this force with everyday gravity through the experimentally derived mathematics of falling motion. Ironically (in hindsight), some of his greatest peers, like Leibniz and Huygens, dismissed the theory of gravity as “mystical” because it was “too mathematical.” As far as they were concerned, the law of gravity may have been brilliant, but it didn’t explain how an invisible gravitational force could reach all the way from the sun to the earth without any apparent material mechanism. Consequently, they favoured the mainstream Cartesian “theory”, which held that the universe was filled with an invisible substance called *ether*, whose material nature was completely unknown, but which somehow formed into great swirling whirlpools that physically dragged the planets in their orbits.
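To see why contemporaries found the argument “too mathematical”, here is a standard modern textbook reconstruction (simplified to a circular orbit of radius *r* and period *T*, not Newton’s full elliptical treatment): combining the centripetal-force formula with Kepler’s third law already forces an inverse-square law.

```latex
F = \frac{4\pi^2 m r}{T^2}, \qquad T^2 \propto r^3
\quad\Longrightarrow\quad F \propto \frac{m}{r^2}.
```

Symmetry between the attracting bodies then points to the familiar form *F* = *G m*₁*m*₂/*r*²; it was the genuinely elliptical case that demanded the formidable geometry of *Principia*.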

The only evidence for this vortex “theory” was the physical fact of planetary motion, but this fact alone could lead to any number of causal hypotheses. By contrast, Newton explained the mystery of planetary motion in terms of a known physical phenomenon, gravity; he didn’t need to postulate the existence of fanciful ethereal whirlpools. As for the question of how gravity itself worked, Newton recognized this was beyond his scope — a challenge for posterity — but he knew that for the task at hand (explaining why the planets move) “it is enough that gravity really exists and acts according to the laws that we have set forth and is sufficient to explain all the motions of the heavenly bodies…”

What’s more, he found a way of testing his theory by using his formula for gravitational force to make quantitative predictions. For instance, he realized that comets were not random, unpredictable phenomena (which the superstitious had feared as fiery warnings from God), but small celestial bodies following well-defined orbits like the planets. His friend Halley famously used the theory of gravity to predict the date of return of the comet now named after him. As it turned out, Halley’s prediction was fairly good, although Clairaut — working half a century later but just before the predicted return of Halley’s comet — used more sophisticated mathematics to apply Newton’s laws to make an even more accurate prediction.

Clairaut’s calculations illustrate the fact that despite the phenomenal depth and breadth of *Principia*, it took a further century of effort by scores of mathematicians and physicists to build on Newton’s work and to create modern “Newtonian” physics in the form we know it today. But Newton had created the blueprint for this science, and its novelty can be seen from the fact that some of his most capable peers missed the point. After all, he had begun the radical process of transforming “natural philosophy” into theoretical physics — a transformation from traditional qualitative philosophical speculation about possible causes of physical phenomena, to a quantitative study of experimentally observed physical effects. (From this experimental study, mathematical propositions are deduced and then made general by induction, as he explained in *Principia*.)

Even the secular nature of Newton’s work was controversial (and under apparent pressure from critics, he did add a brief mention of God in an appendix to later editions of *Principia*). Although Leibniz was a brilliant philosopher (and he was also the co-inventor, with Newton, of calculus), one of his stated reasons for believing in the ether rather than the Newtonian vacuum was that God would show his omnipotence by creating something, like the ether, rather than leaving vast amounts of nothing. (At the quantum level, perhaps his conclusion, if not his reasoning, was right.) He also invoked God to reject Newton’s inspired (and correct) argument that gravitational interactions between the various planets themselves would eventually cause noticeable distortions in their orbits around the sun; Leibniz claimed God would have had the foresight to give the planets perfect, unchanging perpetual motion. But he was on much firmer ground when he questioned Newton’s (reluctant) assumption of absolute rather than relative motion, although it would take Einstein to come up with a relativistic theory of gravity.

Einstein’s theory is even more accurate than Newton’s, especially on a cosmic scale, but within its own terms — that is, describing the workings of our solar system (including, nowadays, the motion of our own satellites) — Newton’s law of gravity is accurate to within one part in ten million. As for his method of making scientific theories, it was so profound that it underlies all the theoretical physics that has followed over the past three centuries. It’s amazing: one of the most religious, most mystical men of his age put his personal beliefs aside and created the quintessential blueprint for our modern way of doing science in the most objective, detached way possible. Einstein agreed; he wrote a moving tribute in the London *Times *in 1919, shortly after astronomers had provided the first experimental confirmation of his theory of general relativity:

“Let no-one suppose, however, that the mighty work of Newton can really be superseded by [relativity] or any other theory. His great and lucid ideas will retain their unique significance for all time as the foundation of our modern conceptual structure in the sphere of [theoretical physics].”

Robyn Arianrhod is an Honorary Research Associate in the School of Mathematical Sciences at Monash University. She is the author of Seduced by Logic: Émilie Du Châtelet, Mary Somerville and the Newtonian Revolution and Einstein’s Heroes. Read her previous blog posts.

The post Celebrating Newton, 325 years after Principia appeared first on OUPblog.

29 November 2012 is the 140th anniversary of the death of mathematician Mary Somerville, the nineteenth century’s “Queen of Science”. Several years after her death, Oxford University’s Somerville College was named in her honor — a poignant tribute because Mary Somerville had been completely self-taught. In 1868, when she was 87, she had signed J. S. Mill’s (unsuccessful) petition for female suffrage, but I think she’d be astonished that we’re still debating “the woman question” in science. Physics, in particular — a subject she loved, especially mathematical physics — is still a very male-dominated discipline, and men as well as women are concerned about it.

Of course, science today is far more complex than it was in Somerville’s time, and for the past forty years feminist critics have been wondering if it’s the kind of science that women actually want; physics, in particular, has improved the lives of millions of people over the past 300 years, but it’s also created technologies and weapons that have caused massive human, social and environmental destruction. So I’d like to revisit an old debate: are science’s obstacles for women simply a matter of managing its applications in a more “female-friendly” way, or is there something about its exclusively male origins that has made science itself sexist?

To manage science in a more female-friendly way, it would be interesting to know if there’s any substance behind gender stereotypes such as that women prefer to solve immediate human problems, and are less interested than men in detached, increasingly expensive fundamental research, and in military and technological applications. Either way, though, it’s self-evident that women should have more say in how science is applied and funded, which means it’s important to have more women in decision-making positions — something we’re still far from achieving.

But could the scientific paradigm itself be alienating to women? Mary Somerville didn’t think so, but it’s often argued (most recently by some eco-feminist and post-colonial critics) that the seventeenth-century Scientific Revolution, which formed the template for modern science, was constructed by European men, and that consequently, the scientific method reflects a white, male way of thinking that inherently preferences white men’s interests and abilities over those of women and non-Westerners. It’s a problematic argument, but justification for it has included an important critique of reductionism — namely, that Western male experimental scientists have traditionally studied physical systems, plants, and even human bodies by dissecting them, studying their components separately and losing sight of the whole system or organism.

The limits of the reductionist philosophy were famously highlighted in biologist Rachel Carson’s book, *Silent Spring*, which showed that the post-War boom in chemical pest control didn’t take account of the whole food chain, of which insects are merely a part. Other dramatic illustrations are climate change, and medical disasters like the thalidomide tragedy: clearly, it’s no longer enough to focus selectively on specific problems such as the action of a drug on a particular symptom, or the local effectiveness of specific technologies; instead, scientists must consider the effect of a drug or medical procedure on the whole person, whilst new technological inventions shouldn’t be separated from their wider social and environmental ramifications.

In its proper place, however, reductionism in basic scientific research is important. (The recent infamous comment by American Republican Senate nominee Todd Akin — that women can “shut down” their bodies during a “legitimate rape”, in order not to become pregnant — illustrates the need for a basic understanding of how the various parts of the human body work.) I’m not sure if this kind of reductionism is a particularly male or particularly Western way of thinking, but either way there’s much more to the scientific method than this; it’s about developing testable hypotheses from observations (reductionist or holistic), and then testing those hypotheses in as objective a way as possible. The key thing in observing the world is curiosity, and this is a human trait, discernible in all children, regardless of race or gender. Of course, girls have traditionally faced more cultural restraints than boys, so perhaps we still need to encourage girls to be actively curious about the world around them. (For instance, it’s often suggested that women prefer biology to physics because they want to help people — and yet, many of the recent successes in medical and biological science would have been impossible without the technology provided by fundamental, curiosity-driven physics.)

Like Mary Somerville, I think the scientific method has universal appeal, but I also think feminist and other critics are right to question its patriarchal and capitalist origins. Although science at its best is value-free, it’s part of the broader community, whose values are absorbed by individual scientists. So much so that Yale researchers Moss-Racusin et al. recently uncovered evidence that many scientists themselves, male and female, have an unconscious sexist bias. In their widely reported study, participants judged the applicant behind the same job application (for a lab manager position) to be less competent if it bore a (randomly assigned) female name than if it bore a male name.

In Mary Somerville’s day, such bias was overt, and it had the authority of science itself: women’s smaller brain size was considered sufficient to “prove” female intellectual inferiority. It was bad science, and it shows how patriarchal perceptions can skew the interpretation not just of women’s competence, but also of scientific data itself. (Without proper vigilance, this kind of subjectivity can slip through the safeguards of the scientific method because of other prejudices, too, such as racism, or even the agendas of funding bodies.) Of course, acknowledging the existence of patriarchal values in society isn’t about hating men or assuming men hate women. Mary Somerville met with “the utmost kindness” from individual scientific men, but that didn’t stop many of them from seeing her as the exception that proved the male-created rule of female inferiority. After all, it takes analysis and courage to step outside a long-accepted norm. And so, the “woman question” is still with us — but in trying to resolve it, we might not only find ways to remove existing gender biases, but also broaden the conversation about what sort of science we all want in the twenty-first century.

Robyn Arianrhod is an Honorary Research Associate in the School of Mathematical Sciences at Monash University. She is the author of Seduced by Logic: Émilie Du Châtelet, Mary Somerville and the Newtonian Revolution and Einstein’s Heroes.

*Image credit: Mary Somerville. Public domain via Wikimedia Commons.*

The post What sort of science do we want? appeared first on OUPblog.

Three words to sum up Alan Turing? Humour. He had an impish, irreverent and infectious sense of humour. Courage. Isolation. He loved to work alone. Reading his scientific papers, it is almost as though the rest of the world — the busy community of human minds working away on the same or related problems — simply did not exist. Turing was determined to do it his way. Three more words? A patriot. Unconventional — he was uncompromisingly unconventional, and he didn’t much care what other people thought about his unusual methods. A genius. Turing’s brilliant mind was sparsely furnished, though. He was a Spartan in all things, inner and outer, and had no time for pleasing décor, soft furnishings, superfluous embellishment, or unnecessary words. To him what mattered was the truth. Everything else was mere froth. He succeeded where a better furnished, wordier, more ornate mind might have failed. Alan Turing changed the world.

What would it have been like to meet him? Turing was tallish (5 feet 10 inches) and broadly built. He looked strong and fit. You might have mistaken his age, as he always seemed younger than he was. He was good looking, but strange. If you came across him at a party you would notice him all right. In fact you might turn round and say “Who on earth is that?” It wasn’t just his shabby clothes or dirty fingernails. It was the whole package. Part of it was the unusual noise he made. This has often been described as a stammer, but it wasn’t. It was his way of preventing people from interrupting him, while he thought out what he was trying to say. *Ah – Ah – Ah – Ah – Ah.* He did it loudly.

If you crossed the room to talk to him, you’d probably find him gauche and rather reserved. He was decidedly lah-di-dah, but the reserve wasn’t standoffishness. He was a man of few words, shy. Polite small talk did not come easily to him. He might if you were lucky smile engagingly, his blue eyes twinkling, and come out with something quirky that would make you laugh. If conversation developed you’d probably find him vivid and funny. He might ask you, in his rather high-pitched voice, whether you think a computer could ever enjoy strawberries and cream, or could make you fall in love with it. Or he might ask if you can say why a face is reversed left to right in a mirror but not top to bottom.

Once you got to know him Turing was fun — cheerful, lively, stimulating, comic, brimming with boyish enthusiasm. His raucous crow-like laugh pealed out boisterously. But he was also a loner. “Turing was always by himself,” said codebreaker Jerry Roberts: “He didn’t seem to talk to people a lot, although with his own circle he was sociable enough.” Like everyone else Turing craved affection and company, but he never seemed to quite fit in anywhere. He was bothered by his own social strangeness — although, like his hair, it was a force of nature he could do little about. Occasionally he could be very rude. If he thought that someone wasn’t listening to him with sufficient attention he would simply walk away. Turing was the sort of man who, usually unintentionally, ruffled people’s feathers — especially pompous people, people in authority, and scientific poseurs. He was moody too. His assistant at the National Physical Laboratory, Jim Wilkinson, recalled with amusement that there were days when it was best just to keep out of Turing’s way. Beneath the cranky, craggy, irreverent exterior there was an unworldly innocence though, as well as sensitivity and modesty.

Turing died at the age of only 41. His ideas lived on, however, and at the turn of the millennium *Time* magazine listed him among the twentieth century’s 100 greatest minds, alongside the Wright brothers, Albert Einstein, DNA busters Crick and Watson, and the discoverer of penicillin, Alexander Fleming. Turing’s achievements during his short life were legion. Best known as the man who broke some of Germany’s most secret codes during the war of 1939-45, Turing was also the father of the modern computer. Today, all who click, tap or touch to open are familiar with the impact of his ideas. To Turing we owe the brilliant innovation of storing applications, and all the other programs necessary for computers to do our bidding, inside the computer’s memory, ready to be opened when we wish. We take for granted that we use the same slab of hardware to shop, manage our finances, type our memoirs, play our favourite music and videos, and send instant messages across the street or around the world. Like many great ideas this one now seems as obvious as the wheel and the arch, but with this single invention — the stored-program universal computer — Turing changed the way we live. His universal machine caught on like wildfire; today personal computer sales hover around the million-a-day mark. In less than four decades, Turing’s ideas transported us from an era where ‘computer’ was the term for a human clerk who did the sums in the back office of an insurance company or science lab, into a world where many young people have never known life without the Internet.

B. Jack Copeland is the Director of the Turing Archive for the History of Computing, and author of Turing: Pioneer of the Information Age, Alan Turing’s Electronic Brain, and Colossus. He is the editor of The Essential Turing. Read the new revelations about Turing’s death after Copeland’s investigation into the inquest.

Visit the Turing hub on the Oxford University Press UK website for the latest news in the Centenary year. Read our previous posts on Alan Turing, including: “Maurice Wilkes on Alan Turing” by Peter J. Bentley, “Turing: the irruption of Materialism into thought” by Paul Cockshott, “Alan Turing’s Cryptographic Legacy” by Keith M. Martin, “Turing’s Grand Unification” by Cristopher Moore and Stephan Mertens, “Computers as authors and the Turing Test” by Kees van Deemter, and “Alan Turing, Code-Breaker” by Jack Copeland.

For more information about Turing’s codebreaking work, and to view digital facsimiles of declassified wartime ‘Ultra’ documents, visit The Turing Archive for the History of Computing. There is also an extensive photo gallery of Turing and his war at www.the-turing-web-book.com.

Subscribe to the OUPblog via email or RSS.

Subscribe to only British history articles on the OUPblog via email or RSS.


The post Summing up Alan Turing appeared first on OUPblog.

]]>

As well as Halloween, Guy Fawkes, and All Saints’ Day, this time of the year used to see another day of fun and frenzy. ‘Almanac Day’, towards the end of November, saw the next year’s almanacs go on sale. It generally came round on or about 22 November: St Cecilia’s Day. In London, Stationers’ Hall would be crammed to the rafters:

The clock strikes, wide asunder start the gates, and in they come, a whole army of porters, darting hither and thither, and seizing the said bags, in many instances as big as themselves. Before we can well understand what is the matter, men and bags have alike vanished – the hall is clear … they will be dispersed through every city and town, and parish, and hamlet of England; the curate will be glancing over the pages of his little book to see what promotions have taken place in the church, and sigh as he thinks of rectories, and deaneries, and bishoprics; the sailor will be deep in the mysteries of tides and new moons that are learnedly expatiated upon in the pages of his; the believer in the stars will be finding new draughts made upon that Bank of Faith impossible to be broken or made bankrupt — his superstition, as he turns over the pages of his Moore — but we have let out our secret. Yes, they are all almanacks — those bags contained nothing but almanacks.

Two hundred or three hundred years ago you could choose from twenty or more almanacs every year. Unlike most of the modern ones they were slim things, with a couple of dozen pages. There were almanacs for Whigs, almanacs for Tories, almanacs for people who believed in astrology and almanacs for those who didn’t, almanacs for farmers, sailors, merchants.

My own journey into the wonderful world of early modern almanacs began with *Poor Robin’s Almanac*. Robin was a fictional character, invented in the 1660s as a way to lampoon astrologers and their almanacs. He went on to write a long-running spoof almanac, clocking up 164 annual issues. He did prognostication —

If on the second of February, thou go either to Fair or Market with store of money in thy pocket, and there have thy purse picked of it all, then that is an unfortunate day.

and history —

1367 BC: Women first invented kissing

and the year’s calendar —

23 June: Friar Tuck’s Day.

Poor Robin’s intellectual descendants included *Punch* (it copied part of his title page) and *Poor Richard*, pseudonym of Benjamin Franklin and author of *The Way to Wealth*. In his day he was loved and very widely read, but he was killed off in the 1820s by a combination of mismanagement, waning popularity, and attacks from the Society for the Diffusion of Useful Knowledge.

Others were less uproarious, but just as much fun. *The Ladies’ Diary*, or *Woman’s Almanack*, specialized in genteel mathematical puzzles. ‘If I’m a year younger than one-twentieth the square of my age, how old am I?’ ‘If the sun takes four minutes to cross the horizon on New Year’s Day, where am I?’ It attracted questions and answers sent in from all over Britain, and gave prizes for the best ones. It ran for over 130 years.
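As a modern aside (nothing the Diary’s readers would have had to hand), the first of those puzzles can be checked in a few lines of Python: ‘a year younger than one-twentieth the square of my age’ translates to x = x²/20 − 1, a quadratic whose positive root comes out a touch under 21.

```python
import math

# "I'm a year younger than one-twentieth the square of my age":
#   x = x**2 / 20 - 1,  i.e.  x**2 - 20*x - 20 = 0
# Solve with the quadratic formula and keep the positive root.
a, b, c = 1, -20, -20
x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

# The root satisfies the original statement to within rounding error.
assert abs(x - (x * x / 20 - 1)) < 1e-9
print(f"age ≈ {x:.2f}")  # prints "age ≈ 20.95"
```

Not a tidy whole number, which is rather the point: the Diary’s readers worked such roots out by hand, to several decimal places, for the fun of it.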

*Old Moore* provided predictions political, social, and meteorological based on the movements of the heavens.

Let my Muse raise, and tell what News she hears

Amongst the Stars, and Motions of the Spheres.

But it combined them with some remarkable popular science writing, on subjects ranging from astronomy to ancient history, compiled by authors who had one eye on the *Philosophical Transactions* and the other on the public’s taste for sensationalism.

Another scientifically-minded production was the *Nautical Almanac*, started in the 1760s by *Longitude*’s villain Nevil Maskelyne (he was actually rather a pleasant chap). It gave the moon’s position at three-hour intervals for the whole year, and instructions for working out your longitude from an observation. At two shillings and sixpence, plus the price of a sextant, it came in a good bit cheaper than a Harrison chronometer.

At times nearly one Briton in six was buying an almanac: ‘the greatest triumph of journalism until modern times’ according to historian Bernard Capp. Almanac day may be no more, but almanacs have been circulating for nearly as long as calendars, and if the genre has waxed and waned over the years it seems in no danger of extinction. Partly eclipsed in the early nineteenth century by other forms of popular instruction, the almanac blazed forth again from the 1830s, with sales rising to a million a year for the most popular. Today, *Old Moore* is still with us, though somewhat transformed; so is the *Nautical Almanac*. Whitaker and Schott have given almanacs a new lease of life as annual reference books. Their survival seems a safe prediction.

Benjamin Wardhaugh is a historian and fellow of Wolfson College, Oxford. His book, *Poor Robin’s Prophecies: A curious Almanac, and the everyday mathematics of Georgian Britain*, publishes this month.


Subscribe to only mathematics articles on the OUPblog via email or RSS.

Subscribe to only history articles on the OUPblog via email or RSS.


The post Is Almanac Day in your calendar? appeared first on OUPblog.

]]>