The post Making sense of mathematics appeared first on OUPblog.

Initially ‘making sense of mathematics’ means what it says, namely to use our senses to organize the patterns we see and to make sense of the operations we perform in arithmetic. As we grow more sophisticated we use language to become more precise about the properties of geometrical figures and of numbers in arithmetic that lead on to algebra and beyond. Making sense of mathematics builds on our experiences and can take us on into a variety of different contexts in adult life.

Sometimes this means making sense of a particular situation and building a mathematical model by formulating principles that arise from the nature of the situation. At the very highest level, Newton thought very deeply about moving bodies and homed in on simple properties that led to his laws of motion. Einstein imagined a thought experiment sitting on a train moving at nearly the speed of light to produce his theory of special relativity. Stephen Hawking reasoned back in time from the expanding universe to its beginning in a big bang.

For most of us, as our understanding of mathematics grows, we begin with everyday ideas and notice patterns that can be understood in mathematical ways. For instance, in the arithmetic of whole numbers, we might multiply the same number together several times, say 2×2×2, and write it down as ‘two to the power of three’, symbolized as 2^{3}. Then we see that multiplying 2^{3} by 2^{2} is (2×2×2)×(2×2), so that 2^{3+2} = 2^{5}, and we recognize the pattern that may be expressed algebraically as *x*^{m} × *x*^{n} = *x*^{m+n}.

Then we make a leap: what happens if we use this observation in the case where *m*, *n* are negative numbers or fractions? This gives new possibilities such as *x*^{1/2+1/2} = *x*^{1}, suggesting that *x*^{1/2} is the square root of *x*. It leads to a more powerful development in mathematics that is valuable for some but can cause serious problems for those who think of *x*^{n} only as multiplying *x* by itself *n* times.
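The pattern, and its extension to fractional and negative powers, can be checked numerically; here is a minimal Python sketch (the base 2.0 is just illustrative):

```python
import math

# The power pattern x**m * x**n == x**(m + n), checked numerically.
x = 2.0

# Whole-number case: 2**3 * 2**2 == 2**5
assert x**3 * x**2 == x**5

# Fractional case: x**(1/2) * x**(1/2) == x**1, so x**(1/2) is the square root
assert math.isclose(x**0.5 * x**0.5, x)
assert math.isclose(x**0.5, math.sqrt(x))

# Negative case: x**1 * x**-1 == x**0 == 1, so x**-1 is the reciprocal
assert math.isclose(x**1 * x**-1, 1.0)
```

The same checks work for any positive base, which is exactly the "leap" the text describes: the algebraic pattern dictates the meaning of the new exponents.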

Such changes in meaning happen more often than we may realize. For example, in dealing with simple arithmetic, taking something away gives a smaller answer. But when negative numbers are introduced, taking away a negative number gives a bigger result. Squaring a non-zero number gives a positive result, but introducing complex numbers gives i^{2} = –1.

As mathematics becomes more sophisticated, natural mathematics, based on our human experience and imagination needs to take account of new ways of thinking. A new kind of formal mathematics evolves that is encountered by students in pure mathematics at university. Mathematics in a particular context is presented in terms of assumed properties (axioms) from which other properties are deduced (as theorems). Many such theories have already been invented: group theory, vector spaces, mathematical analysis, algebraic number theory and so on. Undergraduate pure mathematics introduces these theories and involves the student in making sense of the deductive practices to build a coherent theory in each topic. This has the advantage that properties proved as theorems now remain true not only in familiar situations but also in any new situation where the axioms are satisfied.

Making sense of formal mathematics is not just a one-way process that starts with axioms and proves theorems that can be used in applications. It also works in the reverse direction. Special theorems (called structure theorems) may be proved to show that a given axiomatic system has properties that can be sensed visually through drawing pictures and operationally using operations formulated in the axioms. This links formal theories back to natural ways of using our human senses and operations, now operating at a more sophisticated level supported by the formal theory.

Making sense of mathematics in applications — in physics, engineering, economics, business studies, weather prediction, and so on — involves translating the particular characteristics of the context and formulating mathematical models to solve problems and to construct more sophisticated theories with new applications.

Currently we are experiencing an amazing explosion of technology that grows in sophistication at an enormous pace. Pure mathematics builds ever broadening formal theories, requiring more subtle theoretical foundations to support ever-widening branches of theory and practice. As the tree of mathematical knowledge grows greater superstructure, it also needs to strengthen its foundational roots.

Sense making in mathematics as a whole is therefore not a static state of understanding. Each of us needs to find our own way of progressing in mathematics for our own purposes. Sometimes this may involve learning what to do to cope with a given situation; in the longer term, however, it is more profitable to make an effort to make sense of mathematics in successive new contexts. This may involve sufficient insight to operate in a given social environment, to deal with a particular topic in a technical context, or to pursue a formal interest in pure mathematics. Mathematics as a whole builds from our human perception and operation, becoming more sophisticated through the development of language and formal theories that evolve in both theory and practice.


The post The Erdős number appeared first on OUPblog.

The mathematical equivalent of the Bacon number is the “Erdős number”. Paul Erdős (1913–1996) was the most prolific mathematician of recent times, with more than 1,500 papers written with more than 500 co-authors. The Erdős number describes how close you are to Paul Erdős in terms of mathematical publications. So, for example, Robin Wilson has an Erdős number of 1 because he co-authored a paper with Erdős, whereas John Watkins has an Erdős number of 2 because he co-authored a paper with Robin Wilson (incidentally, he also co-authored a paper with Peter Cameron, who likewise has an Erdős number of 1). Even the physicist Albert Einstein had an Erdős number of 2, though this is hardly his greatest claim to fame.

Having a low Erdős number is a matter of great pride for mathematicians. The highest known finite value is 7, although there are also mathematicians who have no chain of co-authorship connecting them to Erdős, and whose Erdős number is therefore defined as “infinity”. Erdős was so open to working with other mathematicians that it will forever be a deep regret for those of us whose Erdős number is greater than 1 that we never collaborated on a paper with him. Even the great Gian-Carlo Rota shared this regret, and recalled an evening when he mentioned to Erdős a problem he was working on and Paul provided a hint that eventually led to a complete solution. While Erdős was appropriately thanked in the paper’s introduction, Rota always regretted that he did not include Erdős as a co-author.
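An Erdős number is simply a shortest-path distance in the co-authorship graph, so it can be computed with a breadth-first search. A sketch on a toy graph (the edges below are illustrative, not real publication data):

```python
from collections import deque

# Toy co-authorship graph: an edge means two people share a joint paper.
coauthors = {
    "Erdos": ["Wilson", "Cameron"],
    "Wilson": ["Erdos", "Watkins"],
    "Cameron": ["Erdos", "Watkins"],
    "Watkins": ["Wilson", "Cameron"],
    "Loner": [],  # no collaboration path: Erdos number "infinity"
}

def erdos_number(graph, person, root="Erdos"):
    """Shortest collaboration distance from `root`, or None if unreachable."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        current = queue.popleft()
        for neighbour in graph[current]:
            if neighbour not in dist:
                dist[neighbour] = dist[current] + 1
                queue.append(neighbour)
    return dist.get(person)

print(erdos_number(coauthors, "Watkins"))  # 2
print(erdos_number(coauthors, "Loner"))    # None
```

In the toy graph, Watkins reaches Erdős through either Wilson or Cameron in two steps, matching the example in the text, while "Loner" has no path at all.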

Erdős was indeed a genuine mathematical prodigy, and at the age of 19 gave a new and gorgeously simple proof of a well-known theorem about numbers: between any whole number *n* > 1 and its double 2*n* there is always a prime number. This was his very first mathematical paper.
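The theorem is easy to check empirically for small numbers; a brute-force Python sketch:

```python
# Empirical check: for every n > 1 there is a prime p with n < p < 2n.
def is_prime(k):
    if k < 2:
        return False
    # trial division up to the square root
    return all(k % d for d in range(2, int(k**0.5) + 1))

for n in range(2, 1000):
    assert any(is_prime(p) for p in range(n + 1, 2 * n)), n
```

This is only a check, of course; the point of Erdős's paper was a simple *proof* that the property holds for all n.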

His supposed obsession with mathematics to the exclusion of anything else in life is now legendary. With no real home base, he traveled the world, staying with friends, visiting math departments, and attending mathematical conferences. He always looked the same, in a suit and a white shirt with an open collar. But, inevitably, he could always be found seated on a couch talking to someone about a mathematical problem.

At conferences he almost always gave a version of a one-hour talk he called “open problems” in which, without notes, he would discuss in great detail the current open mathematical problems he was interested in. For many of these problems he would offer monetary rewards for solutions, $100 for a fairly routine problem or perhaps $1,000 or more for a problem he considered especially difficult or important. He knew he would never be able to pay for solutions to *all* of these problems if they were actually solved, but he also knew that most of them would remain unsolved during his lifetime.

There are countless anecdotes that capture the spirit of Paul Erdős. He could be whimsical: at one conference he announced that his age, 81, was a perfect square, most likely for the last time. Another time he visited a friend at Santa Clara University in California and upon arrival asked his host, “What was the temperature in this valley during the Ice Age?” But the best stories are from his closest mathematical friends, with whom he stayed throughout the years. Many of these have a central theme: at some point in the early morning, about 2 or 3 am, Paul would wander into their bedroom and, with no preamble whatsoever, say something like “about the problem we were discussing last night, what if …”.

The famous neurologist and writer Oliver Sacks said of Paul Erdős: “a mathematical genius of the first order, Paul Erdős was totally obsessed with his subject — he thought and wrote mathematics for nineteen hours a day until the day he died.”

*Featured image credit: At the math grad house by kimmanleyort. CC by ND 2.0 via Flickr.*


The post Putting two and two together appeared first on OUPblog.

Let’s start with an easy one. It doesn’t take a mathematical whiz to know that 2 + 2 = 4, and that’s indeed the heart of this expression. To *put two and two together* is used to mean ‘draw an obvious conclusion from what is known or evident’. Conversely, if you say that somebody might *put two and two together and make five*, you’re suggesting that they are attempting to draw a plausible conclusion from what is known and evident, but that their conclusion is ultimately incorrect. *2 + 2 = 5* was famously used in George Orwell’s *Nineteen Eighty-Four* as an example of a dogma that seems obviously false, but which the totalitarian Party of the novel may require the population to believe: ‘In the end the Party would announce that two and two made five, and you would have to believe it.’

I remember a moment of surprise in the middle of one of my mathematics A-Level classes. It was a nice change from the almost unbroken moments of bewilderment that characterized the experience. We were looking at the equations for what happens when something rotating about an axis is suddenly set free. Well, guess what? It goes off at a tangent.

So, what is a tangent? It’s a straight line that touches a curve at a point, but (when extended) does not cross it at that point. (It’s also apparently ‘the trigonometric function that is equal to the ratio of the sides [other than the hypotenuse] opposite and adjacent to an angle in a right-angled triangle’, but the less said about that the better.)

In common parlance, of course, it simply means ‘a completely different line of thought or action’. While we’re mentioning the *hypotenuse*, you may well recall that it is ‘the longest side of a right-angled triangle, opposite the right angle’, but may not know the word’s origin: it ultimately comes from the Greek verb *hupoteinein*, from *hupo* ‘under’ + *teinein* ‘stretch’.

In a fit of pique, you might have described a person or a group as *the lowest common denominator*. It is often said in a derogatory way to mean ‘the level of the least discriminating audience’; for example, ‘they were accused of pandering to the lowest common denominator of public taste’. But what actually *is* a denominator?

Cast your mind back to the world of fractions–specifically vulgar fractions, or those that are expressed by one number over another, rather than decimally. The number above the line is the *numerator* and the number below the line is the *denominator*. In ½, for instance, the numerator is 1 and the denominator is 2. In mathematics, the *lowest common denominator* is ‘the lowest common multiple of the denominators of several vulgar fractions’. For instance, the lowest common denominator of 2/5 and 1/3 is 15, as that is the lowest common multiple of the denominators 5 and 3; the fractions would become 6/15 and 5/15 respectively. It isn’t entirely clear how this sense transferred to the broader, non-mathematical sense.
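The arithmetic above can be reproduced with Python's standard library (`math.lcm` requires Python 3.9+):

```python
from math import lcm
from fractions import Fraction

# Lowest common denominator of 2/5 and 1/3 is the lcm of the denominators.
assert lcm(5, 3) == 15

# Rewriting both fractions over the common denominator leaves them equal:
assert Fraction(2, 5) == Fraction(6, 15)
assert Fraction(1, 3) == Fraction(5, 15)

# With a common denominator the fractions can be added directly:
print(Fraction(2, 5) + Fraction(1, 3))  # 11/15
```

`Fraction` keeps everything exact, so 6/15 + 5/15 comes out as 11/15 rather than a rounded decimal.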

*Image Credit: “Math Castle” by Gabriel Molina. CC by 2.0 via Flickr.*

A version of this blog post first appeared on the OxfordWords blog.


The post How do we protect ourselves from cybercrime? appeared first on OUPblog.

We seem to see new reports of hacking every week, ranging from the social media profiles of Taylor Swift and Centcom, to the email accounts of Sony, to at-home security gadgets such as baby monitors. Certain types of hacking, such as using a public hotspot to access information on someone else’s computer, are mere child’s play, as demonstrated by this 7-year-old. Such public hotspots are used in hundreds of thousands of restaurants, hotels, and other locations throughout the UK. So how do we – companies, institutions, and individuals – protect ourselves from cybercrimes?

Computer-based systems that store and process confidential, sensitive, and private information are vulnerable to attacks exploiting weaknesses at the technical, social, and policy level. Attacks may seek to compromise the confidentiality, integrity, or availability of the information, as well as violate the privacy of the information’s owners and stakeholders.

One reason why achieving cybersecurity is so hard in practice is that systems are often designed in isolation but operate as parts of a broader ecosystem. In such an environment, which delivers complex sets of services, defenders may be less interested in the security of a particular system and more interested in the overall sustainability and resilience of the ecosystem. Systems across sectors – financial, transport, retail, health, communications, etc. – are massively interconnected. Vulnerabilities in the systems of one sector – which may be exploited by criminals, terrorists, or nation-states – may lead to critical failures in others.

The extent of the threat to the information ecosystems upon which modern societies depend, and the scale of the required response, is increasingly being recognised by major governments, with substantial research and development funds being made available. Moreover, the solutions to cybersecurity problems also span the technical and policy layers.

Understanding how these ecosystems operate requires an interdisciplinary approach: computer scientists to design the software and networks; cryptographers to protect the confidentiality of communications; economists to explain how the competing incentives of stakeholders might play out; anthropologists to explain cultural contexts and how they impact solutions; psychologists to explain how decisions are made and the impact on system design; legal and policy scholars to set out regulatory constraints; criminologists and crime scientists to explain the motivation of perpetrators; and experts in strategy to frame the international context. Consequently, cybersecurity research cannot remain siloed. Instead, rigorous, interdisciplinary scholarship that incorporates multiple perspectives is required.

Future successes in cybersecurity policy and practice will depend on dialogue, knowledge transfer, and collaboration.

*Image credit: Security. Public Domain via Pixabay. *



The post That’s relativity appeared first on OUPblog.

Still with me? Excellent.

Some of you may know that Sir Roger developed much of modern black hole theory with his collaborator, Stephen Hawking, and at the heart of *Interstellar* lies a very unusual black hole. Straightaway, I asked Sir Roger if he’d seen the film. What’s unusual about Gargantua, the black hole in *Interstellar*, is that it’s scientifically accurate, computer-modeled using Einstein’s field equations from General Relativity.

Scientists reckon they spend far too much time applying for funding and, as a consequence, far too little thinking about their research. And, generally, scientific budgets are dwarfed by those of Hollywood movies. To give you an idea, Alfonso Cuarón told me he briefly considered filming *Gravity* in space, and that was what’s officially classed as an “independent” movie. For the big-budget studio blockbuster *Interstellar*, Kip Thorne, scientific advisor to Nolan and Caltech’s “Feynman Professor of Theoretical Physics”, seized his opportunity, making use of Nolan’s millions to see what a real black hole actually looks like. He wasn’t disappointed, and neither was the director, who decided to use the real thing in his movie without tweaks.

Black holes are so called because their gravitational fields are so strong that not even light can escape them. Originally, we thought these would be dark areas of the sky, blacker than space itself, meaning future starship captains might fall into them unawares. Nowadays we know the opposite is true – gravitational forces acting on the material spiralling into the black hole heat it to such high temperatures that it shines super-bright, forming a glowing “accretion disk”.

The computer program the visual effects team created revealed a curious rainbowed halo surrounding Gargantua’s accretion disk. At first they and Thorne presumed it was a glitch, but careful analysis revealed it was behavior buried in Einstein’s equations all along – the result of gravitational lensing. The movie had discovered a new scientific phenomenon and at least two academic papers will result: one aimed at the computer graphics community and the other for astrophysicists.

I knew Sir Roger would want to see the movie because there’s a long scene where you, the viewer, fly over the accretion disk–not something made up to look good for the IMAX audience (you *have* to see this in full IMAX) but our very best prediction of what a real black hole should look like. I was blown away.

Some parts of the movie are a little cringeworthy, not least the oft-repeated line, “that’s relativity”. But there’s a reason for the characters spelling this out. As well as accurately modeling the black hole, the plot requires relativistic “time dilation”. Even though every physicist has known how to travel in time for over a century (go very fast or enter a very strong gravitational field), the general public don’t seem to have cottoned on.
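The "go very fast" version of time travel comes straight out of special relativity's time-dilation factor, which a few lines of Python can compute (the speeds below are illustrative):

```python
import math

def lorentz_gamma(v, c=299_792_458.0):
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# At everyday speeds (a fast train, 300 m/s) the effect is utterly negligible:
print(lorentz_gamma(300.0))  # barely above 1

# At 99% of light speed, the traveller's clock runs about 7x slower:
print(lorentz_gamma(0.99 * 299_792_458.0))  # about 7.09
```

So a traveller at 0.99c who experiences one year returns to find roughly seven years have passed at home, which is the kind of asymmetry the film's plot turns on.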

Most people don’t understand relativity, but they’re not alone. As a science editor, I’m privileged to meet many of the world’s most brilliant people. Early in my publishing career I was befriended by Subrahmanyan Chandrasekhar, after whom the Chandra space telescope is now named. Penrose and Hawking built on Chandra’s groundbreaking work, for which he received the Nobel Prize; his *The Mathematical Theory of Black Holes* (1983) is still in print and going strong.

When visiting Oxford from Chicago in the 1990s, Chandra and his wife Lalitha would come to my apartment for tea and we’d talk physics and cosmology. In one of my favorite memories he leant across the table and said, “Keith – Einstein never actually understood relativity”. Quite a bold statement and remarkably, one that Chandra’s own brilliance could end up rebutting.

Space is big – mind-bogglingly so once you start to think about it – but we only know how big because of Chandra. When a giant sun ends its life, it goes supernova – an explosion so bright it outshines all the billions of stars in its home galaxy combined. Chandra deduced that certain supernovae (called “Type Ia”) blaze with near-identical brightness. Comparing the actual brightness with how bright it appears through our telescopes tells us how far away it is. Measuring distances is one of the hardest things in astronomy, but Chandra gave us an ingenious yardstick for the Universe.
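The yardstick rests on the inverse-square law: a standard candle's apparent brightness (flux) falls with the square of its distance, so a known luminosity plus a measured flux gives the distance. A sketch in Python with purely illustrative numbers:

```python
import math

def distance(luminosity, flux):
    """Distance to a standard candle, from flux = luminosity / (4*pi*d**2)."""
    return math.sqrt(luminosity / (4 * math.pi * flux))

# Illustrative units: a candle of known output...
L = 1.0
# ...seen at flux L/(4*pi) sits at distance 1,
assert math.isclose(distance(L, L / (4 * math.pi)), 1.0)
# ...and seen 4x fainter it must be 2x farther away.
assert math.isclose(distance(L, L / (16 * math.pi)), 2.0)
```

Real astronomical practice involves calibrations and corrections well beyond this sketch, but the inverse-square relationship is the core of why identical-brightness supernovae make such a good ruler.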

In 1998, astrophysicists were observing Type Ia supernovae that were a *very* long way away. Everyone’s heard of the Big Bang, the moment of creation of the Universe; even today, more than 13 billion years later, galaxies continue to rush apart from each other. The purpose of this experiment was to determine how much this rate of expansion was slowing down, due to gravity pulling the Universe back together. It turns out that the expansion’s speeding up. The results stunned the scientific world, led to Nobel Prizes, and gave us an anti-gravitational “force” christened “dark energy”. It also proved Einstein right (sort of) and, perhaps for the only time in his life, Chandra wrong.

Chandra told me Einstein was wrong because of something Einstein himself called his “greatest mistake”. When relativity was first conceived, Edwin Hubble (after whom another space telescope is named) had not yet discovered that space itself was expanding. Seeing that a static solution of his equations would inevitably mean the collapse of everything in the Universe into some “big crunch”, Einstein devised the “cosmological constant” to prevent this from happening – an anti-gravitational force to maintain the presumed status quo.

Once Hubble released his findings, Einstein felt he’d made a dreadful error, as did most astrophysicists. However, the discovery of dark energy has changed all that and Einstein’s greatest mistake could yet prove an accidental triumph.

Of course Chandra knew Einstein understood relativity better than almost anyone on the planet, but it frustrates me that many people have such little grasp of this most beautiful and brilliant temple of science. Well done Christopher Nolan for trying to put that right.

*Interstellar* is an ambitious movie – I’d call it “Nolan’s *2001*” – and it educates as well as entertains. While Matthew McConaughey barely ages in the movie, his young daughter lives to a ripe old age, all based on what we know to be true. Some reviewers have criticized the ending – something I thought I wouldn’t spoil for Sir Roger. Can you get useful information back out of a black hole? Hawking has changed his mind, now believing such a thing is possible, whereas Penrose remains convinced it cannot be done.

We don’t have all the answers, but whichever one of these giants of the field is right, Nolan has produced a thought-provoking and visually spectacular film.

*Image Credit: “Best-Ever Snapshot of a Black Hole’s Jets.” Photo by NASA Goddard Space Flight Center. CC by 2.0 via Flickr.*


The post Five tips for women and girls pursuing STEM careers appeared first on OUPblog.

**(1) Be open to discussing your research with interested people.**

From in-depth discussions at conferences in your field to a quick catch up with a passing colleague, it can be endlessly beneficial to bounce your ideas off a range of people. New insights can help you to better understand your own ideas.

**(2) Explore research problems outside of your own. **

Looking at problems from multiple viewpoints can add huge value to your original work. Explore peripheral work, look into the work of your colleagues, and read about the achievements of people whose work has influenced your own. New information has never been so discoverable and accessible as it is today. So, go forth and hunt!

**(3) Collaborate with people from different backgrounds.**

The chance of two people having read exactly the same works in their lifetime is negligible, so teaming up with others is guaranteed to bring you new ideas and perspectives you might never have found alone.

**(4) Make sure your research is fun and fulfilling.**

As with any line of work, if it stops being enjoyable, your performance can be at risk. Even highly self-motivated people have off days, so look for new ways to motivate yourself and drive your work forward. Sometimes this means taking some time to investigate a new perspective or angle from which to look at what you are doing. Sometimes this means allowing yourself time and distance from your work, so you can return with a fresh eye and a fresh mind!

**(5) Surround yourself with friends who understand your passion for scientific research.**

The life of a researcher can be lonely, particularly if you are working in a niche or emerging field. Choose your company wisely, ensuring your valuable time is spent with friends and family who support and respect your work.

*Image Credit: “Board” by blickpixel. Public domain via Pixabay. *


The post Celebrating Women in STEM appeared first on OUPblog.

From astronomer Caroline Herschel to the first female winner of the Fields Medal, Maryam Mirzakhani, you can use our interactive timeline to learn more about both the famous and the forgotten women whose work in STEM fields has changed our world.

*Featured image credit: Microscope. Public Domain via Pixabay.*


The post Why causality now? appeared first on OUPblog.

Causality has been a headache for scholars since ancient times. The oldest extensive writings on the subject may have been Aristotle’s; he made causality a central part of his worldview. Then we jump 2,000 years until causality again became a prominent topic with Hume, who was a skeptic in the sense that he believed we cannot think of causal relationships as logically necessary, nor can we establish them with certainty.

The next major philosophical figure after Hume was probably David Lewis, who proposed quite a controversial account saying, roughly, that something was a cause of an effect in this world if, *in other nearby possible worlds* where that cause didn’t happen, the effect didn’t happen either. Currently, we come to work in computer science originated by Judea Pearl and by Spirtes, Glymour, and Scheines, and their collaborators.

All of this is highly theoretical and formal. Can we reconstruct philosophical theorizing about causality in the sciences in simpler terms than this? Sure we can!

One way is to start from scientific practice. Even though scientists often don’t talk explicitly about causality, it *is* there. Causality is an integral part of the scientific enterprise. Scientists don’t worry too much about what causality is – a chiefly metaphysical question – but are instead concerned with a number of activities that, one way or another, bear on causal notions. These are what we call the five scientific problems of causality:

- Inference: Does C cause E? To what extent?
- Explanation: How does C cause or prevent E?
- Prediction: What can we expect if C does (or does not) occur?
- Control: What factors should we hold fixed to understand better the relation between C and E? More generally, how do we control the world or an experimental setting?
- Reasoning: What considerations enter into establishing whether/how/to what extent C causes E?

This does not mean that metaphysical questions cease to be interesting. Quite the contrary! But by engaging with scientific practice, we can work towards a *timely* and solid philosophy of causality.

The traditional philosophical treatment of causality is to give a single conceptualization, an account of the concept of causality, which may also tell us what causality in the world is, and may then help us understand causal methods and scientific questions.

Our aim, instead, is to focus on the scientific questions, bearing in mind that there are five of them, and build a more pluralist view of causality, enriched by attention to the diversity of scientific practices. We think that many existing approaches to causality, such as mechanism, manipulationism, inferentialism, capacities and processes can be used together, as tiles in a causal mosaic that can be created to help you assess, develop, and criticize a scientific endeavour.

In this spirit we are attempting to develop, in collaboration, complementary ideas of causality as information (Illari) and variation (Russo). The idea is that we can conceptualize in general terms the causal linking or production of effect by the cause as the transmission of information between cause and effect (following Salmon); while variation is the most general conceptualization of the patterns of difference-making we can detect in populations where a cause is acting (following Mill). The thought is that we can use these complementary ideas to address the scientific problems.

For example, we can think about how we use complementary evidence in causal inference, tracking information transmission, and combining that with studies of variation in populations. Alternatively, we can think about how measuring variation may help us formulate policy decisions, as might seeking to block possible avenues of information transmission. Having both concepts available assists in describing this, and reasoning well – and they will also be combined with other concepts that have been made more precise in the philosophical literature, such as capacities and mechanisms.

Ultimately, the hope is that sharpening up the reasoning will assist in the conceptual enterprise that lies at the intersection of philosophy and science. And help decide whether to encourage sport, mobile phones, homeopathy and solar panels aboard the mission to Mars!


The post Accusation breeds guilt appeared first on OUPblog.

The guilty party – let’s call her Annette – can try to convince us of her trustworthiness by only saying things that are true, insofar as such truthfulness doesn’t incriminate her (the old adage of making one’s lies as close to the truth as possible applies here). But this is not the only strategy available. In addition, Annette can attempt to deflect suspicion away from herself by questioning the trustworthiness of others – in short, she can say something like:

“I’m not a liar, Betty is!”

However, accusations of untrustworthiness of this sort are peculiar. The point of Annette’s pronouncement is to affirm her innocence, but such protestations rarely increase our overall level of trust. Either we don’t believe Annette, in which case our trust in Annette is likely to drop (without affecting how much we trust Betty), or we do believe Annette, in which case our trust in Betty is likely to decrease (without necessarily increasing our overall trust in Annette).

Thus, accusations of untrustworthiness tend to decrease the overall level of trust we place in those involved. But is this reflective of an actual increase in the number of lies told? In other words, does the logic of such accusations make it the case that, the higher the number of accusations, the higher the number of characters that *must* be lying?

Consider a group of people *G*, and imagine that, simultaneously, each person in the group accuses one, some, or all of the other people in the group of lying right at this minute. For example, if our group consists of three people:

*G* = {Annette, Betty, Charlotte}

then Betty can make one of three distinct accusations:

“Annette is lying.”

“Charlotte is lying.”

“Both Annette and Charlotte are lying.”

Likewise, Annette and Charlotte each have three choices regarding their accusations. We can then ask which members of the group could be, or which must be, telling the truth, and which could be, or which must be, lying by examining the logical relations between the accusations made by each member of the group. For example, if Annette accuses both Betty and Charlotte of lying, then either (i) Annette is telling the truth, in which case both Betty and Charlotte’s accusations must be false, or (ii) Annette is lying, in which case either Betty is telling the truth or Charlotte is telling the truth (or both).

This set-up allows for cases that are paradoxical. If:

Annette says “Betty is lying.”

Betty says “Charlotte is lying.”

Charlotte says “Annette is lying.”

then there is no coherent way to assign the labels “liar” and “truth-teller” to the three in such a way as to make sense. Since we are here interested in investigating results regarding how many lies are told (rather than scenarios in which the notion of lying versus telling the truth breaks down), we shall restrict our attention to those groups, and their accusations, that are not paradoxical.

The following are two simple results that constrain the number of liars, and the number of truth-tellers, in any such group (I’ll provide proofs of these results in the comments after a few days).


Result 1: If, for some number *m*, each person in the group accuses at least *m* other people in the group of lying (and there is no paradox) then there are at least *m* liars in the group.

Result 2: If, for any two people in the group *p*_{1} and *p*_{2}, either *p*_{1} accuses *p*_{2} of lying, or *p*_{2} accuses *p*_{1} of lying (and there is no paradox), then exactly one person in the group is telling the truth, and everyone else is lying.

These results support an affirmative answer to our question: Given a group of people, the more accusations of untrustworthiness (i.e., of lying) are made, the higher the minimum number of people in the group that must be lying. If there are enough accusations to guarantee that each person accuses at least *n* people, then there are at least *n* liars, and if there are enough to guarantee that there is an accusation between each pair of people, then all but one person is lying. (Exercise for the reader: show that there is no situation of this sort where everyone is lying).
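These consistency arguments are easy to mechanize. Below is a brute-force sketch (my own illustration; the code is not part of the original discussion): each person’s single statement is taken to be “everyone I accuse is lying”, and a labelling is coherent when each statement is true exactly when its speaker is labelled a truth-teller. It reproduces the paradoxical cycle above and checks Result 2 for three mutual accusers.

```python
from itertools import product

def consistent_assignments(accusations):
    """All coherent labellings for a set of simultaneous accusations.

    accusations maps each person to the set of people they accuse of
    lying; a labelling (True = truth-teller) is coherent when every
    person's statement is true exactly when they are a truth-teller.
    """
    people = list(accusations)
    solutions = []
    for values in product([True, False], repeat=len(people)):
        labels = dict(zip(people, values))
        if all(labels[p] == all(not labels[q] for q in accusations[p])
               for p in people):
            solutions.append(labels)
    return solutions

# The paradoxical cycle: Annette -> Betty -> Charlotte -> Annette.
cycle = {"Annette": {"Betty"}, "Betty": {"Charlotte"}, "Charlotte": {"Annette"}}
print(consistent_assignments(cycle))  # [] -- no coherent labelling exists

# An accusation between every pair: Result 2 predicts exactly one truth-teller.
complete = {"Annette": {"Betty", "Charlotte"},
            "Betty": {"Annette", "Charlotte"},
            "Charlotte": {"Annette", "Betty"}}
for labels in consistent_assignments(complete):
    print(sum(labels.values()))  # prints 1 for every coherent labelling
```

Running the same checker over larger groups is also a quick way to experiment with Result 1, by generating accusation sets in which everyone accuses at least *m* others.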

Of course, the set-up just examined is extremely simple, and rather artificial. Conversations (or mystery novels, or court cases, etc.) in real life develop over time, involve all sorts of claims other than accusations, and can involve accusations of many different forms not included above, including:

“Everything Annette says is a lie!”

“Betty said something false yesterday!”

“What Charlotte is about to say is a lie!”

Nevertheless, with a bit more work (which I won’t do here) we can show that, the more accusations of untrustworthiness are made in a particular situation, the more of the claims made in that situation must be lies (of course, the details will depend both on the number of accusations and the kind of accusations). Thus, it’s as the title says: accusation breeds guilt!

**Note:** The inspiration for this blog post, as well as the phrase “Accusation breeds guilt”, comes from a brief discussion of this phenomenon – in particular, of ‘Result 2’ above – in ‘Propositional Discourse Logic’, by S. Dyrkolbotn & M. Walicki, Synthese 191: 863–899.

The post Accusation breeds guilt appeared first on OUPblog.

The post A very short trivia quiz appeared first on OUPblog.

We hope you enjoyed testing your trivia knowledge in this very short quiz.

*Headline image credit: Pondering Away. © GlobalStock via iStock Photo.*


The post Celebrating Alan Turing appeared first on OUPblog.

We live in an age that Turing both predicted and defined. His life and achievements are starting to be celebrated in popular culture, largely with the help of the newly released film *The Imitation Game*, starring Benedict Cumberbatch as Turing and Keira Knightley as Joan Clarke. We’re proud to publish some of Turing’s own work in mathematics, computing, and artificial intelligence, as well as numerous explorations of his life and work. Use our interactive Enigma Machine below to learn more about Turing’s extraordinary achievements.

*Image credits: (1) Bletchley Park Bombe by Antoine Taveneaux. CC-BY-SA-3.0 via Wikimedia Commons. (2) Alan Turing Aged 16, Unknown Artist. Public domain via Wikimedia Commons. (3) Good question by Garrett Coakley. CC-BY-SA 2.0 via Flickr.*


The post What do rumors, diseases, and memes have in common? appeared first on OUPblog.

Diseases, rumors, memes, and other information all spread over networks. A lot of research has explored the effects of network structure on such spreading. Unfortunately, most of this research has a major issue: it considers networks that are not realistic enough, and this can lead to incorrect predictions of transmission speeds, of which people are most important in a network, and so on. So how does one address this problem?

Traditionally, most studies of propagation on networks assume a very simple network structure that is static and only includes one type of connection between people. By contrast, real networks change in time — one contacts different people during weekdays and on weekends, one (hopefully) stays home when one is sick, new university students arrive from all parts of the world every autumn to settle into new cities. They also include multiple types of social ties (Facebook, Twitter, and – gasp – even face-to-face friendships), multiple modes of transportation, and so on. That is, we consume and communicate information through all sorts of channels. To consider a network with only one type of social tie ignores these facts and can potentially lead to incorrect predictions of which memes go viral and how fast information spreads. It also fails to distinguish people who are important in one medium from people who are important in a different medium (or across multiple media). In fact, most real networks include a far richer “multilayer” structure. Collapsing such structures to obtain and then study a simpler network representation can yield incorrect answers for how fast diseases or ideas spread, the robustness level of infrastructures, how long it takes for interacting oscillators to synchronize, and more.
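As a toy illustration of that last point (hypothetical names and data, not from the article): in the two-layer sketch below one person is the hub of the online layer and another is the hub of the face-to-face layer, yet after collapsing the layers the two become indistinguishable.

```python
# Two layers of ties over the same four people (toy, illustrative data).
layers = {
    "online":       [{"ann", "bob"}, {"ann", "cat"}, {"ann", "dan"}],
    "face_to_face": [{"bob", "ann"}, {"bob", "cat"}, {"bob", "dan"}],
}

def degrees(edges):
    """Number of ties touching each person."""
    d = {}
    for edge in edges:
        for person in edge:
            d[person] = d.get(person, 0) + 1
    return d

# Collapsing: keep each distinct pair once, forgetting the layer it came from.
collapsed = {frozenset(e) for layer_edges in layers.values() for e in layer_edges}

print(degrees(layers["online"]))        # ann is the hub online
print(degrees(layers["face_to_face"]))  # bob is the hub in person
print(degrees(collapsed))               # collapsed, ann and bob look identical
```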

Recently, an increasingly large number of researchers are studying mathematical objects called “multilayer networks”. These generalize ordinary networks and allow one to incorporate time-dependence, multiple modes of connection, and other complexities. Work on multilayer networks dates back many decades in fields like sociology and engineering, and of course it is well-known that networks don’t exist in isolation but rather are coupled to other networks. The last few years have seen a rapid explosion of new theoretical tools to study multilayer networks.

And what types of things do researchers need to figure out? For one thing, it is known that multilayer structures induce correlations that are invisible if one collapses multilayer networks into simpler representations, so it is essential to figure out when and by how much such correlations increase or decrease the propagation of diseases and information, how they change the ability of oscillators to synchronize, and so on. From the standpoint of theory, it is necessary to develop better methods to measure multilayer structures, as the large majority of the tools that have been used thus far to study multilayer networks are just more complicated versions of existing diagnostics and models. We need to do better. It is also necessary to systematically examine the effects of multilayer structures, such as correlations between different layers (e.g., perhaps a person who is important for the social network that is encapsulated in one layer also tends to be important in other layers?), on different types of dynamical processes. In these efforts, it is crucial to consider not only simplistic (“toy”) models — as in most of the work on multilayer networks thus far — but to move the field towards the examination of ever more realistic and diverse models and to estimate the parameters of these models from empirical data. As our review article illustrates, multilayer networks are both exciting and important to study, but the increasingly large community that is studying them still has a long way to go. We hope that our article will help steer these efforts, which promise to be very fruitful.


The post The deconstruction of paradoxes in epidemiology appeared first on OUPblog.

I think a methodological “revolution” is probably going on in the science of epidemiology, but I’m not totally sure. Of course, in science not being sure is part of our normal state. And we mostly like it. Many times I have had the feeling that a revolution was ongoing in epidemiology — while reading scientific articles, for example. And I saw signs of it, which I think are clear, when reading the latest draft of the forthcoming book *Causal Inference* by M.A. Hernán and J.M. Robins from Harvard (Chapman & Hall / CRC, 2015). I think the “revolution” — or should we just call it a “renewal”? — is deeply changing how epidemiological and clinical research is conceived, how causal inferences are made, and how we assess the validity and relevance of epidemiological findings. I suspect it may be having an immense impact on the production of scientific evidence in the health, life, and social sciences. If this were so, then the impact would also be large on most policies, programs, services, and products in which such evidence is used. And it would be affecting thousands of institutions, organizations, and companies, and millions of people.

One example: at present, in clinical and epidemiological research, every week “paradoxes” are being deconstructed. Apparent paradoxes that have long been observed, and whose causal interpretation was at best dubious, are now shown to have little or no causal significance. For example, while obesity is a well-established risk factor for type 2 diabetes (T2D), among people who already developed T2D the obese fare better than T2D individuals with normal weight. Obese diabetics appear to survive longer and to have a milder clinical course than non-obese diabetics. But it is now being shown that the observation lacks causal significance. (Yes, indeed, an observation may be real and yet lack causal meaning.) The demonstration comes from physicians, epidemiologists, and mathematicians like Robins, Hernán, and colleagues as diverse as S. Greenland, J. Pearl, A. Wilcox, C. Weinberg, S. Hernández-Díaz, N. Pearce, C. Poole, T. Lash, J. Ioannidis, P. Rosenbaum, D. Lawlor, J. Vandenbroucke, G. Davey Smith, T. VanderWeele, or E. Tchetgen, among others. They are building methodological knowledge upon knowledge and methods generated by graph theory, computer science, or artificial intelligence. Perhaps one way to explain the main reason to argue that observations such as the “obesity paradox” just mentioned lack causal significance is that “conditioning on a collider” (in our example, focusing only on individuals who developed T2D) creates a spurious association between obesity and survival.
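A small simulation makes the collider mechanism concrete. This is purely my illustrative sketch, not the authors’ analysis, and every probability in it is invented: obesity and an independent second cause *U* each raise the risk of T2D, only *U* raises mortality, and yet among diabetics the obese appear to survive better.

```python
import random

random.seed(1)

# Toy structural model: obesity -> T2D <- U -> death (all probabilities invented).
n = 100_000
died = {True: 0, False: 0}    # deaths among diabetics, keyed by obesity
total = {True: 0, False: 0}
for _ in range(n):
    obese = random.random() < 0.3
    u = random.random() < 0.2                  # severe illness, independent of obesity
    t2d = random.random() < 0.05 + 0.3 * obese + 0.4 * u
    dead = random.random() < 0.05 + 0.5 * u    # obesity plays no causal role in death
    if t2d:                                    # conditioning on the collider
        died[obese] += dead
        total[obese] += 1

rates = {k: died[k] / total[k] for k in (True, False)}
print(rates)  # obese diabetics show the LOWER mortality, purely spuriously
```

The spurious protection appears only because of the `if t2d:` filter: in the full population, death depends on *U* alone and is independent of obesity by construction.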

The “revolution” is partly founded on complex mathematics, and on concepts such as “counterfactuals,” as well as on attractive “causal diagrams” like Directed Acyclic Graphs (DAGs). Causal diagrams are a simple way to encode our subject-matter knowledge, and our assumptions, about the qualitative causal structure of a problem. Causal diagrams also encode information about potential associations between the variables in the causal network. DAGs must be drawn following rules much more strict than the informal, heuristic graphs that we all use intuitively. Amazingly, but not surprisingly, the new approaches provide insights that are beyond most methods in current use. In particular, the new methods go far deeper and beyond the methods of “modern epidemiology,” a methodological, conceptual, and partly ideological current whose main emergence took place in the 1980s, led by statisticians and epidemiologists such as O. Miettinen, B. MacMahon, K. Rothman, S. Greenland, S. Lemeshow, D. Hosmer, P. Armitage, J. Fleiss, D. Clayton, M. Susser, D. Rubin, G. Guyatt, D. Altman, J. Kalbfleisch, R. Prentice, N. Breslow, N. Day, D. Kleinbaum, and others.

We live exciting days of paradox deconstruction. It is probably part of a wider cultural phenomenon, if you think of the “deconstruction of the Spanish omelette” authored by Ferran Adrià when he was the world-famous chef at the elBulli restaurant. Yes, just kidding.

Right now I cannot find a better or easier way to document the possible “revolution” in epidemiological and clinical research. Worse, I cannot find a firm way to assess whether my impressions are true. No doubt this is partly due to my ignorance in the social sciences. Actually, I don’t know much about social studies of science, epistemic communities, or knowledge construction. Maybe this is why I claimed that a sociology of epidemiology is much needed. A sociology of epidemiology would apply the scientific principles and methods of sociology to the science, discipline, and profession of epidemiology in order to improve understanding of the wider social causes and consequences of epidemiologists’ professional and scientific organization, patterns of practice, ideas, knowledge, and cultures (e.g., institutional arrangements, academic norms, scientific discourses, defense of identity, and epistemic authority). It could also address the patterns of interaction of epidemiologists with other branches of science and professions (e.g., clinical medicine, public health, the other health, life, and social sciences), and with social agents, organizations, and systems (e.g., the economic, political, and legal systems). I believe the tradition of sociology in epidemiology is rich, while the sociology of epidemiology is virtually uncharted (in the sense of neither mapped nor surveyed) and unchartered (i.e., not furnished with a charter or constitution).

Another way I can suggest to look at what may be happening with clinical and epidemiological research methods is to read the changes that we are witnessing in the definitions of basic concepts such as risk, rate, risk ratio, attributable fraction, bias, selection bias, confounding, residual confounding, interaction, cumulative and density sampling, open population, test hypothesis, null hypothesis, causal null, causal inference, Berkson’s bias, Simpson’s paradox, frequentist statistics, generalizability, representativeness, missing data, standardization, or overadjustment. The possible existence of a “revolution” might also be assessed in recent and new terms such as collider, M-bias, causal diagram, backdoor (biasing path), instrumental variable, negative controls, inverse probability weighting, identifiability, transportability, positivity, ignorability, collapsibility, exchangeable, g-estimation, marginal structural models, risk set, immortal time bias, Mendelian randomization, nonmonotonic, counterfactual outcome, potential outcome, sample space, or false discovery rate.

You may say: “And what about textbooks? Are they changing dramatically? Has anyone changed the rules?” Well, the new generation of textbooks is just emerging, and very few people have yet read them. Two good examples are the already mentioned text by Hernán and Robins, and the soon-to-be-published *Explanation in causal inference: Methods for mediation and interaction* by T. VanderWeele (Oxford University Press, 2015). Clues can also be found in widely used textbooks by K. Rothman et al. (*Modern Epidemiology*, Lippincott-Raven, 2008), M. Szklo and J. Nieto (*Epidemiology: Beyond the Basics*, Jones & Bartlett, 2014), or L. Gordis (*Epidemiology*, Elsevier, 2009). Above all, the foundations of the current revolution can be seen in *Causality: Models, Reasoning and Inference* by Judea Pearl (2nd edition, Cambridge University Press, 2009).

Finally, another good way to assess what might be changing is to read what gets published in top journals such as *Epidemiology*, the *International Journal of Epidemiology*, the *American Journal of Epidemiology*, or the *Journal of Clinical Epidemiology*. Pick up any issue of the main epidemiologic journals and you will find several examples of what I suspect is going on. If you feel like it, look for the DAGs. I recently saw a tweet saying “A DAG in The Lancet!”. It was a surprise: major clinical journals are lagging behind. But they will soon follow and adopt the new methods: the clinical relevance of the latter is huge. Or is it not such a big deal? If no “revolution” is going on, how are we to know?

*Feature image credit: Test tubes by PublicDomainPictures. Public Domain via Pixabay.*


The post Recurring decimals, proof, and ice floes appeared first on OUPblog.

Partly, of course, so they develop thinking skills to use on questions whose truth-status they won’t know in advance. Another part, however, concerns the dialogue nature of proof: a proof must be not only correct, but also persuasive: and persuasiveness is not objective and absolute, it’s a two-body problem. Not only to tango does one need two.

The statements — (1) ice floats on water, (2) ice is less dense than water — are widely acknowledged as facts and, usually, as interchangeable facts. But although rooted in everyday experience, they are not that experience. We have firstly represented stuffs of experience by sounds English speakers use to stand for them, then represented these sounds by word-processor symbols that, by common agreement, stand for them. Two steps away from reality already! This is what humans do: we invent symbols for perceived realities and, eventually, evolve procedures for manipulating them in ways that mirror how their real-world origins behave. Virtually no communication between two persons, and possibly not much internal dialogue within one mind, can proceed without this. Man is a symbol-using animal.

Statement (1) counts as fact because folk living in cooler climates have directly observed it throughout history (and conflicting evidence is lacking). Statement (2) is factual in a significantly different sense, arising by further abstraction from (1) and from a million similar experiential observations. Partly to explain (1) and its many cousins, we have conceived ideas like mass, volume, ratio of mass to volume, and explored for generations towards the conclusion that mass-to-volume works out the same for similar materials under similar conditions, and that the comparison of mass-to-volume ratios predicts which materials will float upon others.

Statement (3): 19 is a prime number. In what sense is this a fact? Its roots are deep in direct experience: the hunter-gatherer wishing to share nineteen apples equally with his two brothers or his three sons or his five children must have discovered that he couldn’t without extending his circle of acquaintance so far that each got only one, long before he had a name for what we call ‘nineteen’. But (3) is many steps away from the experience where it is grounded. It involves conceptualisation of numerical measurements of sets one encounters, and millennia of thought to acquire symbols for these and codify procedures for manipulating them in ways that mirror how reality functions. We’ve done this so successfully that it’s easy to forget how far from the tangibles of experience they stand.

Statement (4): √2 is not exactly the ratio of two whole numbers. Most first-year mathematics students know this. But by this stage of abstraction, separating its fact-ness from its demonstration is impossible: the property of being exactly a fraction is not detectable by physical experience. It is a property of how we abstracted and systematised the numbers that proved useful in modelling reality, not of our hands-on experience of reality. The reason we regard √2’s irrationality as factual is precisely because we can give a demonstration within an accepted logical framework.

What then about recurring decimals? For persuasive argument, first ascertain the distance from reality at which the question arises: not, in this case, the rarified atmosphere of undergraduate mathematics but the primary school classroom. Once a child has learned rituals for dividing whole numbers and the convenience of decimal notation, she will try to divide, say, 2 by 3 and will hit a problem. The decimal representation of the answer does not cease to spew out digits of lesser and lesser significance no matter how long she keeps turning the handle. What should we reply when she asks whether zero point infinitely many 6s is or is not two thirds, or even — as a thoughtful child should — whether zero point infinitely many 6s is a legitimate symbol at all?

The answer must be tailored to the questioner’s needs, but the natural way forward — though it took us centuries to make it logically watertight! — is the nineteenth-century definition of the sum of an infinite series. For the primary school kid it may suffice to say that, by writing down enough 6s, we’d get as close to 2/3 as we’d need for any practical purpose. For differential calculus we’d need something better, and for model-theoretic discourse involving infinitesimals something better again. Yet the underpinning mathematics for equalities like 0.6666… = 2/3 where the question arises is the nineteenth-century one. Its fact-ness therefore resembles that of ice being less dense than water, of 19 being prime, or of √2 being irrational. It can be demonstrated within a logical framework that systematises our observations of real-world experiences. So it is a fact not about reality but about the models we build to explain reality. Demonstration is the only tool available for establishing its truth.
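The nineteenth-century definition is easy to make concrete with exact rational arithmetic (a sketch of mine, not from the post): the partial sums 0.6, 0.66, 0.666, … differ from 2/3 by exactly (2/3)/10^n, so “as close as we’d need for any practical purpose” hardens into a genuine limit.

```python
from fractions import Fraction

# Partial sums of 6/10 + 6/100 + ... compared exactly against 2/3.
partial = Fraction(0)
for k in range(1, 31):
    partial += Fraction(6, 10**k)
    gap = Fraction(2, 3) - partial
    assert gap == Fraction(2, 3) / 10**k  # the gap shrinks tenfold each step

print(float(partial))  # 0.6666666666666666 -- indistinguishable from 2/3 as a float
```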

Mathematics without proof is not like an omelette without salt and pepper; it is like an omelette without egg.

*Headline image credit: Floating ice sheets in Antarctica. CC0 via Pixabay.*


The post Why study paradoxes? appeared first on OUPblog.

In 2002 I was attending a conference on self-reference in Copenhagen, Denmark. During one of the breaks I got a chance to chat with Raymond Smullyan, who is, amongst other things, an accomplished magician, a distinguished mathematical logician, and perhaps the most well-known popularizer of ‘Knight and Knave’ (K&K) puzzles.

K&K puzzles involve an imaginary island populated by two tribes: the Knights and the Knaves. Knights always tell the truth, and Knaves always lie (further, members of both tribes are forbidden to engage in activities that might lead to paradoxes or situations that break these rules). Other than their linguistic behavior, there is nothing that distinguishes Knights from Knaves.

Typically, K&K puzzles involve trying to answer questions based on assertions made by, or questions answered by, an inhabitant of the island. For example, a classic K&K puzzle involves meeting an islander at a fork in the road, where one path leads to riches and success and the other leads to pain and ruin. You are allowed to ask the islander one question, after which you must pick a path. Not knowing to which tribe the islander belongs, and hence whether she will lie or tell the truth, what question should you ask?

(Answer: You should ask “Which path would someone from the other tribe say was the one leading to riches and success?”, and then take the path *not* indicated by the islander).
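The answer can also be checked mechanically by enumerating the four possible worlds (my own verification sketch, not Smullyan’s): whichever tribe the islander belongs to, the reply names the bad path, so taking the other path always succeeds.

```python
def other(path):
    return "right" if path == "left" else "left"

def reply(tribe, good_path):
    """The islander's answer to: 'Which path would someone from the
    OTHER tribe say leads to riches?'"""
    # What a member of the other tribe would actually name:
    other_tribe_says = good_path if tribe == "knave" else other(good_path)
    # A Knight reports that truthfully; a Knave lies about it.
    return other_tribe_says if tribe == "knight" else other(other_tribe_says)

for tribe in ("knight", "knave"):
    for good_path in ("left", "right"):
        assert other(reply(tribe, good_path)) == good_path
print("the path NOT named is always the good one")
```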

Back to Copenhagen in 2002: Seizing my chance, I challenged Smullyan with the following K&K puzzle, of my own devising:

There is a nightclub on the island of Knights and Knaves, known as the Prime Club. The Prime Club has one strict rule: the number of occupants in the club must be a prime number at all times.

The Prime Club also has strict bouncers (who stand outside the doors and do not count as occupants) enforcing this rule. In addition, a strange tradition has become customary at the Prime Club: Every so often the occupants form a conga line, and sing a song. The first lyric of the song is:

“At least one of us in the club is a Knave.”

and is sung by the first person in the line. The second lyric of the song is:

“At least two of us in the club are Knaves.”

and is sung by the second person in the line. The third person (if there is one) sings:

“At least three of us in the club are Knaves.”

And so on down the line, until everyone has sung a verse.

One day you walk by the club, and hear the song being sung. How many people are in the club?

Smullyan’s immediate response to this puzzle was something like “That can’t be solved – there isn’t enough information”. But he then stood alone in the corner of the reception area for about five minutes, thinking, before returning to confidently (and correctly, of course) answer “Two!”

I won’t spoil things by giving away the solution – I’ll leave that mystery for interested readers to solve on their own. (Hint: if the song is sung with any other prime number of islanders in the club, a paradox results!) I will note that the song is equivalent to a more formal construction involving a list of sentences of the form:

At least one of sentences S_{1} – S_{n} is false.

At least two of sentences S_{1} – S_{n} are false.

⋮

At least n of sentences S_{1} – S_{n} are false.
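For readers who would rather let a machine do the case analysis, here is a brute-force sketch of mine (it confirms the hint rather than replacing the pencil-and-paper proof): singer *k*’s verse is true exactly when at least *k* singers are Knaves, and a Knave’s verse must be false. Only an even headcount turns out to be coherent, and the only even prime is 2.

```python
from itertools import product

def club_solutions(n):
    """Coherent Knight/Knave labellings when singer k (k = 1..n)
    sings 'At least k of us in the club are Knaves.'"""
    solutions = []
    for labels in product([True, False], repeat=n):   # True = Knight
        knaves = labels.count(False)
        if all(labels[k] == (knaves >= k + 1) for k in range(n)):
            solutions.append(labels)
    return solutions

print(club_solutions(2))         # [(True, False)]: a Knight, then a Knave
for n in (3, 5, 7, 11, 13):
    print(n, club_solutions(n))  # odd primes: no coherent labelling at all
```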

The point of this story isn’t to brag about having stumped a famous logician (even for a mere five minutes), although I admit that this episode (not only stumping Smullyan, but meeting him in the first place) is still one of the highlights of my academic career.

Instead, the story, and the puzzle at the center of it, illustrates the reasons why I find paradoxes so fascinating and worthy of serious intellectual effort. The standard story regarding why paradoxes are so important is that, although they are sometimes silly in-and-of-themselves, paradoxes indicate that there is something deeply flawed in our understanding of some basic philosophical notion (truth, in the case of the semantic paradoxes linked to K&K puzzles).

Another reason for their popularity is that they are a lot of fun. Both of these are really good reasons for thinking deeply about paradoxes. But neither is the real reason why I find them so fascinating. The real reason I find paradoxes so captivating is that they are much more mathematically complicated, and as a result much more mathematically interesting, than standard accounts (which typically equate paradoxes with the presence of some sort of circularity) might have you believe.

The Prime Club puzzle demonstrates that whether a particular collection of sentences is or is not paradoxical can depend on all sorts of surprising mathematical properties, such as whether there is an even or odd number of sentences in the collection, or whether the number of sentences in the collection is prime or composite, or all sorts of even weirder and more surprising conditions.

Other examples demonstrate that whether a construction (or, equivalently, a K&K story) is paradoxical can depend on whether the referential relation involved in the construction (i.e. the relation that holds between two sentences if one refers to the other) is symmetric, or is transitive.

The paradoxicality of still another type of construction, involving infinitely many sentences, depends on whether cofinitely many of the sentences each refer to cofinitely many of the other sentences in the construction (a set is cofinite if its complement is finite). And this only scratches the surface!

The more I think about and work on paradoxes, the more I marvel at how complicated the mathematical conditions for generating paradoxes are: it takes a lot more than the mere presence of circularity to generate a mathematical or semantic paradox, and stating exactly what is minimally required is still too difficult a question to answer precisely. And that’s why I work on paradoxes: their surprising mathematical complexity and mathematical beauty. Fortunately for me, there is still a lot of work that remains to be done, and a lot of complexity and beauty remaining to be discovered.


The post Special events and the dynamical statistics of Twitter appeared first on OUPblog.

One main idea for deriving warning signs is to monitor the fluctuations of the dynamical process by calculating the variance of a suitable monitoring variable. When the tipping point is approached via a slowly-drifting parameter, the stabilizing effects of the system slowly diminish and the noisy fluctuations increase via certain well-defined scaling laws.

Based upon these observations, it is natural to ask whether these scaling laws are also present in human social networks and can allow us to make predictions about future events. This is an exciting open problem, to which at present only highly speculative answers can be given. It is indeed difficult to predict *a priori* unknown events in a social system. Therefore, as an initial step, we try to reduce the problem to a much simpler one, to understand whether the same mechanisms which have been observed in the context of natural sciences and engineering could also be present in sociological domains.

In our work, we provide a very first step towards tackling a substantially simpler question by focusing on *a priori* known events. We analyse a social media data set with a focus on classical variance and autocorrelation scaling law warning signs. In particular, we consider a few events which are known to occur at a specific time of the year, e.g., Christmas, Halloween, and Thanksgiving. Then we consider time series of the frequency of Twitter hashtags related to the considered events a few weeks before the actual event, but excluding the event date itself and some time period before it.

Now suppose we do not know that a dramatic spike in the number of Twitter hashtags, such as #xmas or #thanksgiving, will occur on the actual event date. Are there signs of the same stochastic scaling laws observed in other dynamical systems visible some time before the event? The more fundamental question is: Are there similarities to known warning signs from other areas also present in social media data?

We answer this question affirmatively as we find that the *a priori* known events mentioned above are preceded by variance and autocorrelation growth (see Figure). Nevertheless, we are still very far from actually using social networks to predict the occurrence of many other drastic events. For example, it can also be shown that many spikes in Twitter activity are not predictable through variance and autocorrelation growth. Hence, a lot more research is needed to distinguish different dynamical processes that lead to large outburst of activity on social media.
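A minimal version of the two indicators can be sketched as follows. Everything here is illustrative: a synthetic AR(1) series whose restoring force weakens toward the “event” stands in for real hashtag counts, and the window length is arbitrary.

```python
import random

random.seed(0)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def lag1_autocorr(xs):
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

# Synthetic series: x_t = a_t * x_{t-1} + noise, with a_t drifting toward 1,
# so the system loses stability as the 'event' at t = n approaches.
n, series, x = 600, [], 0.0
for t in range(n):
    a = 0.2 + 0.75 * t / n
    x = a * x + random.gauss(0.0, 1.0)
    series.append(x)

window = 150
early, late = series[:window], series[-window:]
print(variance(early), variance(late))            # variance grows ...
print(lag1_autocorr(early), lag1_autocorr(late))  # ... and so does autocorrelation
```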

The findings suggest that further investigations of dynamical processes in social media would be worthwhile. Currently, a main focus in the research on social networks lies on structural questions, such as: Who connects to whom? How many connections do we have on average? Who are the hubs in social media? However, if one takes dynamical processes on the network, as well as the changing dynamics of the network topology, into account, one may obtain a much clearer picture of how social systems compare and relate to classical problems in physics, chemistry, biology, and engineering.


The post A Fields Medal reading list appeared first on OUPblog.

This year sees the first ever female recipient of the Fields Medal, Maryam Mirzakhani, recognised for her highly original contributions to geometry and dynamical systems. Her work bridges several mathematical disciplines – hyperbolic geometry, complex analysis, topology, and dynamics – and influences them in return.

We’re absolutely delighted for Professor Mirzakhani, who serves on the editorial board for *International Mathematics Research Notices*. To celebrate the achievements of all of the winners, we’ve put together a reading list of free materials relating to their work and to fellow speakers at the International Congress of Mathematicians.

**“Ergodic Theory of the Earthquake Flow” by Maryam Mirzakhani, published in International Mathematics Research Notices**

Noted by the International Mathematical Union as work contributing to Mirzakhani’s achievement, this paper investigates the dynamics of the earthquake flow defined by Thurston on the bundle *PMg* of geodesic measured laminations.

**“Ergodic Theory of the Space of Measured Laminations” by Elon Lindenstrauss and Maryam Mirzakhani, published in International Mathematics Research Notices**

A classification of locally finite invariant measures and orbit closure for the action of the mapping class group on the space of measured laminations on a surface.

**“Mass Formulae for Extensions of Local Fields, and Conjectures on the Density of Number Field Discriminants” by Manjul Bhargava, published in International Mathematics Research Notices**

Manjul Bhargava joins Maryam Mirzakhani amongst this year’s winners of the Fields Medal. Here he uses Serre’s mass formula for totally ramified extensions to derive a mass formula that counts all étale algebra extensions of a local field *F* having a given degree *n*.

**“Model theory of operator algebras” by Ilijas Farah, Bradd Hart, and David Sherman, published in International Mathematics Research Notices**

Several authors, some of whom are speaking at the International Congress of Mathematicians, have considered whether the ultrapower and the relative commutant of a C*-algebra or II1 factor depend on the choice of the ultrafilter.

**“Small gaps between products of two primes” by D. A. Goldston, S. W. Graham, J. Pintz, and C. Y. Yıldırım, published in Proceedings of the London Mathematical Society**

Speaking on the subject at the International Congress, Dan Goldston and colleagues prove several results relating to the representation of numbers with exactly two prime factors by linear forms.

**“On Waring’s problem: some consequences of Golubeva’s method” by Trevor D. Wooley, published in the Journal of the London Mathematical Society**

Wooley’s paper, as well as his talk at the congress, investigates sums of mixed powers involving two squares, two cubes, and various higher powers, concentrating on situations inaccessible to the Hardy-Littlewood method.

*Image credit: (1) Inner life of human mind and maths, © agsandrew, via iStock Photo. (2) Maryam Mirzakhani 2014. Photo by International Mathematical Union. Public Domain via Wikimedia Commons.*

The post A Fields Medal reading list appeared first on OUPblog.

]]>Philosophy is a bit like a computer with a memory leak. It starts well, dealing with significant and serious issues that matter to anyone. Yet, in time, its very success slows it down. Philosophy begins to care more about philosophers’ questions than philosophical ones, consuming increasing amounts of intellectual attention.

The post Rebooting Philosophy appeared first on OUPblog.

]]>

When we use a computer, its performance seems to degrade progressively. This is not a mere impression. An old version of Firefox, the free Web browser, was infamous for its “memory leaks”: it would consume increasing amounts of memory to the detriment of other programs. Bugs in the software actually do slow down the system. We all know what the solution is: reboot. We restart the computer, the memory is reset, and the performance is restored, until the bugs slow it down again.

Philosophy is a bit like a computer with a memory leak. It starts well, dealing with significant and serious issues that matter to anyone. Yet, in time, its very success slows it down. Philosophy begins to care more about philosophers’ questions than philosophical ones, consuming increasing amounts of intellectual attention. Scholasticism is the ultimate freezing of the system, the equivalent of Windows’ “blue screen of death”; so many resources are devoted to internal issues that no external input can be processed anymore, and the system stops. The world may be undergoing a revolution, but the philosophical discourse remains detached and utterly oblivious. Time to reboot the system.

Philosophical “rebooting” moments are rare. They are usually prompted by major transformations in the surrounding reality. Since the nineties, I have been arguing that we are witnessing one of those moments. It now seems obvious, even to the most conservative person, that we are experiencing a turning point in our history. The information revolution is profoundly changing every aspect of our lives, quickly and relentlessly. The list is known but worth recalling: education and entertainment, communication and commerce, love and hate, politics and conflicts, culture and health, … feel free to add your preferred topics; they are all transformed by technologies that have the recording and processing of information as their core functions. Meanwhile, philosophy is degrading into self-referential discussions on irrelevancies.

The result of a philosophical rebooting today can only be beneficial. Digital technologies are not just tools merely modifying how we deal with the world, like the wheel or the engine. They are above all formatting systems, which increasingly affect how we understand the world, how we relate to it, how we see ourselves, and how we interact with each other.

The ‘Fourth Revolution’ betrays what I believe to be one of the topics that deserves our full intellectual attention today. The idea is quite simple. Three scientific revolutions have had great impact on how we see ourselves. In changing our understanding of the external world they also modified our self-understanding. After the Copernican revolution, the heliocentric cosmology displaced the Earth and hence humanity from the centre of the universe. The Darwinian revolution showed that all species of life have evolved over time from common ancestors through natural selection, thus displacing humanity from the centre of the biological kingdom. And following Freud, we acknowledge nowadays that the mind is also unconscious. So we are not immobile, at the centre of the universe, we are not unnaturally separate and diverse from the rest of the animal kingdom, and we are very far from being minds entirely transparent to ourselves. One may easily question the value of this classic picture. After all, Freud was the first to interpret these three revolutions as part of a single process of reassessment of human nature and his perspective was blatantly self-serving. But replace Freud with cognitive science or neuroscience, and we can still find the framework useful to explain our strong impression that something very significant and profound has recently happened to our self-understanding.

Since the fifties, computer science and digital technologies have been changing our conception of who we are. In many respects, we are discovering that we are not standalone entities, but rather interconnected informational agents, sharing with other biological agents and engineered artefacts a global environment ultimately made of information, the infosphere. If we need a champion for the fourth revolution this should definitely be Alan Turing.

The fourth revolution offers a historical opportunity to rethink our exceptionalism in at least two ways. Our intelligent behaviour is confronted by the smart behaviour of engineered artefacts, which can be adaptively more successful in the infosphere. Our free behaviour is confronted by the predictability and manipulability of our choices, and by the development of artificial autonomy. Digital technologies sometimes seem to know more about our wishes than we do. We need philosophy to make sense of the radical changes brought about by the information revolution. And we need it to be at its best, for the difficulties we are facing are challenging. Clearly, we need to reboot philosophy now.

Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford, Senior Research Fellow at the Oxford Internet Institute, and Fellow of St Cross College, Oxford. He was recently appointed as ethics advisor to Google. His most recent book is

The Fourth Revolution: How the Infosphere is Reshaping Human Reality.

Subscribe to the OUPblog via email or RSS.

Subscribe to only philosophy articles on the OUPblog via email or RSS.

*Image credit: Alan Turing Statue at Bletchley Park. By Ian Petticrew. CC-BY-SA-2.0 via Wikimedia Commons.*

The post Rebooting Philosophy appeared first on OUPblog.

]]>Suppose you are watching a tennis match between Novak Djokovic and Rafael Nadal. The commentator says: “Djokovic serves first in the set, so he has an advantage.” Why would this be the case? Perhaps because he is then ‘always’ one game ahead, thus serving under less pressure. But does it actually influence him and, if so, how?

The post Does the “serving-first advantage” actually exist? appeared first on OUPblog.

]]>

Suppose you are watching a tennis match between Novak Djokovic and Rafael Nadal. The commentator says: “Djokovic serves first in the set, so he has an advantage.” Why would this be the case? Perhaps because he is then ‘always’ one game ahead, thus serving under less pressure. But does it actually influence him and, if so, how?

Now we come to the seventh game, which some consider to be the most important game in the set. But is it? Nadal serves an ace at break point down (30-40). Of course! Real champions win the big points, but they win most points on service anyway. At first, it may appear that real champions outperform on big points, but it turns out that weaker players underperform, so that it only seems that the champions outperform. And Nadal goes on to win three consecutive games. He is in a winning mood, the momentum is on his side. But does a ‘winning mood’ actually exist in tennis? (*Spoiler*: It does, but it is smaller than many expect.)

To figure out whether the “serving-first advantage” actually exists, we can use data on more than one thousand sets played at Wimbledon in order to calculate how often the player who served first also won the set. This statistic shows that for the men there is a slight advantage in the first set, but no advantage in the other sets.

On the contrary, in the other sets, there is actually a disadvantage: the player who serves first in the set is more likely to lose the set than to win it. This is surprising. Perhaps it is different for the women? But no, the same pattern occurs in the women’s singles.

It so happens that the player who serves first in a set (if it is not the first set) is usually the weaker player. This is so, because (a) the stronger player is more likely to win the previous set, and (b) the previous set is more likely won by serving the set out rather than by breaking serve. Therefore, the stronger player typically wins the previous set on service, so that the weaker player serves first in the next set. The weaker player is more likely to lose the current set as well, not because of a service (dis)advantage, but because he or she is the weaker player.

This example shows that we must be careful when we try to draw conclusions based on simple statistics. The fact that the player who serves first in the second and subsequent sets often loses the set is true, but this primarily concerns weaker players, while the original hypothesis includes all players. Therefore, we must control for quality differences, and statistical models enable us to do that properly. It then becomes clear that there is no advantage or disadvantage for the player who serves first in the second or subsequent sets; but it does matter in the first set, so it is wise to elect to serve after winning the toss.

Franc Klaassen is Professor of International Economics at the University of Amsterdam. Jan R. Magnus is Emeritus Professor at Tilburg University and Visiting Professor of Econometrics at the Vrije Universiteit Amsterdam. They are the co-authors of *Analyzing Wimbledon: The Power of Statistics*.

Subscribe to the OUPblog via email or RSS.

Subscribe to only business and economics articles on the OUPblog via email or RSS.

*Image Credit: “Wimbledon Centre Court Panoramic: Rafael Nadal vs Del Potro” (2011) by Rian (Ree) Saunders. CC BY 2.0 via Flickr.*

The post Does the “serving-first advantage” actually exist? appeared first on OUPblog.

]]>Nowadays it appears impossible to open a newspaper or switch on the television without hearing about “big data”. Big data, it sometimes seems, will provide answers to all the world’s problems. Management consulting company McKinsey, for example, promises “a tremendous wave of innovation, productivity, and growth … all driven by big data”.

The post Statistics and big data appeared first on OUPblog.

]]>

Nowadays it appears impossible to open a newspaper or switch on the television without hearing about “big data”. Big data, it sometimes seems, will provide answers to all the world’s problems. Management consulting company McKinsey, for example, promises “a tremendous wave of innovation, productivity, and growth … all driven by big data”.

An alien observer visiting the Earth might think it represents a major scientific breakthrough. Google Trends shows references to the phrase bobbing along at about one per week until 2011, at which point there began a dramatic, steep, and almost linear increase in references to the phrase. It’s as if no one had thought of it until 2011. Which is odd because data mining, the technology of extracting valuable, useful, or interesting information from large data sets, has been around for some 20 years. And statistics, which lies at the heart of all of this, has been around as a formal discipline for a century or more.

Or perhaps it’s not so odd. If you look back to the beginning of data mining, you find a very similar media enthusiasm for the advances it was going to bring, the breakthroughs in understanding, the sudden discoveries, the deep insights. In fact it almost looks as if we have been here before. All of this leads one to suspect that there’s less to the big data enthusiasm than meets the eye. That it’s not so much a sudden change in our technical abilities as a sudden media recognition of what data scientists, and especially statisticians, are capable of.

Of course, I’m not saying that the increasing size of data sets does not lead to promising new opportunities – though I would question whether it’s the “large” that really matters as much as the novelty of the data sets. The tremendous economic impact of GPS data (estimated to be $150-270bn per year), retail transaction data, or genomic and bioinformatics data arises not from the size of these data sets, but from the fact that they provide new kinds of information. And while it’s true that a massive mountain of data needed to be explored to detect the Higgs boson, the core aspect was the nature of the data rather than its amount.

Moreover, if I’m honest, I also have to admit that it’s not solely statistics which leads to the extraction of value from these massive data sets. Often it’s a combination of statistical inferential methods (e.g. determining an accurate geographical location from satellite signals) along with data manipulation algorithms for search, matching, sorting and so on. How these two aspects are balanced depends on the particular application. Locating a shop which stocks that out-of-print book is less of an inferential statistical problem and more of a search issue. Determining the riskiness of a company seeking a loan owes little to search but much to statistics.

Some time after the phrase “data mining” hit the media, it suffered a backlash. Predictably enough, much of this was based around privacy concerns. A paradigmatic illustration was the *Total Information Awareness* project in the United States. Its basic aim was to search for suspicious behaviour patterns within vast amounts of personal data, to identify individuals likely to commit crimes, especially terrorist offences. It included data on web browsing, credit card transactions, driving licences, court records, passport details, and so on. After concerns were raised, it was suspended in 2003 (though it is claimed that the software continued to be used by various agencies). As will be evident from recent events, concerns about the security agencies’ monitoring of the public continue.

The key question is whether proponents of the huge potential of big data and its allied notion of open data are learning from the past. Recent media concern in the UK about the merging of family doctor records with hospital records, leading to a six-month delay in the launch of the project, illustrates the danger. Properly informed debate about the promise and the risks is vital.

Technology is amoral — neither intrinsically moral nor immoral. Morality lies in the hands of those who wield it. This is as true of big data technology as it is of nuclear technology and biotechnology. It is abundantly clear — if only from the examples we have already seen — that massive data sets do hold substantial promise for enhancing the well-being of mankind, but we must be aware of the risks. A suitable balance must be struck.

It’s also important to note that the mere existence of huge data files is of itself of no benefit to anyone. For these data sets to be beneficial, it’s necessary to be able to use the data to build models, to estimate effect sizes, to determine if an observed effect should be regarded as mere chance variation, to be sure it’s not a data quality issue, and so on. That is, statistical skills are critical to making use of the big data resources. In just the same way that vast underground oil reserves were useless without the technology to turn them into motive power, so the vast collections of data are useless without the technology to analyse them. Or, as I sometimes put it, *people don’t want data, what they want are answers*. And statistics provides the tools for finding those answers.
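One of those statistical skills is deciding whether an observed effect should be regarded as mere chance variation. The permutation test below is a minimal sketch of that idea, using made-up click-through counts for two hypothetical website variants (the data and scenario are purely illustrative).

```python
import random

random.seed(0)

# Hypothetical click-through counts for two website variants
a = [12, 15, 11, 14, 13, 16, 12, 15]
b = [14, 17, 15, 18, 16, 19, 15, 17]
observed = sum(b) / len(b) - sum(a) / len(a)

# Permutation test: if the variant labels were arbitrary, how often would
# a random relabelling produce a difference at least as large as observed?
pooled = a + b
count = 0
n_perm = 10_000
for _ in range(n_perm):
    random.shuffle(pooled)
    perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
    diff = sum(perm_b) / len(perm_b) - sum(perm_a) / len(perm_a)
    if diff >= observed:
        count += 1

p_value = count / n_perm
# A small p-value suggests the difference is not mere chance variation
print(p_value)
```

The same few lines of reasoning scale, in principle, from eight numbers to millions: the data get bigger, but it is the inference that turns them into an answer.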

David J. Hand is Professor of Statistics at Imperial College London and author of *Statistics: A Very Short Introduction*.

The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with OUPblog and the VSI series every Friday and like Very Short Introductions on Facebook. Subscribe to Very Short Introductions articles on the OUPblog via email or RSS.

Subscribe to the OUPblog via email or RSS.

Subscribe to only mathematics articles on the OUPblog via email or RSS.

*Image credit: Diagram of Total Information Awareness system designed by the Information Awareness Office. Public domain via Wikimedia Commons*

The post Statistics and big data appeared first on OUPblog.

]]>Politically, socially, and culturally, the 1960s were tumultuous times. But tucked away amidst the folds of the Cold War, civil rights activism, anti-war demonstrations, the feminist movement, revolts of students and workers, flower power, sit-ins, Marxist and Maoist revolutions – almost unnoticed -- a new science was born in university campuses across North America, Britain, Europe and even, albeit tentatively, certain non-Western parts of the world.

The post The genesis of computer science appeared first on OUPblog.

]]>

]]>Politically, socially, and culturally, the 1960s were tumultuous times. But tucked away amidst the folds of the Cold War, civil rights activism, anti-war demonstrations, the feminist movement, revolts of students and workers, flower power, sit-ins, Marxist and Maoist revolutions — almost unnoticed — a new science was born in university campuses across North America, Britain, Europe, and even, albeit tentatively, certain non-Western parts of the world. This new science acquired a name of its own: *computer science* (or some variation thereof: ‘computing science’, ‘informatique’, ‘informatik’).

At the heart of this new science was the process by which symbols, representing information, could be automatically (or with minimal human intervention) transformed into other symbols (representing other kinds or new information). This process was called, variously, *automatic computation, information processing, *or *symbol processing*. The agent of this process was the artifact named, generically, *computer.*

The computer is an *automaton*. In the past, this word, ‘automaton’ (coined in the 17th century), was used to mean an artifact which, largely driven by its own source of motive power, performs certain repetitive patterns of movement and action without any external influence. Often, these actions imitated those of humans and animals. Ingenious mechanical automata had been invented since antiquity, largely for the amusement of the wealthy, though some were of a more utilitarian nature (such as the water-driven devices described in the 1st century CE by the engineer/inventor Hero of Alexandria).

So mechanical automata that carry out physical actions of one sort or another form a venerable tradition. But the automatic electronic digital computer marked the birth of a whole new genus of automata, for this artifact was designed or intended to imitate human thinking; and, indeed, to extend or even replace humans in some of their highest cognitive capacities. Such was the power and scope of this artifact, it became the fount of a socio-technological revolution now commonly referred to as the Information Revolution, and a brand new science, computer science.

But computer science is not a *natural* science. It is not of the same kind as, say, physics, chemistry, biology, or astronomy. The gazes of these sciences are directed toward the natural world, inorganic and organic. The domain of computer science is the artificial world, the world of made objects, of artifacts — in particular, *computational artifacts*. Computer science is a *science of the artificial*, to use a term coined by Nobel laureate polymath scientist Herbert Simon.

A fundamental difference between a natural science like physics and an artificial science such as computer science relates to the age old philosophical distinction between *is* and *ought*. The natural scientist is concerned with the world *as it is*; she is not in the business of deliberately changing the natural world. Thus, the astronomer peering at the cosmos does not desire to change it but to understand it; the paleontologist examining rock layers in search of fossils is doing this to learn more about the history of life on earth, not to change the earth (or life) itself. For the natural scientist, understanding the natural world is an end in itself.

The scientist of the artificial also wishes to understand, not nature but artifacts. However that desire is a means to an end, for the scientist of the artificial, ultimately, wishes to *alter *the world in some respect. Thus the computer scientist wants to alter some aspect of the world by creating computational artifacts as improvements on existing one, or by creating new computational artifacts that have never existed before. If the natural scientist is concerned with the world *as it is*, the computer scientist obsesses with the world as she thinks *it ought to be*. For computer scientists, like other scientists of the artificial (such as engineering scientists) their domain comprises of artifacts that are intended to serve some purpose. An astronomer does not ask what a particular galaxy or planet is *for*; it just *is*. A computer scientist, striving to understand a particular computational artifact begins with the purpose for which it was created. Artifacts are imbued with purpose, reflecting the purposes or goals imagined for them by their human creators.

So how was this science of the artificial called computer science born? Where, when, and how did it begin? Who were its creators? What kinds of purposes drove the birth of this science? What were its seminal ideas? What makes it distinct from other, more venerable, sciences of the artificial? Was the genesis of computer science evolutionary or revolutionary? A ‘big bang’ or a ‘steady state’ birth? These are the kinds of questions that interest historians of science peering into the origins of what is one of the youngest artificial sciences of the 20th century.

Subrata Dasgupta is the Computer Science Trust Fund Eminent Scholar Chair in the School of Computing & Informatics at the University of Louisiana at Lafayette, where he is also a professor in the Department of History. Dasgupta has written fourteen books, most recently

It Began with Babbage: The Genesis of Computer Science.

Subscribe to the OUPblog via email or RSS.

Subscribe to only technology articles on the OUPblog via email or RSS.

*Image Credit: A reflection of a man typing on a laptop computer. Photo by Matthew Roth. CC-BY-SA-3.0 via Wikimedia Commons.*

The post The genesis of computer science appeared first on OUPblog.

]]>Fractal shapes, as visualizations of mathematical equations, are astounding to look at. But fractals look even more amazing in their natural element—and that happens to be in more places than you might think.

The post Fractal shapes and the natural world appeared first on OUPblog.

]]>

Fractal shapes, as visualizations of mathematical equations, are astounding to look at. But fractals look even more amazing in their natural element—and that happens to be in more places than you might think.
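For readers curious about the "visualizations of mathematical equations" side, the most famous fractal takes only a few lines of code. This is an illustrative sketch, not material from the book: a point *c* belongs to the Mandelbrot set if iterating z → z² + c from z = 0 never escapes to infinity.

```python
def in_mandelbrot(c, max_iter=50):
    """True if iterating z -> z*z + c from z = 0 stays bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # escaped: definitely outside the set
            return False
    return True

# Coarse ASCII rendering of the set on a small grid
for row in range(12):
    line = ""
    for col in range(40):
        c = complex(-2.2 + col * 0.075, -1.1 + row * 0.2)
        line += "#" if in_mandelbrot(c) else " "
    print(line)
```

Even at this crude resolution the familiar cardioid-and-bulb outline emerges, which is part of what makes fractals so striking to look at.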

Kenneth Falconer is a mathematician who specializes in fractal geometry and related topics. He is Professor of Pure Mathematics at the University of St Andrews and a member of the Analysis Research Group of the School of Mathematics and Statistics. Kenneth’s main research interests are in fractal and multifractal geometry, geometric measure theory, and related areas. He has published over 100 papers in mathematical journals. He is the author of

Fractals: A Very Short Introduction.

Subscribe to the OUPblog via email or RSS.

Subscribe to only mathematics articles on the OUPblog via email or RSS.


The post Fractal shapes and the natural world appeared first on OUPblog.

With the arrival of the new year, you can be certain that the annual extravaganza known as the Joint Mathematics Meetings cannot be far behind. This year’s conference is taking place in Baltimore, Maryland. It is perhaps more accurate to say that it is a conference of conferences, since much of the business to be transacted will take place in smaller sessions devoted to this or that branch of mathematics.

The post The *real* unsolved problems of mathematics appeared first on OUPblog.

With the arrival of the new year, you can be certain that the annual extravaganza known as the Joint Mathematics Meetings cannot be far behind. This year’s conference is taking place in Baltimore, Maryland. It is perhaps more accurate to say that it is a conference of conferences, since much of the business to be transacted will take place in smaller sessions devoted to this or that branch of mathematics. In these sessions, researchers at the cutting edge of the discipline will discuss the most recent developments on the biggest open problems in the business. It will all be terribly clever and largely impenetrable. You can be certain, however, that the real open questions of mathematics will barely be addressed.

It is hardly a secret that large conferences like this are as much about socialization as they are about research. This presents some problems, since the Joint Meetings can be a minefield of social awkwardness and ambiguous etiquette.

For example, imagine that you are walking across the lobby and you notice someone you know slightly coming the other way. Should you stop and chat? Or is a nod of acknowledgement sufficient? If you do stop, what sort of greeting is appropriate? A handshake? A hug? And how do you exit the conversation once the idle chit chat runs out? Sometimes you stop and chat, and then someone friendlier with the other person arrives to interrupt. One minute you’re making small talk about your recent job histories, and the next you’re just standing there watching your conversation partner make dinner plans with someone who just appeared. Now what do you do? Usually your only course is to mutter something about being late for a talk and then slink off with whatever dignity you can muster.

The exhibition center presents its own problems. How long can you stand in one place perusing a book before it becomes rude? Quite a while, apparently, if we are to judge from some of the stingier characters we inevitably meet. If the book is that interesting, just buy it and be done with it. Come to think of it, when you are standing there looking through books, what is the maximum allowable angle to which you can separate the covers? Cracking the spine is definitely frowned upon. How many Hershey’s miniatures can you reasonably pilfer from the MAA booth? Which book should you buy to burn up your AMS points? Let me suggest that the answer to that one depends on which book will look best on your shelf, since you know full well you are never going to read it.

Actually presenting a talk brings with it some challenges of its own. Perhaps you are giving a contributed talk, and you get the first slot after lunch. So it’s just you, the person speaking after you, and whoever drew the short straw for moderation duty. Do you acknowledge the lack of an audience? Or do you go through the motions like you’re keynoting? After giving your talk, is it acceptable simply to leave? Or are you ethically obligated to stay for the talk right after yours? What do you do if you notice an error in someone else’s talk? Should you expose it to the world during the question period, or just discuss it privately with the speaker afterward?

Perhaps we need a special session to discuss these questions. That, at least, would be a session where everyone could understand what was being said. On the other hand, given the occasionally strained relationship between mathematicians and social graces, perhaps I should not be so cavalier about that.

Jason Rosenhouse is Associate Professor of Mathematics at James Madison University. He is the author of Taking Sudoku Seriously: The Math Behind the World’s Most Popular Pencil Puzzle with Laura Taalman; The Monty Hall Problem: The Remarkable Story of Math’s Most Contentious Brain Teaser; and Among The Creationists: Dispatches from the Anti-Evolutionist Front Lines. Read Jason Rosenhouse’s previous blog articles.

Subscribe to the OUPblog via email or RSS.

Subscribe to only mathematics articles on the OUPblog via email or RSS.

*Image credit: Complex formulae on a blackboard. © JordiDelgado via iStockphoto. *

The post The *real* unsolved problems of mathematics appeared first on OUPblog.

Almost exactly twenty years ago, on 19 October 1993, the US House of Representatives voted 264 to 159 to reject further financing for the Superconducting Super Collider (SSC), the particle accelerator being built under Texas. $2bn had already been spent on the Collider, and its estimated total cost had grown from $4.4bn to $11bn; a budget saving of $9bn beckoned. Later that month President Clinton signed the bill officially terminating the project.

The post The legacy of the Superconducting Super Collider appeared first on OUPblog.

]]>

Almost exactly 20 years ago, on 19 October 1993, the US House of Representatives voted 264 to 159 to reject further financing for the Superconducting Super Collider (SSC), the particle accelerator being built under Texas. Two billion dollars had already been spent on the Collider, and its estimated total cost had grown from $4.4bn to $11bn; a budget saving of $9bn beckoned. Later that month President Clinton signed the bill officially terminating the project.

This was not good news for two of my Harvard roommates, PhD students in theoretical physics. Seeing the academic job market for physicists collapsing around them, they both found employment at a large investment bank in New York in the nascent field of quantitative finance. It was their assertion that derivative markets, whatever in fact they were, seemed mathematically challenging that catalyzed my own move to Wall Street from an academic career.

The cohort of PhDs in science, technology, engineering, and mathematics that moved to finance from academia in the early 1990s (a cohort I have called the “SSC generation”) sparked a remarkable growth in the sophistication and complexity of financial markets. They built models which enabled banks and hedge funds to price and trade complex financial instruments called derivatives, contracts whose value derives from the levels of other financial variables, such as the price of the Japanese Yen or a collection of mortgages on apartments in Florida. They created a new subject, known as financial engineering or quantitative finance, and a brand new career path, that of quantitative analyst (“quant”), a vocation that became so popular — for its monetary rewards certainly, but also for its dynamism and innovation — that by June 2008, 28% of graduating Harvard seniors going into full-time employment were heading to finance.

However, just as some investors in 2007-2008 were questioning the inexorable rise in house prices and the potential for a market bubble, so too were many students questioning their own career choices, sensing the possibility of a career bubble. As Harvard University President Drew Faust said in her first address to the senior class in June 2008, “You repeatedly asked me: Why are so many of us going to Wall Street?”

Three months later, both market and career bubbles collapsed as Lehman Brothers filed for bankruptcy. In the midst of the financial crisis, on 3 October 2008, the House of Representatives voted 263 to 171 to pass the Emergency Economic Stabilization Act, authorizing the Treasury secretary to spend $700bn — roughly 65 Super Colliders — to purchase distressed assets.

What went wrong? While the causes of the financial crisis have been widely debated, it is clear that many financial engineers were caught in what I have termed the “quant delusion,” an over-confidence in and over-reliance on mathematical models. The edifice of quantitative finance built over 15 years by the SSC generation was dramatically rocked by the events of 2008. Fundamental logical arguments that practitioners had taken for granted were shown not to hold. Decades of modeling advances were revealed to be invalid or thrown into question.

It is hard to prove a direct causal link between the cancellation of the SSC, the rise of financial engineering, and the chaos of 2008. However, if some roots of the financial crisis can be traced, however distantly, to October 1993, might one consequence of the financial crisis itself be a healthy reassessment of career choices amongst graduates?

I encounter evolving attitudes among students in the class that I teach at Harvard, Statistics 123, “Applied Quantitative Finance”. Many still plan a future on Wall Street, and are motivated by the mathematical challenges and dynamic environment ahead of them. Some are interested in the elegant mathematical and probabilistic theory that underlies derivatives markets, and are keen to understand the way of thinking that exists on Wall Street. Others appreciate that they have a broad range of equally compelling career options, whether in technology, life sciences, climate science, or fundamental research, and take my course simply because they have enjoyed their introduction to probability and want to experience one of its most compelling applications.

Stephen Blyth is Professor of the Practice of Statistics at Harvard University, and Managing Director at the Harvard Management Company. His book, An Introduction to Quantitative Finance, was published by Oxford University Press in November 2013.


*Image credit: graphs and charts. © studiocasper via iStockphoto. *

The post The legacy of the Superconducting Super Collider appeared first on OUPblog.


“This is not maths – maths is about doing calculations, not proving theorems!” So wrote a disaffected student at the end of my recent pure maths lecture course. Theorems, along with their proofs, have gotten a bad name.

The first (and often only) theorem most people encounter is Pythagoras’ Theorem, discovered over 2,500 years ago: if you square the lengths of the two perpendicular sides of a right-angled triangle and add these numbers together, then you get the square of the length of the third side. To many, the name Pythagoras conjures up memories of eccentric maths teachers enthusing over spiders’ webs of lines. Yet, if the writers of the software underlying your computer had not known their Pythagoras and other such theorems, you would not now be viewing this neatly aligned text or navigating around your screen at the touch of a mouse.
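The relationship is easy to verify numerically; here is a minimal Python sketch using the classic 3-4-5 triangle:

```python
import math

# Two perpendicular sides of a right-angled triangle.
a, b = 3.0, 4.0

# Length of the third side (the hypotenuse).
c = math.hypot(a, b)

# Pythagoras: the squares of the two sides sum to the square of the third.
assert a**2 + b**2 == c**2
print(c)  # 5.0
```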

A theorem is the name for an incontrovertible mathematical fact, a statement that is an unavoidable consequence of precisely defined terms or facts that have already been established. Pythagoras’ Theorem follows inexorably from the notions of a straight line, a right angle, and length. A couple of hundred years later, Euclid formulated his theorems or ‘Propositions’ of geometry which became the foundation of western mathematical education for the next 2000 years. My favourite is the Intersecting Chord Theorem: if you draw two intersecting straight lines across a circle and multiply together the lengths of the parts of the chords on either side of the intersection point then you get the same answer for both chords (see diagram). This is a remarkable statement: there seems no obvious reason why it should be so. Yet it is an inevitable consequence of the definition of a circle. Sadly, learning the formal propositions of Euclid by rote, as they were often taught in the past, may have hidden their substance and elegance and turned off many budding mathematicians.
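The equal-products claim can be checked numerically. The sketch below (the function name is my own) places the circle at the origin and computes the two segment lengths of a chord through an interior point; whatever direction the chord takes, the product of the two lengths comes out the same:

```python
import math

def chord_part_lengths(px, py, theta, r=1.0):
    """Lengths of the two chord segments on either side of the interior
    point (px, py), for the chord through that point in direction theta,
    across a circle of radius r centred at the origin."""
    dx, dy = math.cos(theta), math.sin(theta)
    # Points on the chord are (px + t*dx, py + t*dy); substituting into
    # x^2 + y^2 = r^2 gives a quadratic t^2 + 2*b*t + c = 0 (|d| = 1).
    b = px * dx + py * dy
    c = px * px + py * py - r * r
    disc = math.sqrt(b * b - c)
    t1, t2 = -b - disc, -b + disc   # t1 < 0 < t2 for an interior point
    return -t1, t2                  # both segment lengths, positive

# The product of the two parts is the same for every chord through the point:
# it always equals r^2 - |P|^2, here 1 - 0.25 = 0.75 (up to rounding).
p = (0.3, -0.4)
for theta in (0.1, 1.2, 2.5):
    s1, s2 = chord_part_lengths(*p, theta)
    print(round(s1 * s2, 9))  # 0.75 each time
```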

Many further geometrical theorems have been established since Euclid’s days, some with evocative names. The Ham Sandwich Theorem says that given three objects there is always a plane that simultaneously divides each object into two parts of equal volume; thus a sandwich can always be divided by a straight slice so that the bread, butter, and ham are all equally divided between the two portions. Then, according to the Hairy Ball Theorem, it is impossible to comb a sphere covered with hair or fur in such a way that the hairs lie down smoothly everywhere on the sphere. One consequence, perhaps reassuring at times of extreme weather, is that at any instant there is somewhere on the earth’s surface where there is no wind.

The Mandelbrot set has become an icon recognised by many with little or no mathematical knowledge but who have been fascinated by its intriguing beauty. The Fundamental Theorem of the Mandelbrot Set, as it is sometimes called, relates geometrical aspects of this extraordinarily complicated object to the simple formula *z*^{2} + *c*. The theorem was contained in the writings of Pierre Fatou and Gaston Julia back in 1919, but was virtually forgotten until in the mid-1970s Mandelbrot’s computer images revealed the set’s intricate detail. A picture can bring a theorem to life!
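The iteration behind the pictures fits in a few lines. Here is a minimal sketch of the standard escape-time membership test (the 100-step cutoff is a practical approximation, not part of the definition):

```python
def in_mandelbrot(c, max_iter=100):
    """Crude membership test for the Mandelbrot set: iterate z -> z^2 + c
    from z = 0 and report whether the orbit stays bounded (|z| <= 2)
    for max_iter steps."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # the orbit escapes, so c is outside the set
    return True

print(in_mandelbrot(0))    # True: the orbit stays at 0
print(in_mandelbrot(-1))   # True: the orbit cycles 0, -1, 0, -1, ...
print(in_mandelbrot(1))    # False: 0, 1, 2, 5, 26, ... escapes
```

Colouring each point of the plane by how quickly its orbit escapes is what produces the familiar intricate images.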

Of course, not all theorems are about geometry. Some concern properties of numbers; perhaps the most famous is Fermat’s Last Theorem, that the equation *x ^{n}* + *y ^{n}* = *z ^{n}* has no solutions in positive whole numbers when *n* is greater than 2. Fermat claimed a proof around 1637, but the theorem resisted all attempts until Andrew Wiles finally proved it in 1995.
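A brute-force search makes the contrast with squares vivid: for *n* = 2 small solutions abound (the Pythagorean triples), while for higher powers none appear, just as the theorem guarantees. A sketch in Python (the function name and search bounds are illustrative choices):

```python
def fermat_search(n, bound):
    """Brute-force search for solutions of x^n + y^n == z^n
    with 1 <= x <= y and z <= bound."""
    powers = {z ** n: z for z in range(1, bound + 1)}
    hits = []
    for x in range(1, bound + 1):
        for y in range(x, bound + 1):
            s = x ** n + y ** n
            if s in powers:
                hits.append((x, y, powers[s]))
    return hits

print(fermat_search(2, 20))   # Pythagorean triples: (3, 4, 5), (5, 12, 13), ...
print(fermat_search(3, 200))  # [] -- no solutions, as the theorem guarantees
```

No finite search can prove the theorem, of course; that is exactly why a proof is needed.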

Theorems are the pillars of mathematics. New theorems, often building on the foundations of earlier ones, are continually being proved. Yes, some may be esoteric, but others have been fundamental in the development of things that we take for granted, such as Stokes’ Theorem for electronic communication and fluid flow. And, though I obviously failed to convince my student, they are the basis for many of the calculations undertaken daily by scientists and engineers.

Kenneth Falconer is author of Fractals: A Very Short Introduction and Fractal Geometry: Mathematical Foundations and Applications (Wiley, 2014). He has been Professor of Pure Mathematics at the University of St Andrews since 1993.

The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with OUPblog and the VSI series every Friday and like Very Short Introductions on Facebook.

Subscribe to Very Short Introductions articles on the OUPblog via email or RSS.

*Image credits: 1) Figure drawn by author; 2) Image computed by Ben Falconer*

The post Let them eat theorems! appeared first on OUPblog.
