The post Diamonds are forever, and so are mathematical truths? appeared first on OUPblog.

Over the next few weeks, Leo Corry, author of *A Brief History of Numbers*, helps us understand how the history of mathematics can be harnessed to develop modern-day applications. In this first post, he explores how differing perceptions of truth in mathematics and history affect the study of those subjects.

Try googling ‘mathematical gem.’ I just got 465,000 results. Quite a lot. Indeed, the metaphor of mathematical ideas as precious little gems is an old one, and it is well known to anyone with a zest for mathematics. A diamond is a little, fully transparent structure, all of whose parts can be observed with awe from any angle. It is breathtaking in its beauty, yet at the same time powerful and virtually indestructible. This description applies equally well to so many pieces of mathematical knowledge: proofs, formulae, and algorithms.

Leonhard Euler, for instance, was the greatest mathematician of the eighteenth century and we associate his name nowadays with many beautiful mathematical gems. Think of the so-called Euler Formula: *V – E + F* = 2. This concise expression embodies a surprising property of any convex polyhedron, of which *V* represents the number of its vertices, *E* of its edges, and *F* of its faces. But probably the most famous gem associated with his name is the so-called ‘Euler identity’:

*e^{iπ}* + 1 = 0.

Beyond its mathematical importance, this identity is remarkable for how widely it is known and praised, above all, for its beauty: “the most beautiful equation of maths”, we read in various places. A most impressive diamond!
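Both of these gems are easy to check numerically. Here is a quick sketch in Python, using the standard vertex, edge, and face counts of the five Platonic solids for the polyhedron formula, and confirming the identity up to floating-point rounding:

```python
import cmath

# Euler's polyhedron formula: V - E + F = 2 for any convex polyhedron.
# (V, E, F) counts for the five Platonic solids.
solids = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron": (12, 30, 20),
}
for name, (v, e, f) in solids.items():
    assert v - e + f == 2, name

# Euler's identity: e^(i*pi) + 1 = 0, true here up to rounding error.
residue = cmath.exp(1j * cmath.pi) + 1
print(abs(residue) < 1e-12)  # True
```

The tiny nonzero residue is an artefact of floating-point π, not of the identity itself.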

But we can compare mathematical ideas to diamonds not only in terms of beauty. Diamonds are also, as you surely remember from the James Bond film, forever. And so are proved mathematical results. Indeed, the theme song of the Bond film defines very aptly, I think, the way in which mathematicians relate to those ideas with which they become involved and invest their best efforts for long periods of time:

Diamonds are forever,

Hold one up and then caress it,

Touch it, stroke it and undress it,

I can see every part,

Nothing hides in the heart to hurt me.

— Shirley Bassey, *Diamonds Are Forever*

Of course, before reaching the point where mathematical ideas become diamonds, likely to remain forever, there is a period of groping in the dark. This period may sometimes be long and the dark may be deep, before light is finally turned on and the diamond becomes transparent. You can then touch it, stroke it and undress it, and you will truly understand the necessary interconnection between all of its parts.

In a recent TED video, the Spanish mathematician Eduardo Sáenz de Cabezón tells his audience that “if you want to tell someone that you will love her forever you can give her a diamond. But if you want to tell her that you’ll love her *forever and ever*, give her a theorem!” (Unfortunately, in spite of the accompanying English subtitles, his most successful jokes are lost in translation from Spanish.)

And so, it is the eternal character of mathematical truths and the unanimity of mathematicians about them that sets mathematics apart from almost all other endeavors of human knowledge. This unique character of mathematics as a system of knowledge may be stressed even more sharply by comparison to another discipline, like history for example. At its core, mathematical knowledge deals with certain, necessary, and universal truths. True mathematical statements do not depend on contextual considerations, either in time or in geographical location. Generally speaking, established mathematical statements are considered to be beyond dispute or interpretation.

The discipline of history, on the contrary, deals with the particular, the contingent, and the idiosyncratic. It deals with events that happened in a particular location at a particular point in time, and events that happened in a certain way but could have happened otherwise. Historical statements are always partial, debatable, and open to interpretation. Arguments put forward by historians keep changing with time. ‘Thinking historically’ and ‘thinking mathematically’, then, are clearly two different things.

For historians of mathematics, the comparison between mathematics and history, as two different ways of thought and as two different kinds of academic disciplines, is an important issue. Historians in general do not see just providing an account of “one damn thing after the other” (as the phrase often attributed to Arnold Toynbee goes) as the aim of their intellectual pursuit. Historians of mathematics, in turn, do not see the aims of their pursuits as just providing a chronology of discoveries. What does it mean, then, to think historically about the ways in which people have been ‘thinking mathematically’ throughout history, and about the processes of change that have affected these ways of thinking? If mathematics deals with universal truths, how can we speak about mathematics from a historical perspective (other than to establish the chronology of certain discoveries)? What is it that changes through time in a discipline whose truths are, apparently, eternal?

“Who was the first to discover the formula for the quadratic equation?” That’s not really the kind of question that historians of mathematics concern themselves with. In fact, rather than “Who was the first to discover X?” we may find it more interesting to investigate a question such as “Who was the *last* person to discover X?” This latter question involves the understanding that, in spite of the eternal character of mathematical results, there is still a lot to be said about the way in which mathematical ideas develop and are understood throughout history. It suggests that something that was mathematically proved at some point was not considered to be so at a later time, and that mathematicians are not always aware of, or do not always care about, the existence of a proof that later becomes interesting and relevant. It also suggests that it makes historical sense to try to understand the circumstances of this change in mathematical values. The question also implies that only at a certain point in time did a mathematical proof become so fundamentally convincing that it impressed upon that result a stamp of *eternal* validity. Or, at least, temporarily so.

*Featured image: Isfahan Lotfollah mosque ceiling symmetric by Phillip Maiwald. CC BY-SA 3.0 via Wikimedia Commons.*


The post Yes, maths can be for the amateur too appeared first on OUPblog.

Over the past weeks Snezana Lawrence, co-author of *Mathematicians and their Gods*, has taken us on a summer journey through the beauty of mathematics, including secret maths witches and wizards, and trying to answer the question: will we ever need maths after school? In this last post, Snezana tells the story of bright amateurs in mathematics who had a great influence on scientific discoveries, from multidimensionality to the fourth dimension.

A friend of mine picked an argument with me the other day about how people go on about the beauty of mathematics, but this beauty is not only unobvious to non-mathematicians, it cannot be accessed at all by those outside the field. Unlike modern art, for example, which is also not always obvious, mathematical beauty is elusive to all but mathematicians. Or so he said. He mused further that a non-mathematician can never bring anything new to mathematics, unlike art: in art, from time to time, you get shifts and paradigm changes, contributions from people who don’t necessarily belong to the old establishment but who bring a new insight that may change art and influence further developments at a profound level. This is how movements in art happen, and so, the thinking goes, there is always the possibility of an outsider making a contribution that brings about such a change or shift. This, in effect, means that people generally engage with art on a much greater scale, as there is always the potential of contributing to it.

Is this really the case? Is mathematics really a discipline so insular that no amateur or admirer can ever play a role in its development? I racked my brains to come up with a counter-example, hoping that, at the very least, I would be able to persuade him of the beauty of mathematical techniques. My master plan was to use an argument of the form *reductio ad absurdum*, an old trick that would finish with an ‘aha!’ on my part.

But I think it ended up many dimensions nicer than that. I came up with a concept I have investigated recently: that of amateurs making not only a huge contribution to the field, but actually enriching the view of mathematics itself. There was one development, at the turn of the 20^{th} century, that involved mathematicians and non-mathematicians alike: the development of the concept of multidimensionality. I am talking about, on one side, mathematicians such as Bernhard Riemann and Hermann Günther Grassmann, whose work and lives were devoted to the development of this concept. On the other hand, there were others whose work included philosophizing on multidimensionality: people such as Edwin Abbott Abbott, the Shakespearean scholar and London schoolmaster who wrote one of the most famous and popular novellas of all time, *Flatland*, and Alicia Boole-Stott, whose three-dimensional models of four-dimensional polytopes contributed immensely to the development of mathematics. Alicia had no official mathematical education, apart from being a daughter of the famous George Boole (but she earned an honorary doctorate from the University of Groningen in 1914).

How did this happen? By very unorthodox means, in fact. Abbott wrote *Flatland* at the time when he was still working at the City of London School and lived in Marylebone. Mary, Alicia’s mother, also lived in Marylebone and wrote mathematics books for children, while also contributing to theology and science. Her interests extended to Darwinian theory, philosophy, and psychology, and she organized discussion groups on all of these from time to time. Mary Boole was also at one time a personal secretary to James Hinton, a famous spiritualist, whose son Charles wrote some very interesting books on the fourth dimension, invented the term ‘tesseract’, and also married Mary’s oldest daughter, also called Mary. Charles apparently believed in the multi-dimensionality of time too, which may explain his bigamous marriage within three years of marrying Mary (to whom he later returned). He taught Alicia to visualize four-dimensional polytopes (a polytope is the generalization of a polygon or polyhedron to any number of dimensions – in two dimensions polytopes are triangles, squares, and so on; in three dimensions they are cubes, octahedra, and so on) as they would pass through the third dimension. There is strong circumstantial evidence that spiritualism linked all of these people together: the belief that there is some other, higher dimension from which the dimensions of our world could be seen all at once. Mary Boole and Edwin Abbott Abbott even wrote apologies about spiritualism, none of course forthcoming from Charles Hinton.

So there! I managed to come across a group of people who were not mathematicians – they had links to the subject, in their various ways, but none were actually mathematicians, apart from the weirdest of them all, Charles Hinton, who ended his life as a mathematics instructor at Princeton University. Yet their work, both their writing and their social involvement and communication, had a lasting influence on the development of the concept of the fourth dimension in mathematics, and from there, on the concept of multidimensionality.

Perhaps this type of mathematics comes very close to abstract art, but so be it. We can all enjoy the many representations of the tesseract, the word Charles Hinton coined, and Alicia Boole-Stott so beautifully represented with her many models. And we can certainly attempt to venture from the world of *Flatland* to the world of the fourth, and many more dimensions.

*Featured image credit: Math Castle by Gabriel Molina. CC-BY-2.0 via Flickr.*


The post Sects, witches, and wizards-from Pythagoreans to Kepler appeared first on OUPblog.

Over the next few weeks, Snezana Lawrence, co-author of *Mathematicians and their Gods*, is taking us on a summer journey through the beauty of mathematics, trying to answer the question: will we ever need maths after school? In this second post, Snezana discusses popular perceptions of mathematics, from the Pythagorean sect to the various interpretations of supposed numerical values and hidden messages in the Bible.

As the summer is safely on its way (certainly in Sicily, where I write this on a terrace of one of its grand hotels) I think of the topics to discuss with friends and acquaintances over a glass of Prosecco when the rest of the party joins me in a week. To start a conversation based on mathematics may seem to some to be one of the tasks inevitably converging towards the plot-line of *Mission Impossible*. Well, certainly there are more pressing things that would occupy people’s minds, concerning international politics, the future of Europe, and the future of the Middle East. What’s new? These topics have occupied people’s minds for centuries. And inevitably there will be calls (as the amount of Prosecco increases) to discuss plots, point to patterns, and make assumptions.

But there is, unfortunately, a similarity here with some popular perceptions of mathematics. From the Pythagorean sect, via various interpretations of the supposed numerical values and hidden messages in the Bible and other major religions’ sacred texts, to the phenomenon of ‘sacred geometry’ and the patterns upon which cities and institutions are built – the history of mathematics too has some questions that may be of interest in exploring a similar type of sentiment.

One response that answers all those questions wouldn’t satisfy anyone. So what examples can one come up with? Without going into much detail, one can mention that mathematicians, too, have at times been wrong – not completely, but a bit. Take, for example, Johannes Kepler, whose mathematical model of the universe was, for want of a better word, perfected over a period of time. One of his most surprising inventions came about while he taught mathematics at a school in Graz early in his career. He pondered: what if the five Platonic solids are indeed some kind of blueprint upon which the universe is made, as Plato suggested centuries earlier?

Plato, who discussed these solids in *Timaeus* (c. 360 BC), associated them with the four classical elements – earth was represented by the cube, air by the octahedron, water by the icosahedron, and fire by the tetrahedron. The fifth element (yes, just like *The Fifth Element*, the 1997 film by the French director Luc Besson), the dodecahedron, Plato thought, must have been used for arranging the constellations of the heavens. Furthermore, in the history of mathematics and philosophy, it was often identified as the element denoting the divine spark, the principle of attraction, and the force that made all other elements come to life.

Fast forward to 1595, when Kepler worked on the Platonic solids and used them to make a model of the universe in his now famous book *Mysterium cosmographicum* (1596), illustrating the work with one of the most famous images in the history of science and mathematics. The image shows each Platonic solid encased in a sphere, inscribed in a further solid, encased in another sphere, which Kepler identified with the six then-known planets: Mercury, Venus, Earth, Mars, Jupiter, and Saturn. Kepler described how the spheres containing the solids are placed at intervals corresponding to the sizes of each planet’s path (as they were then known), assuming that they circled the Sun.

Of course, later, whilst living in Prague, Kepler found that the orbital paths of the planets of the Solar System were not circular but elliptical; yet this beautiful model, even though not perfectly accurate, gave him an impetus for further research.

Kepler’s work on planetary motions and modeling the Solar System was based on his deep religiosity and theological convictions, connecting spiritual and physical ideas and imagery in his work. Kepler’s lesser-known work is his *Somnium*, a novel about an imaginary journey that depicts his flight around the Solar System, guided by his mother. The novel was published posthumously – and not surprisingly, as his own mother, Katharina Kepler, was accused of witchcraft in 1617. She was imprisoned and released in 1621, thanks partly to Kepler’s own efforts and involvement in the trial.

So what is one to make of this information? Was Kepler’s mother a witch, and was he a wizard of a kind? Is that how he worked out his laws of planetary motion? Of course our common sense takes over at this point. However, it is worth pointing out that it is much easier for common sense to do so with some time between us and Kepler, during which witches and wizards have safely made the transition from reality into fiction.

So, to get back to Sicily, my grand hotel’s balcony, and the Prosecco reception. Revisit the potential mathematical conversation, and compare the scheming and plotting that can be projected from the example of mathematics – from the secrecy of the Pythagorean sect, to the occult knowledge of Kepler, to the numerology of Newton – with what is currently happening in Europe, the Middle East, and generally in the world… A safe bet would be to get to know the details, actually know the real maths, and make cautious calculations. The history of mathematics teaches us that models are prone to improvement over time, just like Kepler’s model of the universe.

*Featured image credit: books, bookshelf, read. CC0 Public domain via Pixabay. *


The post Will we ever need maths after school? appeared first on OUPblog.

Over the next few weeks, Snezana Lawrence, co-author of *Mathematicians and their Gods*, will be taking us on a summer journey through the beauty of mathematics. In this first post, Snezana gives her personal and professional insight into the long-unanswered question: will we ever need maths after school?

What is the purpose of mathematics? Or, as many a pupil would ask the teacher on a daily basis: “When are we going to need this?” There is a considerably ruder version of the question posed by Billy Connolly on the internet, but let’s not go there.

When I was a teacher some years ago, I tired of the question to such an extent that I bought myself a crystal ball to keep on my desk in the classroom, in case someone dared pose the question again. If that happened, I would put my hands on the ball, look at the ceiling, very studiously wait for everyone to go quiet, and then exclaim some time and date far enough in the future that the pupils would not be able to test it… It worked, actually, but they kept asking just for the sake of the spectacle.

Nevertheless, this is in fact a very serious question. With the students who asked this type of question out of real curiosity, rather than to avoid doing mathematics, I spoke at length about it. Different societies, cultures, and people have debated the very same question and given mathematics very different meanings. Plato, for example, discussed at length how important mathematics is for teaching the principles of thinking. His dialogue *Meno*, in which the slave boy is led to ‘find’ the knowledge and understanding within himself, is a case in point.

In the Middle Ages, questions such as “how many angels can dance on the head of a pin?” (or, as another version has it, how many can sit) were not only theological but also included considerations relating to mathematics in terms of space, dimensions, and extension. In this case, mathematics was given the role of rational explanation.

John Dee – the 16^{th}-century scholar, mathematician, philosopher, magus, spy, and one of the most famous intellectuals of the English Renaissance – gave a different role to mathematics. Dee’s immortal fame was earned by his preface to the first English-language edition of Euclid’s *Elements* in 1570, in which he boldly states that:

“Many other arts also there are which beautify the mind of man: but of all other none do more garnish and beautify it than those arts which are called Mathematical. Unto the knowledge of which no man can attain, without the perfect knowledge and instruction of the principles, grounds, and Elements of Geometry. But perfectly to be instructed in them, requires diligent study and reading of old ancient authors…”

In the same vein, some years later, in 1612, Pitiscus, who is credited with coining the term trigonometry, stated that:

“Nothing makes men more gentle than the cultivation of that heavenly philosophy (mathematics). But, dear God, how rarely is this gentleness a quality of theologians! And how desirable it would be in this century if all theologians were mathematicians, that is, gentle and manageable men!”

Nevertheless, the more utilitarian view of mathematics remained, and it is recorded in various places. During the French Revolution, one of the most revolutionary of mathematicians, Gaspard Monge, who is given the title of father of the École Polytechnique, saw mathematics as a way of training the minds of young people for the prestige of his nation:

“In order to raise the French nation from the position of dependence on foreign industry, in which it has continued to the present time, it is necessary in the first place to direct national education towards an acquaintance with matters which demand exactness, a study which hitherto has been totally neglected…”

An 18^{th}-century Italian mathematician, the Milanese Maria Gaetana Agnesi, saw mathematics as a way of training the mind to concentrate. By sharpening the mind as a mental tool for learning, the mathematical mind also becomes an instrument through which spiritual enlightenment becomes possible.

So you see, mathematics can be given different roles and is assumed to beautify, free, and educate the mind. In these roles, mathematics is also considered to be dangerous; it can train people to think for themselves. Perhaps the most succinct expression I have ever come across in this respect is given by the Christian Orthodox Archbishop Gregorios V, who in 1819 issued a warning to all students and teachers of mathematics: “cubes and triangles, logarithms and symbolic calculus . . . bring apathy . . . jeopardizing our irreproachable faith…”.

So what is mathematics? It assumes different roles and meanings in different cultures, places, and times. The common factor in them all is that this is a universal tradition of abstract thinking which, with study and perhaps some adaptation, can be understood and contributed to universally. As such, it is an international language that, whilst still conditioned by locality, all of humanity is able, if not necessarily to speak, then to understand. To study mathematics is to link to that tradition of humanity.

*Featured image credit: Apis florea nest closeup by Sean Hoyland. Public domain via Wikimedia Commons.*


The post Making sense of mathematics appeared first on OUPblog.

Initially ‘making sense of mathematics’ means what it says, namely to use our senses to organize the patterns we see and to make sense of the operations we perform in arithmetic. As we grow more sophisticated we use language to become more precise about the properties of geometrical figures and of numbers in arithmetic that lead on to algebra and beyond. Making sense of mathematics builds on our experiences and can take us on into a variety of different contexts in adult life.

Sometimes this means making sense of a particular situation and making a mathematical model by formulating principles that arise from the nature of the situation. At the very highest level, Newton thought very deeply about moving bodies and homed in on simple properties that led to his laws of motion. Einstein imagined a thought experiment sitting on a train moving at nearly the speed of light to produce his theory of special relativity. Stephen Hawking thought about the expanding universe to think back in time to when the universe began with a big bang.

For most of us, as our understanding of mathematics grows, we begin with everyday ideas and notice patterns that can be understood in mathematical ways. For instance, in the arithmetic of whole numbers, we might multiply the same number together several times, say 2 × 2 × 2, and write it down as ‘two to the power of three’, symbolized as 2^{3}. Then we see that multiplying 2^{3} by 2^{2} is (2 × 2 × 2) × (2 × 2), so that 2^{3} × 2^{2} = 2^{3+2} = 2^{5}, and we recognize the pattern that may be expressed algebraically as *x^{m}* × *x^{n}* = *x^{m+n}*.

Then we make a leap: what happens if we use this observation in the case where *m*, *n* are negative numbers or fractions? This gives new possibilities such as *x*^{1/2+1/2} = *x*^{1}, suggesting that *x*^{1/2} is the square root of *x*. It leads to a more powerful development in mathematics that is valuable for some, but it can cause serious problems for those who think of *x^{n}* only as multiplying *x* by itself *n* times.

Such changes in meaning happen more often than we may realize. For example, in simple arithmetic, taking something away gives a smaller answer. But when negative numbers are introduced, taking away a negative number gives a bigger result. Squaring a non-zero number gives a positive result, but introducing complex numbers gives i^{2} = –1.
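These leaps in meaning can be watched happening in a few lines of Python, where the same `**` operator quietly covers whole-number powers, fractional powers, and complex arithmetic:

```python
import math

# The additive law of exponents for whole numbers: x**m * x**n == x**(m+n).
x = 2
assert x**3 * x**2 == x**5          # (2*2*2) * (2*2) = 2**(3+2)

# Extending the law to fractions forces x**(1/2) to be the square root,
# since x**(1/2) * x**(1/2) must equal x**1 = x.
assert math.isclose(9 ** 0.5, 3.0)

# Extending squaring to complex numbers gives i**2 == -1.
assert 1j ** 2 == -1
```

Each assertion holds precisely because the extended meaning was chosen to keep the old pattern intact.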

As mathematics becomes more sophisticated, natural mathematics, based on our human experience and imagination, needs to take account of new ways of thinking. A new kind of formal mathematics evolves that is encountered by students in pure mathematics at university. Mathematics in a particular context is presented in terms of assumed properties (axioms) from which other properties are deduced (as theorems). Many such theories have already been invented: group theory, vector spaces, mathematical analysis, algebraic number theory, and so on. Undergraduate pure mathematics introduces these theories and involves the student in making sense of the deductive practices to build a coherent theory in each topic. This has the advantage that properties proved as theorems now remain true not only in familiar situations but also in any new situation where the axioms are satisfied.

Making sense of formal mathematics is not just a one-way process that starts with axioms and proves theorems that can be used in applications. It also works in the reverse direction. Special theorems (called structure theorems) may be proved to show that a given axiomatic system has properties that can be sensed visually through drawing pictures and operationally using operations formulated in the axioms. This links formal theories back to natural ways of using our human senses and operations, now operating at a more sophisticated level supported by the formal theory.

Making sense of mathematics in applications — in physics, engineering, economics, business studies, weather prediction, and so on — involves translating the particular characteristics of the context and formulating mathematical models to solve problems and to construct more sophisticated theories with new applications.

Currently we are experiencing an amazing explosion of technology that grows in sophistication at an enormous pace. Pure mathematics builds ever broadening formal theories, requiring more subtle theoretical foundations to support ever-widening branches of theory and practice. As the tree of mathematical knowledge grows a greater superstructure, it also needs to strengthen its foundational roots.

Sense making in mathematics as a whole is therefore not a static state of understanding. Each of us needs to find our own way of progressing in mathematics for our own purposes. Sometimes this may involve learning what to do to cope with a given situation; however, in the longer term it is more profitable to make an effort to make sense of mathematics in successive new contexts. This may involve sufficient insight to operate in a given social environment, to deal with a particular topic in a technical context, or to pursue a formal interest in pure mathematics. Mathematics as a whole builds from our human perception and operation, becoming more sophisticated through the development of language and formal theories that evolve in both theory and practice.


The post The Erdős number appeared first on OUPblog.

The mathematical equivalent of the Bacon number is the “Erdős number”. Paul Erdős (1913–1996) was the most prolific mathematician of recent times, with more than 1,500 papers written with more than 500 co-authors. The Erdős number describes how close you are to Paul Erdős in terms of mathematical publications. So, for example, Robin Wilson has an Erdős number of 1 because he co-authored a paper with Erdős, whereas John Watkins has an Erdős number of 2 because he co-authored a paper with Robin Wilson (incidentally, he also co-authored a paper with Peter Cameron, who likewise has an Erdős number of 1). Even the physicist Albert Einstein had an Erdős number of 2, though this is hardly his greatest claim to fame.
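The Erdős number is just a shortest-path distance in the co-authorship graph, so it can be computed with a breadth-first search. A minimal sketch, using a toy graph built only from the collaborations mentioned above (the edges are illustrative, not a real bibliographic database):

```python
from collections import deque

# A toy co-authorship graph: person -> list of co-authors.
coauthors = {
    "Erdős": ["Robin Wilson", "Peter Cameron"],
    "Robin Wilson": ["Erdős", "John Watkins"],
    "Peter Cameron": ["Erdős", "John Watkins"],
    "John Watkins": ["Robin Wilson", "Peter Cameron"],
}

def erdos_number(start, graph):
    """Breadth-first search: shortest co-authorship distance to Erdős."""
    if start == "Erdős":
        return 0
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        for coauthor in graph.get(person, []):
            if coauthor == "Erdős":
                return dist + 1
            if coauthor not in seen:
                seen.add(coauthor)
                queue.append((coauthor, dist + 1))
    return float("inf")  # no chain of papers links them

print(erdos_number("John Watkins", coauthors))  # 2
```

The `float("inf")` return value mirrors the convention that a mathematician with no chain of collaborations back to Erdős has an Erdős number of “infinity”.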

Having a low Erdős number is a matter of great pride for mathematicians. The highest known is 7, although there are also mathematicians who had no connection with Erdős and whose Erdős number is defined as “infinity”. Erdős was so open to working with other mathematicians that it will forever be a deep regret for those of us whose Erdős number is greater than 1 that we never collaborated on a paper with him. Even the great Gian-Carlo Rota shared this regret, and recalled an evening when he mentioned to Erdős a problem he was working on and Paul provided a hint that eventually led to a complete solution. While Erdős was appropriately thanked in the paper’s introduction, Rota always regretted that he did not include Erdős as a co-author.

Erdős was indeed a genuine mathematical prodigy, and at the age of 19 he gave a new and gorgeously simple proof of a well-known theorem about numbers: between any number *n* greater than 1 and its double 2*n* there is a prime number. This was his very first mathematical paper.
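The theorem in question is Bertrand’s postulate, and while Erdős’s proof is the gem, the statement itself is easy to test by brute force. A quick sketch:

```python
def is_prime(k):
    """Trial division, fine for small numbers."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def bertrand_holds(n):
    """Is there a prime p with n < p < 2n?  (Erdős proved yes for n > 1.)"""
    return any(is_prime(p) for p in range(n + 1, 2 * n))

print(all(bertrand_holds(n) for n in range(2, 1000)))  # True
```

Checking a thousand cases is of course no substitute for a proof, which is exactly the point of the previous post’s diamonds.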

His supposed obsession with mathematics to the exclusion of anything else in life is now legendary. With no real home base, he traveled the world, staying with friends, visiting math departments, and attending mathematical conferences. He always looked the same, in a suit and a white shirt with an open collar. But, inevitably, he could always be found seated on a couch talking to someone about a mathematical problem.

At conferences he almost always gave a version of a one-hour talk he called “open problems”, in which, without notes, he would discuss in great detail the current open mathematical problems he was interested in. For many of these problems he would offer monetary rewards for solutions: $100 for a fairly routine problem, or perhaps $1,000 or more for a problem he considered especially difficult or important. He knew he would never be able to pay for solutions to *all* of these problems if they were actually solved, but he also knew that most of them would remain unsolved during his lifetime.

There are countless anecdotes that capture the spirit of Paul Erdős. He could be whimsical: at one conference he announced that he was 81 and most likely a square for the last time. Another time he visited a friend at Santa Clara University in California and upon arrival asked his host “what was the temperature in this valley during the Ice Age?” But, the best stories are from his closest mathematical friends with whom he stayed throughout the years. Many of these have a central theme: at some point in the early morning, about 2 or 3 am, Paul would wander into their bedroom, and with no preamble whatsoever, say something like “about the problem we were discussing last night, what if …”.

The famous neurologist and writer Oliver Sacks said of Paul Erdős: “a mathematical genius of the first order, Paul Erdős was totally obsessed with his subject — he thought and wrote mathematics for nineteen hours a day until the day he died.”

*Featured image credit: At the math grad house by kimmanleyort. CC by ND 2.0 via Flickr.*

The post The Erdős number appeared first on OUPblog.

The post Putting two and two together appeared first on OUPblog.

Let’s start with an easy one. It doesn’t take a mathematical whiz to know that 2 + 2 = 4 and that’s indeed the heart of this expression. To *put two and two together* is used to mean ‘draw an obvious conclusion from what is known or evident’. Conversely, if you say that somebody might *put two and two together and make five*, you’re suggesting that they are attempting to draw a plausible conclusion from what is known and evident, but that their conclusion is ultimately incorrect. *2 + 2 = 5* was famously used in George Orwell’s *Nineteen Eighty-Four* as an example of a dogma that seems obviously false, but which the totalitarian Party of the novel may require the population to believe: ‘In the end the Party would announce that two and two made five, and you would have to believe it.’

I remember a moment of surprise in the middle of one of my mathematics A-Level classes. It was a nice change from the almost unbroken moments of bewilderment that characterized the experience. It came when we were looking at the equations describing what happens when something rotating on an axis suddenly stops. Well, guess what? It goes off at a tangent.

So, what is a tangent? It’s a straight line that touches a curve at a point, but (when extended) does not cross it at that point. (It’s also apparently ‘the trigonometric function that is equal to the ratio of the sides [other than the hypotenuse] opposite and adjacent to an angle in a right-angled triangle’, but the less said about that the better.)

In common parlance, of course, it simply means ‘a completely different line of thought or action’. While we’re mentioning the *hypotenuse*, you may well recall that it is ‘the longest side of a right-angled triangle, opposite the right angle’, but may not know the word’s origin: it ultimately comes from the Greek verb *hupoteinein*, from *hupo *‘under’ + *teinein *‘stretch’.

In a fit of pique, you might have described a person or a group as *the lowest common denominator*. It is often said in a derogatory way to mean ‘the level of the least discriminating audience’; for example, ‘they were accused of pandering to the lowest common denominator of public taste’. But what actually *is* a denominator?

Cast your mind back to the world of fractions–specifically vulgar fractions, or those that are expressed by one number over another, rather than decimally. The number above the line is the *numerator* and the number below the line is the *denominator*. In ½, for instance, the numerator is 1 and the denominator is 2. In mathematics, the *lowest common denominator* is ‘the lowest common multiple of the denominators of several vulgar fractions’. For instance, the lowest common denominator of 2/5 and 1/3 is 15, as that is the lowest common multiple of the denominators 5 and 3; the fractions would become 6/15 and 5/15 respectively. It isn’t entirely clear how this sense transferred to the broader, non-mathematical sense.
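In code, finding the lowest common denominator is just a least-common-multiple computation over the denominators (an illustrative snippet reproducing the 2/5 and 1/3 example above):

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    """Least common multiple of two positive integers."""
    return a * b // gcd(a, b)

def lowest_common_denominator(denominators):
    """Fold lcm over all the denominators."""
    return reduce(lcm, denominators)

# The example from the text: 2/5 and 1/3
lcd = lowest_common_denominator([5, 3])
print(lcd)                             # 15
print(2 * (lcd // 5), 1 * (lcd // 3))  # 6 5  -> the rescaled fractions 6/15 and 5/15
```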

*Image Credit: “Math Castle” by Gabriel Molina. CC by 2.0 via Flickr.*

A version of this blog post first appeared on the OxfordWords blog.


The post How do we protect ourselves from cybercrime? appeared first on OUPblog.

We seem to see new reports of hacking every week, ranging from the social media profiles of Taylor Swift and Centcom, to the email accounts of Sony, to at-home security gadgets such as baby monitors. Certain types of hacking, such as using a public hotspot to access information on someone else’s computer, are mere child’s play, as demonstrated by this 7-year-old. Such public hotspots are used in hundreds of thousands of restaurants, hotels, and other locations throughout the UK. So how do we – companies, institutions, and individuals – protect ourselves from cybercrimes?

Computer-based systems that store and process confidential, sensitive, and private information are vulnerable to attacks exploiting weaknesses at the technical, social, and policy level. Attacks may seek to compromise the confidentiality, integrity, or availability of the information, as well as violate the privacy of the information’s owners and stakeholders.

One reason why achieving cybersecurity is so hard in practice is that systems are often designed in isolation, but operate as parts of a broader ecosystem. In such an environment, delivering complex sets of services, the defenders may be less interested in the security of a particular system and more in the overall sustainability and resilience of the ecosystem. Systems across sectors – financial, transport, retail, health, communications, etc – are massively interconnected. Vulnerabilities in systems in one sector – that may be exploited by criminals, terrorists, nation-states – may lead to critical failures in others.

The extent of the threat to the information ecosystems upon which modern societies depend, and the scale of the required response, is increasingly being recognised by major governments, with substantial research and development funds being made available. Moreover, the solutions to cybersecurity problems also span the technical and policy layers.

Understanding how these ecosystems operate requires an interdisciplinary approach: computer scientists to design the software and networks; cryptographers to protect confidentiality of communications; economists to explain how the competing incentives of stakeholders might play out; anthropologists to explain cultural contexts and how they impact solutions; psychologists to explain how decisions are made and the impact on system design; the legal and policy scholars to set out regulatory constraints; criminologists and crime scientists to explain the motivation of perpetrators; and experts in strategy to frame the international context. Consequently, cybersecurity research cannot remain siloed. Instead, rigorous, interdisciplinary scholarship that incorporates multiple perspectives is required.

Future successes in cybersecurity policy and practice will depend on dialogue, knowledge transfer, and collaboration.

*Image credit: Security. Public Domain via Pixabay. *



The post That’s relativity appeared first on OUPblog.

Still with me? Excellent.

Some of you may know that Sir Roger developed much of modern black hole theory with his collaborator, Stephen Hawking, and at the heart of *Interstellar* lies a very unusual black hole. Straightaway, I asked Sir Roger if he’d seen the film. What’s unusual about Gargantua, the black hole in *Interstellar*, is that it’s scientifically accurate, computer-modeled using Einstein’s field equations from General Relativity.

Scientists reckon they spend far too much time applying for funding and, as a consequence, far too little thinking about their research. And, generally, scientific budgets are dwarfed by those of Hollywood movies. To give you an idea, Alfonso Cuarón actually told me he briefly considered filming *Gravity* in space, and that was for what’s officially classed as an “independent” movie. For the big-budget studio blockbuster *Interstellar*, Kip Thorne, scientific advisor to Nolan and Caltech’s “Feynman Professor of Theoretical Physics”, seized his opportunity, making use of Nolan’s millions to see what a real black hole actually looks like. He wasn’t disappointed, and neither was the director, who decided to use the real thing in his movie without tweaks.

Black holes are so called because their gravitational fields are so strong that not even light can escape them. Originally, we thought these would be dark areas of the sky, blacker than space itself, meaning future starship captains might fall into them unawares. Nowadays we know the opposite is true – gravitational forces acting on the material spiralling into the black hole heat it to such high temperatures that it shines super-bright, forming a glowing “accretion disk”.

The computer program the visual effects team created revealed a curious rainbowed halo surrounding Gargantua’s accretion disk. At first they and Thorne presumed it was a glitch, but careful analysis revealed it was behavior buried in Einstein’s equations all along – the result of gravitational lensing. The movie had discovered a new scientific phenomenon and at least two academic papers will result: one aimed at the computer graphics community and the other for astrophysicists.

I knew Sir Roger would want to see the movie because there’s a long scene where you, the viewer, fly over the accretion disk–not something made up to look good for the IMAX audience (you *have* to see this in full IMAX) but our very best prediction of what a real black hole should look like. I was blown away.

Some parts of the movie are a little cringeworthy, not least the oft-repeated line, “that’s relativity”. But there’s a reason for the characters spelling this out. As well as accurately modeling the black hole, the plot requires relativistic “time dilation”. Even though every physicist has known how to travel in time for over a century (go very fast or enter a very strong gravitational field) the general public don’t seem to have cottoned on.

Most people don’t understand relativity, but they’re not alone. As a science editor, I’m privileged to meet many of the world’s most brilliant people. Early in my publishing career I was befriended by Subrahmanyan Chandrasekhar, after whom the Chandra space telescope is now named. Penrose and Hawking built on Chandra’s groundbreaking work, for which he received the Nobel Prize; his *The Mathematical Theory of Black Holes* (1983) is still in print and going strong.

When visiting Oxford from Chicago in the 1990s, Chandra and his wife Lalitha would come to my apartment for tea and we’d talk physics and cosmology. In one of my favorite memories he leant across the table and said, “Keith – Einstein never actually understood relativity”. Quite a bold statement and remarkably, one that Chandra’s own brilliance could end up rebutting.

Space is big – mind-bogglingly so once you start to think about it, but we only know how big because of Chandra. When a giant sun ends its life, it goes supernova – an explosion so bright it outshines all the billions of stars in its home galaxy combined. Chandra deduced that certain supernovae (called “Type Ia”) will blaze with near-identical brightness. Comparing the actual brightness with however bright it appears through our telescopes tells us how far away it is. Measuring distances is one of the hardest things in astronomy, but Chandra gave us an ingenious yardstick for the Universe.
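The trick behind such a “standard candle” is the inverse-square law: a source of known luminosity *L* observed with flux *F* satisfies *F* = *L*/(4π*d*²), so the distance *d* follows directly. A minimal sketch (illustrative units, not Chandra’s actual calculation):

```python
from math import pi, sqrt

def distance_from_candle(luminosity, flux):
    """Invert flux = luminosity / (4*pi*distance**2)."""
    return sqrt(luminosity / (4 * pi * flux))

# Two identical candles: one appears 100x fainter, so it is 10x farther away.
L = 1.0
flux_near = L / (4 * pi * 1.0 ** 2)
print(round(distance_from_candle(L, flux_near), 6))         # 1.0
print(round(distance_from_candle(L, flux_near / 100), 6))   # 10.0
```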

In 1998, astrophysicists were observing Type Ia supernovae that were a *very* long way away. Everyone’s heard of the Big Bang, the moment of creation of the Universe; even today, more than 13 billion years later, galaxies continue to rush apart from each other. The purpose of this experiment was to determine how much this rate of expansion was slowing down, due to gravity pulling the Universe back together. It turns out that the expansion’s speeding up. The results stunned the scientific world, led to Nobel Prizes, and gave us an anti-gravitational “force” christened “dark energy”. It also proved Einstein right (sort of) and, perhaps for the only time in his life, Chandra wrong.

The reason Chandra told me Einstein was wrong was something Einstein himself called his “greatest mistake”. When relativity was first conceived, it was before Edwin Hubble (after whom another space telescope is named) had discovered that space itself was expanding. Seeing that the solutions of his equations would inevitably mean the collapse of everything in the Universe into some “big crunch”, Einstein devised the “cosmological constant” to prevent this from happening – an anti-gravitational term to maintain the presumed status quo.

Once Hubble released his findings, Einstein felt he’d made a dreadful error, as did most astrophysicists. However, the discovery of dark energy has changed all that and Einstein’s greatest mistake could yet prove an accidental triumph.

Of course Chandra knew Einstein understood relativity better than almost anyone on the planet, but it frustrates me that many people have so little grasp of this most beautiful and brilliant temple of science. Well done, Christopher Nolan, for trying to put that right.

*Interstellar* is an ambitious movie – I’d call it “Nolan’s *2001*” – and it educates as well as entertains. While Matthew McConaughey barely ages in the movie, his young daughter lives to a ripe old age, all based on what we know to be true. Some reviewers have criticized the ending – something I thought I wouldn’t spoil for Sir Roger. Can you get useful information back out of a black hole? Hawking has changed his mind, now believing such a thing is possible, whereas Penrose remains convinced it cannot be done.

We don’t have all the answers, but whichever one of these giants of the field is right, Nolan has produced a thought-provoking and visually spectacular film.

*Image Credit: “Best-Ever Snapshot of a Black Hole’s Jets.” Photo by NASA Goddard Space Flight Center. CC by 2.0 via Flickr.*


The post Five tips for women and girls pursuing STEM careers appeared first on OUPblog.

**(1) Be open to discussing your research with interested people.**

From in-depth discussions at conferences in your field to a quick catch up with a passing colleague, it can be endlessly beneficial to bounce your ideas off a range of people. New insights can help you to better understand your own ideas.

**(2) Explore research problems outside of your own. **

Looking at problems from multiple viewpoints can add huge value to your original work. Explore peripheral work, look into the work of your colleagues, and read about the achievements of people whose work has influenced your own. New information has never been so discoverable and accessible as it is today. So, go forth and hunt!

**(3) Collaborate with people from different backgrounds.**

The chance of two people having read exactly the same works in their lifetime is vanishingly small, so teaming up with others is guaranteed to bring you new ideas and perspectives you might never have found alone.

**(4) Make sure your research is fun and fulfilling.**

As with any line of work, if it stops being enjoyable, your performance can be at risk. Even highly self-motivated people have off days, so look for new ways to motivate yourself and drive your work forward. Sometimes this means taking some time to investigate a new perspective or angle from which to look at what you are doing. Sometimes this means allowing yourself time and distance from your work, so you can return with a fresh eye and a fresh mind!

**(5) Surround yourself with friends who understand your passion for scientific research.**

The life of a researcher can be lonely, particularly if you are working in a niche or emerging field. Choose your company wisely, ensuring your valuable time is spent with friends and family who support and respect your work.

*Image Credit: “Board” by blickpixel. Public domain via Pixabay. *


The post Celebrating Women in STEM appeared first on OUPblog.

From astronomer Caroline Herschel to the first female winner of the Fields Medal, Maryam Mirzakhani, you can use our interactive timeline to learn more about both the famous and the forgotten women whose work in STEM fields has changed our world.

*Featured image credit: Microscope. Public Domain via Pixabay.*


The post Why causality now? appeared first on OUPblog.

Causality has been a headache for scholars since ancient times. The oldest extensive writings may have been those of Aristotle, who made causality a central part of his worldview. Then we jump 2,000 years until causality again became a prominent topic with Hume, who was a skeptic, in the sense that he believed we cannot think of causal relationships as logically necessary, nor can we establish them with certainty.

The next major philosophical figure after Hume was probably David Lewis, who proposed quite a controversial account saying, roughly, that something was a cause of an effect in this world if, *in other nearby possible worlds* where that cause didn’t happen, the effect didn’t happen either. Finally, we come to present-day work in computer science originated by Judea Pearl and by Spirtes, Glymour, and Scheines and their collaborators.

All of this is highly theoretical and formal. Can we reconstruct philosophical theorizing about causality in the sciences in simpler terms than this? Sure we can!

One way is to start from scientific practice. Even though scientists often don’t talk explicitly about causality, it *is* there. Causality is an integral part of the scientific enterprise. Scientists don’t worry too much about what causality is – a chiefly metaphysical question – but are instead concerned with a number of activities that, one way or another, bear on causal notions. These are what we call the five scientific problems of causality:

- Inference: Does C cause E? To what extent?
- Explanation: How does C cause or prevent E?
- Prediction: What can we expect if C does (or does not) occur?
- Control: What factors should we hold fixed to understand better the relation between C and E? More generally, how do we control the world or an experimental setting?
- Reasoning: What considerations enter into establishing whether/how/to what extent C causes E?

This does not mean that metaphysical questions cease to be interesting. Quite the contrary! But by engaging with scientific practice, we can work towards a *timely* and solid philosophy of causality.

The traditional philosophical treatment of causality is to give a single conceptualization, an account of the concept of causality, which may also tell us what causality in the world is, and may then help us understand causal methods and scientific questions.

Our aim, instead, is to focus on the scientific questions, bearing in mind that there are five of them, and build a more pluralist view of causality, enriched by attention to the diversity of scientific practices. We think that many existing approaches to causality, such as mechanism, manipulationism, inferentialism, capacities and processes can be used together, as tiles in a causal mosaic that can be created to help you assess, develop, and criticize a scientific endeavour.

In this spirit we are attempting to develop, in collaboration, complementary ideas of causality as information (Illari) and variation (Russo). The idea is that we can conceptualize in general terms the causal linking or production of effect by the cause as the transmission of information between cause and effect (following Salmon); while variation is the most general conceptualization of the patterns of difference-making we can detect in populations where a cause is acting (following Mill). The thought is that we can use these complementary ideas to address the scientific problems.

For example, we can think about how we use complementary evidence in causal inference, tracking information transmission, and combining that with studies of variation in populations. Alternatively, we can think about how measuring variation may help us formulate policy decisions, as might seeking to block possible avenues of information transmission. Having both concepts available assists in describing this, and reasoning well – and they will also be combined with other concepts that have been made more precise in the philosophical literature, such as capacities and mechanisms.

Ultimately, the hope is that sharpening up the reasoning will assist in the conceptual enterprise that lies at the intersection of philosophy and science. And help decide whether to encourage sport, mobile phones, homeopathy and solar panels aboard the mission to Mars!


The post Accusation breeds guilt appeared first on OUPblog.

The guilty party – let’s call her Annette – can try to convince us of her trustworthiness by only saying things that are true, insofar as such truthfulness doesn’t incriminate her (the old adage of making one’s lies as close to the truth as possible applies here). But this is not the only strategy available. In addition, Annette can attempt to deflect suspicion away from herself by questioning the trustworthiness of others – in short, she can say something like:

“I’m not a liar, Betty is!”

However, accusations of untrustworthiness of this sort are peculiar. The point of Annette’s pronouncement is to affirm her innocence, but such protestations rarely increase our overall level of trust. Either we don’t believe Annette, in which case our trust in Annette is likely to drop (without affecting how much we trust Betty), or we do believe Annette, in which case our trust in Betty is likely to decrease (without necessarily increasing our overall trust in Annette).

Thus, accusations of untrustworthiness tend to decrease the overall level of trust we place in those involved. But is this reflective of an actual increase in the number of lies told? In other words, does the logic of such accusations make it the case that, the higher the number of accusations, the higher the number of characters that *must* be lying?

Consider a group of people *G*, and imagine that, simultaneously, each person in the group accuses one, some, or all of the other people in the group of lying right at this minute. For example, if our group consists of three people:

*G* = {Annette, Betty, Charlotte}

then Betty can make one of three distinct accusations:

“Annette is lying.”

“Charlotte is lying.”

“Both Annette and Charlotte are lying.”

Likewise, Annette and Charlotte each have three choices regarding their accusations. We can then ask which members of the group could be, or which must be, telling the truth, and which could be, or which must be, lying by examining the logical relations between the accusations made by each member of the group. For example, if Annette accuses both Betty and Charlotte of lying, then either (i) Annette is telling the truth, in which case both Betty and Charlotte’s accusations must be false, or (ii) Annette is lying, in which case either Betty is telling the truth or Charlotte is telling the truth (or both).

This set-up allows for cases that are paradoxical. If:

Annette says “Betty is lying.”

Betty says “Charlotte is lying.”

Charlotte says “Annette is lying.”

then there is no coherent way to assign the labels “liar” and “truth-teller” to the three in such a way as to make sense. Since we are here interested in investigating results regarding how many lies are told (rather than scenarios in which the notion of lying versus telling the truth breaks down), we shall restrict our attention to those groups, and their accusations, that are not paradoxical.
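A brute-force check confirms this. Under the natural formalization (a person is truthful exactly when everyone they accuse is lying), the cyclic accusations above admit no consistent assignment; a short illustrative sketch:

```python
from itertools import product

# Annette accuses Betty, Betty accuses Charlotte, Charlotte accuses Annette.
# A person is truthful exactly when the person they accuse is lying.
accuses = {"Annette": "Betty", "Betty": "Charlotte", "Charlotte": "Annette"}
people = list(accuses)

consistent = [
    w for w in (dict(zip(people, v)) for v in product([True, False], repeat=3))
    if all(w[p] == (not w[accuses[p]]) for p in people)
]
print(consistent)  # [] -- no assignment of "liar"/"truth-teller" works
```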

The following are two simple results that constrain the number of liars, and the number of truth-tellers, in any such group (I’ll provide proofs of these results in the comments after a few days).


Result 1: If, for some number *m*, each person in the group accuses at least *m* other people in the group of lying (and there is no paradox) then there are at least *m* liars in the group.

Result 2: If, for any two people in the group *p*_{1} and *p*_{2}, either *p*_{1} accuses *p*_{2} of lying, or *p*_{2} accuses *p*_{1} of lying (and there is no paradox), then exactly one person in the group is telling the truth, and everyone else is lying.

These results support an affirmative answer to our question: Given a group of people, the more accusations of untrustworthiness (i.e., of lying) are made, the higher the minimum number of people in the group that must be lying. If there are enough accusations to guarantee that each person accuses at least *n* people, then there are at least *n* liars, and if there are enough to guarantee that there is an accusation between each pair of people, then all but one person is lying. (Exercise for the reader: show that there is no situation of this sort where everyone is lying).
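Both results can also be spot-checked by exhaustively enumerating every possible accusation pattern for a small group (a sketch, assuming the formalization that a person is truthful exactly when everyone they accuse is lying; paradoxical patterns are skipped):

```python
from itertools import product, combinations

def consistent_worlds(accusations):
    """All truth-assignments consistent with 'truthful iff all accused lie'.
    accusations[p] is the set of people whom person p accuses of lying."""
    n = len(accusations)
    return [w for w in product([True, False], repeat=n)
            if all(w[p] == all(not w[q] for q in accusations[p])
                   for p in range(n))]

def nonempty_subsets(items):
    return [set(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

def check_results(n):
    """Brute-force check of Results 1 and 2 over all accusation patterns."""
    people = range(n)
    ok = True
    patterns = product(*[nonempty_subsets([q for q in people if q != p])
                         for p in people])
    for pattern in patterns:
        worlds = consistent_worlds(pattern)
        if not worlds:      # paradoxical -- excluded by both results
            continue
        m = min(len(accused) for accused in pattern)
        # Result 1: every consistent world has at least m liars
        ok &= all(sum(not v for v in w) >= m for w in worlds)
        # Result 2: an accusation within every pair => exactly one truth-teller
        if all(q in pattern[p] or p in pattern[q]
               for p, q in combinations(people, 2)):
            ok &= all(sum(w) == 1 for w in worlds)
    return ok

print(check_results(3), check_results(4))  # True True
```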

Of course, the set-up just examined is extremely simple, and rather artificial. Conversations (or mystery novels, or court cases, etc.) in real life develop over time, involve all sorts of claims other than accusations, and can involve accusations of many different forms not included above, including:

“Everything Annette says is a lie!”

“Betty said something false yesterday!”

“What Charlotte is about to say is a lie!”

Nevertheless, with a bit more work (which I won’t do here) we can show that, the more accusations of untrustworthiness are made in a particular situation, the more of the claims made in that situation must be lies (of course, the details will depend both on the number of accusations and the kind of accusations). Thus, it’s as the title says: accusation breeds guilt!

**Note:** The inspiration for this blog post, as well as the phrase “Accusation breeds guilt”, comes from a brief discussion of this phenomenon – in particular, of ‘Result 2’ above – in ‘Propositional Discourse Logic’, by S. Dyrkolbotn & M. Walicki, Synthese 191: 863–899.


The post A very short trivia quiz appeared first on OUPblog.

We hope you enjoyed testing your trivia knowledge in this very short quiz.

*Headline image credit: Pondering Away. © GlobalStock via iStock Photo.*


The post Celebrating Alan Turing appeared first on OUPblog.

We live in an age that Turing both predicted and defined. His life and achievements are starting to be celebrated in popular culture, largely with the help of the newly released film *The Imitation Game*, starring Benedict Cumberbatch as Turing and Keira Knightley as Joan Clarke. We’re proud to publish some of Turing’s own work in mathematics, computing, and artificial intelligence, as well as numerous explorations of his life and work. Use our interactive Enigma Machine below to learn more about Turing’s extraordinary achievements.

*Image credits: (1) Bletchley Park Bombe by Antoine Taveneaux. CC-BY-SA-3.0 via Wikimedia Commons. (2) Alan Turing Aged 16, Unknown Artist. Public domain via Wikimedia Commons. (3) Good question by Garrett Coakley. CC-BY-SA 2.0 via Flickr. *


The post What do rumors, diseases, and memes have in common? appeared first on OUPblog.

Diseases, rumors, memes, and other information all spread over networks. A lot of research has explored the effects of network structure on such spreading. Unfortunately, most of this research has a major issue: it considers networks that are not realistic enough, and this can lead to incorrect predictions of transmission speeds, of which people are most important in a network, and so on. So how does one address this problem?

Traditionally, most studies of propagation on networks assume a very simple network structure that is static and only includes one type of connection between people. By contrast, real networks change in time — one contacts different people during weekdays and on weekends, one (hopefully) stays home when one is sick, new University students arrive from all parts of the world every autumn to settle into new cities. They also include multiple types of social ties (Facebook, Twitter, and – gasp – even face-to-face friendships), multiple modes of transportation, and so on. That is, we consume and communicate information through all sorts of channels. To consider a network with only one type of social tie ignores these facts and can potentially lead to incorrect predictions of which memes go viral and how fast information spreads. It also fails to allow differentiation between people who are important in one medium from people who are important in a different medium (or across multiple media). In fact, most real networks include a far richer “multilayer” structure. Collapsing such structures to obtain and then study a simpler network representation can yield incorrect answers for how fast diseases or ideas spread, the robustness level of infrastructures, how long it takes for interacting oscillators to synchronize, and more.

Recently, an increasingly large number of researchers are studying mathematical objects called “multilayer networks”. These generalize ordinary networks and allow one to incorporate time-dependence, multiple modes of connection, and other complexities. Work on multilayer networks dates back many decades in fields like sociology and engineering, and of course it is well-known that networks don’t exist in isolation but rather are coupled to other networks. The last few years have seen a rapid explosion of new theoretical tools to study multilayer networks.

And what types of things do researchers need to figure out? For one thing, it is known that multilayer structures induce correlations that are invisible if one collapses multilayer networks into simpler representations, so it is essential to figure out when and by how much such correlations increase or decrease the propagation of diseases and information, how they change the ability of oscillators to synchronize, and so on. From the standpoint of theory, it is necessary to develop better methods to measure multilayer structures, as a large majority of the tools that have been used thus far to study multilayer networks are mostly just more complicated versions of existing diagnostics and models. We need to do better. It is also necessary to systematically examine the effects of multilayer structures, such as correlations between different layers (e.g., perhaps a person who is important for the social network that is encapsulated in one layer also tends to be important in other layers?), on different types of dynamical processes. In these efforts, it is crucial to consider not only simplistic (“toy”) models — as in most of the work on multilayer networks thus far — but to move the field towards the examination of ever more realistic and diverse models and to estimate the parameters of these models from empirical data. As our review article illustrates, multilayer networks are both exciting and important to study, but the increasingly large community that is studying them still has a long way to go. We hope that our article will help steer these efforts, which promise to be very fruitful.
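As a toy illustration of why collapsing richer structure misleads, consider time-ordered contacts, one of the features mentioned above: a deterministic susceptible-infected (SI) process can only travel along time-respecting paths, which a static aggregate of the same edges cannot distinguish. A minimal sketch:

```python
def si_spread_temporal(events, seed):
    """Deterministic SI spreading over a time-ordered list of contacts.
    Infection can pass along an edge only at the moment it occurs."""
    infected = {seed}
    for u, v in events:            # events listed in temporal order
        if u in infected or v in infected:
            infected |= {u, v}
    return infected

# The same two contacts, in two different temporal orders:
print(sorted(si_spread_temporal([("A", "B"), ("B", "C")], seed="A")))  # ['A', 'B', 'C']
print(sorted(si_spread_temporal([("B", "C"), ("A", "B")], seed="A")))  # ['A', 'B']
# A static network built from the same edges predicts full spread in both cases.
```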


The post The deconstruction of paradoxes in epidemiology appeared first on OUPblog.

I think a methodological “revolution” is probably going on in the science of epidemiology, but I’m not totally sure. Of course, in science not being sure is part of our normal state. And we mostly like it. I have had the feeling many times that a revolution was ongoing in epidemiology. While reading scientific articles, for example. And I saw signs of it, which I think are clear, when reading the latest draft of the forthcoming book *Causal Inference* by M.A. Hernán and J.M. Robins from Harvard (Chapman & Hall / CRC, 2015). I think the “revolution” — or should we just call it a “renewal”? — is deeply changing how epidemiological and clinical research is conceived, how causal inferences are made, and how we assess the validity and relevance of epidemiological findings. I suspect it may be having an immense impact on the production of scientific evidence in the health, life, and social sciences. If this were so, then the impact would also be large on most policies, programs, services, and products in which such evidence is used. And it would be affecting thousands of institutions, organizations, and companies, and millions of people.

One example: at present, in clinical and epidemiological research, every week “paradoxes” are being deconstructed. Apparent paradoxes that have long been observed, and whose causal interpretation was at best dubious, are now shown to have little or no causal significance. For example, while obesity is a well-established risk factor for type 2 diabetes (T2D), among people who have already developed T2D the obese fare better than T2D individuals of normal weight. Obese diabetics appear to survive longer and to have a milder clinical course than non-obese diabetics. But it is now being shown that the observation lacks causal significance. (Yes, indeed, an observation may be real and yet lack causal meaning.) The demonstration comes from physicians, epidemiologists, and mathematicians like Robins, Hernán, and colleagues as diverse as S. Greenland, J. Pearl, A. Wilcox, C. Weinberg, S. Hernández-Díaz, N. Pearce, C. Poole, T. Lash, J. Ioannidis, P. Rosenbaum, D. Lawlor, J. Vandenbroucke, G. Davey Smith, T. VanderWeele, or E. Tchetgen, among others. They are building methodological knowledge upon knowledge and methods generated by graph theory, computer science, or artificial intelligence. Perhaps the simplest way to explain why observations such as the “obesity paradox” lack causal significance is this: “conditioning on a collider” (in our example, focusing only on individuals who developed T2D) creates a spurious association between obesity and survival.
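Collider bias of this kind is easy to reproduce in a toy simulation (all probabilities below are invented, and “frailty” is a stand-in for any second cause of T2D that worsens survival). Obesity is given no causal effect on survival whatsoever, yet among the simulated diabetics the obese still appear to fare better:

```python
import random

random.seed(0)
population = []
for _ in range(200_000):
    obese = random.random() < 0.30
    frail = random.random() < 0.20                    # hypothetical second cause of T2D
    # obesity and frailty independently raise the risk of developing T2D
    t2d = random.random() < 0.05 + 0.25 * obese + 0.25 * frail
    # survival depends on frailty only: obesity has NO causal effect on it here
    survives = random.random() < 0.90 - 0.40 * frail
    population.append((obese, t2d, survives))

def survival_among_diabetics(obese_flag):
    """Survival rate after conditioning on the collider (T2D)."""
    group = [s for o, d, s in population if d and o == obese_flag]
    return sum(group) / len(group)

# Restricting to T2D cases induces a spurious negative association between
# obesity and frailty, so obese diabetics look protected:
print(survival_among_diabetics(True) > survival_among_diabetics(False))
```

Among diabetics, the obese are less likely to be frail (obesity already explains their T2D), so they inherit frailty's survival advantage without obesity doing any causal work at all.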

The “revolution” is partly founded on complex mathematics and on concepts such as “counterfactuals,” as well as on attractive “causal diagrams” like Directed Acyclic Graphs (DAGs). Causal diagrams are a simple way to encode our subject-matter knowledge, and our assumptions, about the qualitative causal structure of a problem. Causal diagrams also encode information about potential associations between the variables in the causal network. DAGs must be drawn following rules much stricter than the informal, heuristic graphs that we all use intuitively. Amazingly, but not surprisingly, the new approaches provide insights that are beyond most methods in current use. In particular, the new methods go far deeper and beyond the methods of “modern epidemiology,” a methodological, conceptual, and partly ideological current whose main flowering took place in the 1980s, led by statisticians and epidemiologists such as O. Miettinen, B. MacMahon, K. Rothman, S. Greenland, S. Lemeshow, D. Hosmer, P. Armitage, J. Fleiss, D. Clayton, M. Susser, D. Rubin, G. Guyatt, D. Altman, J. Kalbfleisch, R. Prentice, N. Breslow, N. Day, D. Kleinbaum, and others.

We live exciting days of paradox deconstruction. It is probably part of a wider cultural phenomenon, if you think of the “deconstruction of the Spanish omelette” authored by Ferran Adrià when he was the world-famous chef at the elBulli restaurant. Yes, just kidding.

Right now I cannot find a better or easier way to document the possible “revolution” in epidemiological and clinical research. Worse, I cannot find a firm way to assess whether my impressions are true. No doubt this is partly due to my ignorance in the social sciences. Actually, I don’t know much about social studies of science, epistemic communities, or knowledge construction. Maybe this is why I claimed that a sociology of epidemiology is much needed. A sociology of epidemiology would apply the scientific principles and methods of sociology to the science, discipline, and profession of epidemiology in order to improve understanding of the wider social causes and consequences of epidemiologists’ professional and scientific organization, patterns of practice, ideas, knowledge, and cultures (e.g., institutional arrangements, academic norms, scientific discourses, defense of identity, and epistemic authority). It could also address the patterns of interaction of epidemiologists with other branches of science and professions (e.g. clinical medicine, public health, the other health, life, and social sciences), and with social agents, organizations, and systems (e.g. the economic, political, and legal systems). I believe the tradition of sociology in epidemiology is rich, while the sociology of epidemiology is virtually uncharted (in the sense of being neither mapped nor surveyed) and unchartered (i.e. not furnished with a charter or constitution).

Another way I can suggest to look at what may be happening with clinical and epidemiological research methods is to read the changes that we are witnessing in the definitions of basic concepts such as risk, rate, risk ratio, attributable fraction, bias, selection bias, confounding, residual confounding, interaction, cumulative and density sampling, open population, test hypothesis, null hypothesis, causal null, causal inference, Berkson’s bias, Simpson’s paradox, frequentist statistics, generalizability, representativeness, missing data, standardization, or overadjustment. The possible existence of a “revolution” might also be assessed in recent and new terms such as collider, M-bias, causal diagram, backdoor (biasing path), instrumental variable, negative controls, inverse probability weighting, identifiability, transportability, positivity, ignorability, collapsibility, exchangeable, g-estimation, marginal structural models, risk set, immortal time bias, Mendelian randomization, nonmonotonic, counterfactual outcome, potential outcome, sample space, or false discovery rate.

You may say: “And what about textbooks? Are they changing dramatically? Has any of them changed the rules?” Well, the new generation of textbooks is just emerging, and very few people have yet read them. Two good examples are the already mentioned text by Hernán and Robins, and the soon-to-be-published *Explanation in Causal Inference: Methods for Mediation and Interaction* by T. VanderWeele (Oxford University Press, 2015). Clues can also be found in widely used textbooks by K. Rothman et al. (*Modern Epidemiology*, Lippincott-Raven, 2008), M. Szklo and J. Nieto (*Epidemiology: Beyond the Basics*, Jones & Bartlett, 2014), or L. Gordis (*Epidemiology*, Elsevier, 2009). Above all, the foundations of the current revolution can be seen in *Causality: Models, Reasoning and Inference* by Judea Pearl (2nd edition, Cambridge University Press, 2009).

Finally, another good way to assess what might be changing is to read what gets published in top journals such as *Epidemiology*, the *International Journal of Epidemiology*, the *American Journal of Epidemiology*, or the *Journal of Clinical Epidemiology*. Pick up any issue of the main epidemiologic journals and you will find several examples of what I suspect is going on. If you feel like it, look for the DAGs. I recently saw a tweet saying “A DAG in The Lancet!”. It was a surprise: major clinical journals are lagging behind. But they will soon follow and adopt the new methods: the clinical relevance of the latter is huge. Or is it not such a big deal? If no “revolution” is going on, how are we to know?

*Featured image credit: Test tubes by PublicDomainPictures. Public Domain via Pixabay.*

The post The deconstruction of paradoxes in epidemiology appeared first on OUPblog.

Partly, of course, so they develop thinking skills to use on questions whose truth-status they won’t know in advance. Another part, however, concerns the dialogue nature of proof: a proof must be not only correct, but also persuasive: and persuasiveness is not objective and absolute, it’s a two-body problem. Not only to tango does one need two.

The statements — (1) ice floats on water, (2) ice is less dense than water — are widely acknowledged as facts and, usually, as interchangeable facts. But although rooted in everyday experience, they are not that experience. We have firstly represented stuffs of experience by sounds English speakers use to stand for them, then represented these sounds by word-processor symbols that, by common agreement, stand for them. Two steps away from reality already! This is what humans do: we invent symbols for perceived realities and, eventually, evolve procedures for manipulating them in ways that mirror how their real-world origins behave. Virtually no communication between two persons, and possibly not much internal dialogue within one mind, can proceed without this. Man is a symbol-using animal.

Statement (1) counts as fact because folk living in cooler climates have directly observed it throughout history (and conflicting evidence is lacking). Statement (2) is factual in a significantly different sense, arising by further abstraction from (1) and from a million similar experiential observations. Partly to explain (1) and its many cousins, we have conceived ideas like mass, volume, ratio of mass to volume, and explored for generations towards the conclusion that mass-to-volume works out the same for similar materials under similar conditions, and that the comparison of mass-to-volume ratios predicts which materials will float upon others.

Statement (3): 19 is a prime number. In what sense is this a fact? Its roots are deep in direct experience: the hunter-gatherer wishing to share nineteen apples equally with his two brothers or his three sons or his five children must have discovered that he couldn’t without extending his circle of acquaintance so far that each got only one, long before he had a name for what we call ‘nineteen’. But (3) is many steps away from the experience where it is grounded. It involves conceptualisation of numerical measurements of sets one encounters, and millennia of thought to acquire symbols for these and codify procedures for manipulating them in ways that mirror how reality functions. We’ve done this so successfully that it’s easy to forget how far from the tangibles of experience they stand.

Statement (4): √2 is not exactly the ratio of two whole numbers. Most first-year mathematics students know this. But by this stage of abstraction, separating its fact-ness from its demonstration is impossible: the property of being exactly a fraction is not detectable by physical experience. It is a property of how we abstracted and systematised the numbers that proved useful in modelling reality, not of our hands-on experience of reality. The reason we regard √2’s irrationality as factual is precisely because we can give a demonstration within an accepted logical framework.
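The demonstration in question is the classic one, and it is short enough to sketch:

```latex
\textbf{Claim.} $\sqrt{2}$ is not exactly the ratio of two whole numbers.

\textbf{Proof sketch.} Suppose $\sqrt{2} = p/q$ with $p$ and $q$ whole
numbers sharing no common factor. Squaring gives $p^2 = 2q^2$, so $p^2$ is
even, hence $p$ is even: $p = 2r$. Substituting, $4r^2 = 2q^2$, i.e.
$q^2 = 2r^2$, so $q$ is even too. Then $p$ and $q$ share the factor $2$,
contradicting our choice of a fraction in lowest terms. $\blacksquare$
```

Nothing in this argument appeals to physical experience; it lives entirely inside the logical framework, which is exactly the point.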

What then about recurring decimals? For persuasive argument, first ascertain the distance from reality at which the question arises: not, in this case, the rarified atmosphere of undergraduate mathematics but the primary school classroom. Once a child has learned rituals for dividing whole numbers and the convenience of decimal notation, she will try to divide, say, 2 by 3 and will hit a problem. The decimal representation of the answer does not cease to spew out digits of lesser and lesser significance no matter how long she keeps turning the handle. What should we reply when she asks whether zero point infinitely many 6s is or is not two thirds, or even — as a thoughtful child should — whether zero point infinitely many 6s is a legitimate symbol at all?

The answer must be tailored to the questioner’s needs, but the natural way forward — though it took us centuries to make it logically watertight! — is the nineteenth-century definition of the sum of an infinite series. For the primary school kid it may suffice to say that, by writing down enough 6s, we’d get as close to 2/3 as we’d need for any practical purpose. For differential calculus we’d need something better, and for model-theoretic discourse involving infinitesimals something better again. Yet the underpinning mathematics for equalities like 0.6666… = 2/3 where the question arises is the nineteenth-century one. Its fact-ness therefore resembles that of ice being less dense than water, of 19 being prime, or of √2 being irrational. It can be demonstrated within a logical framework that systematises our observations of real-world experiences. So it is a fact not about reality but about the models we build to explain reality. Demonstration is the only tool available for establishing its truth.
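For the record, here is what that nineteenth-century definition delivers: the recurring decimal is, by definition, the limit of its partial sums, and the geometric series computes that limit exactly:

```latex
0.6666\ldots \;=\; \sum_{k=1}^{\infty} \frac{6}{10^{k}}
\;=\; \frac{6}{10}\cdot\frac{1}{1-\tfrac{1}{10}}
\;=\; \frac{6}{9} \;=\; \frac{2}{3},
\qquad\text{since}\qquad
\sum_{k=1}^{n} \frac{6}{10^{k}} \;=\; \frac{2}{3}\bigl(1-10^{-n}\bigr)
\;\longrightarrow\; \frac{2}{3}.
```

The partial-sum formula is the primary-school answer made precise: after writing n sixes the shortfall from 2/3 is (2/3)·10^(−n), smaller than any practical tolerance once n is large enough.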

Mathematics without proof is not like an omelette without salt and pepper; it is like an omelette without egg.

*Headline image credit: Floating ice sheets in Antarctica. CC0 via Pixabay.*

The post Recurring decimals, proof, and ice floes appeared first on OUPblog.

In 2002 I was attending a conference on self-reference in Copenhagen, Denmark. During one of the breaks I got a chance to chat with Raymond Smullyan, who is, amongst other things, an accomplished magician, a distinguished mathematical logician, and perhaps the best-known popularizer of ‘Knight and Knave’ (K&K) puzzles.

K&K puzzles involve an imaginary island populated by two tribes: the Knights and the Knaves. Knights always tell the truth, and Knaves always lie (further, members of both tribes are forbidden to engage in activities that might lead to paradoxes or situations that break these rules). Other than their linguistic behavior, there is nothing that distinguishes Knights from Knaves.

Typically, K&K puzzles involve trying to answer questions based on assertions made by, or questions answered by, an inhabitant of the island. For example, a classic K&K puzzle involves meeting an islander at a fork in the road, where one path leads to riches and success and the other leads to pain and ruin. You are allowed to ask the islander one question, after which you must pick a path. Not knowing to which tribe the islander belongs, and hence whether she will lie or tell the truth, what question should you ask?

(Answer: You should ask “Which path would someone from the other tribe say was the one leading to riches and success?”, and then take the path *not* indicated by the islander).

Back to Copenhagen in 2002: Seizing my chance, I challenged Smullyan with the following K&K puzzle, of my own devising:

There is a nightclub on the island of Knights and Knaves, known as the Prime Club. The Prime Club has one strict rule: the number of occupants in the club must be a prime number at all times.

The Prime Club also has strict bouncers (who stand outside the doors and do not count as occupants) enforcing this rule. In addition, a strange tradition has become customary at the Prime Club: Every so often the occupants form a conga line, and sing a song. The first lyric of the song is:

“At least one of us in the club is a Knave.”

and is sung by the first person in the line. The second lyric of the song is:

“At least two of us in the club are Knaves.”

and is sung by the second person in the line. The third person (if there is one) sings:

“At least three of us in the club are Knaves.”

And so on down the line, until everyone has sung a verse.

One day you walk by the club, and hear the song being sung. How many people are in the club?

Smullyan’s immediate response to this puzzle was something like “That can’t be solved – there isn’t enough information”. But he then stood alone in the corner of the reception area for about five minutes, thinking, before returning to confidently (and correctly, of course) answer “Two!”

I won’t spoil things by giving away the solution – I’ll leave that mystery for interested readers to solve on their own. (Hint: if the song is sung with any other prime number of islanders in the club, a paradox results!) I will note that the song is equivalent to a more formal construction involving a list of sentences of the form:

At least one of sentences S_{1} – S_{n} is false.

At least two of sentences S_{1} – S_{n} are false.

⋮

At least n of sentences S_{1} – S_{n} are false.
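This construction can be checked by brute force. The sketch below is my own (not part of the original puzzle): singer i is a Knight exactly when her claim is true, so we simply test every Knight/Knave assignment for consistency:

```python
from itertools import product

def consistent_assignments(n):
    """All Knight/Knave assignments consistent with the song for n singers."""
    solutions = []
    for is_knave in product([False, True], repeat=n):
        k = sum(is_knave)  # actual number of Knaves in the club
        # singer i claims "at least i of us are Knaves"; a Knight's claim
        # must be true (k >= i) and a Knave's claim must be false (k < i)
        if all((k >= i) != is_knave[i - 1] for i in range(1, n + 1)):
            solutions.append(is_knave)
    return solutions

for n in (2, 3, 5, 7):
    print(n, len(consistent_assignments(n)))
```

Running it confirms the hint without revealing why: two singers admit exactly one consistent assignment, while the odd primes 3, 5, and 7 admit none.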

The point of this story isn’t to brag about having stumped a famous logician (even for a mere five minutes), although I admit that this episode (not only stumping Smullyan, but meeting him in the first place) is still one of the highlights of my academic career.

Instead, the story, and the puzzle at the center of it, illustrates the reasons why I find paradoxes so fascinating and worthy of serious intellectual effort. The standard story regarding why paradoxes are so important is that, although they are sometimes silly in-and-of-themselves, paradoxes indicate that there is something deeply flawed in our understanding of some basic philosophical notion (truth, in the case of the semantic paradoxes linked to K&K puzzles).

Another reason for their popularity is that they are a lot of fun. Both of these are really good reasons for thinking deeply about paradoxes. But neither is the real reason why I find them so fascinating. The real reason I find paradoxes so captivating is that they are much more mathematically complicated, and as a result much more mathematically interesting, than standard accounts (which typically equate paradoxes with the presence of some sort of circularity) might have you believe.

The Prime Club puzzle demonstrates that whether a particular collection of sentences is or is not paradoxical can depend on all sorts of surprising mathematical properties, such as whether there is an even or odd number of sentences in the collection, or whether the number of sentences in the collection is prime or composite, or all sorts of even weirder and more surprising conditions.

Other examples demonstrate that whether a construction (or, equivalently, a K&K story) is paradoxical can depend on whether the referential relation involved in the construction (i.e. the relation that holds between two sentences if one refers to the other) is symmetric, or is transitive.

The paradoxicality of still another type of construction, involving infinitely many sentences, depends on whether cofinitely many of the sentences each refer to cofinitely many of the other sentences in the construction (a set is cofinite if its complement is finite). And this only scratches the surface!

The more I think about and work on paradoxes, the more I marvel at how complicated the mathematical conditions for generating paradoxes are: it takes a lot more than the mere presence of circularity to generate a mathematical or semantic paradox, and stating exactly what is minimally required is still too difficult a question to answer precisely. And that’s why I work on paradoxes: their surprising mathematical complexity and mathematical beauty. Fortunately for me, there is still a lot of work that remains to be done, and a lot of complexity and beauty remaining to be discovered.

*Featured Image Credit: ‘Structure and Clarity’, by Dan Flavin, Photo uploaded by SuperCar-RoadTrip, CC by 2.0, via flickr.*

The post Why study paradoxes? appeared first on OUPblog.

One main idea to derive warning signs is to monitor the fluctuations of the dynamical process by calculating the variance of a suitable monitoring variable. When the tipping point is approached via a slowly-drifting parameter, the stabilizing effects of the system slowly diminish and the noisy fluctuations increase via certain well-defined scaling laws.

Based upon these observations, it is natural to ask whether these scaling laws are also present in human social networks and can allow us to make predictions about future events. This is an exciting open problem, to which at present only highly speculative answers can be given. It is indeed very difficult to predict *a priori* unknown events in a social system. Therefore, as an initial step, we try to reduce the problem to a much simpler one, in order to understand whether the same mechanisms, which have been observed in the context of natural sciences and engineering, could also be present in sociological domains.

In our work, we provide a very first step towards tackling a substantially simpler question by focusing on *a priori* known events. We analyse a social media data set with a focus on classical variance and autocorrelation scaling-law warning signs. In particular, we consider a few events that are known to occur at a specific time of the year, e.g., Christmas, Halloween, and Thanksgiving. Then we consider time series of the frequency of Twitter hashtags related to the considered events a few weeks before the actual event, but excluding the event date itself and some time period before it.

Now suppose we do not know that a dramatic spike in the number of Twitter hashtags, such as #xmas or #thanksgiving, will occur on the actual event date. Are there signs of the same stochastic scaling laws observed in other dynamical systems visible some time before the event? The more fundamental question is: Are there similarities to known warning signs from other areas also present in social media data?

We answer this question affirmatively as we find that the *a priori* known events mentioned above are preceded by variance and autocorrelation growth (see Figure). Nevertheless, we are still very far from actually using social networks to predict the occurrence of many other drastic events. For example, it can also be shown that many spikes in Twitter activity are not predictable through variance and autocorrelation growth. Hence, a lot more research is needed to distinguish different dynamical processes that lead to large outbursts of activity on social media.
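For readers who want to experiment, here is a minimal sketch of the two classical indicators computed over a sliding window (this is not our analysis code; the synthetic series and the window length are invented):

```python
import random

def rolling_warning_signs(x, window):
    """Rolling variance and lag-1 autocorrelation over a sliding window,
    the two classical early-warning indicators."""
    variances, autocorrs = [], []
    for start in range(len(x) - window + 1):
        w = x[start:start + window]
        m = sum(w) / window
        d = [v - m for v in w]                       # demeaned window
        variances.append(sum(t * t for t in d) / window)
        den = sum(t * t for t in d[:-1])
        num = sum(a * b for a, b in zip(d[:-1], d[1:]))
        autocorrs.append(num / den if den else 0.0)
    return variances, autocorrs

# Synthetic hashtag-count fluctuations whose amplitude grows towards a known
# event date at t = 500 (purely invented data, standing in for e.g. #xmas).
random.seed(7)
series = [random.gauss(0.0, 1.0 + t / 250) for t in range(500)]
var, ac = rolling_warning_signs(series, window=100)
print(var[0] < var[-1])  # the variance indicator grows as the event approaches
```

On real hashtag time series one would look for a sustained upward trend in both indicators ahead of the event date, rather than a single comparison of endpoints.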

The findings suggest that further investigations of dynamical processes in social media would be worthwhile. Currently, a main focus in the research on social networks lies on structural questions, such as: Who connects to whom? How many connections do we have on average? Who are the hubs in social media? However, if one takes dynamical processes on the network, as well as the changing dynamics of the network topology, into account, one may obtain a much clearer picture of how social systems compare and relate to classical problems in physics, chemistry, biology, and engineering.

The post Special events and the dynamical statistics of Twitter appeared first on OUPblog.

This year sees the first ever female recipient of the Fields Medal, Maryam Mirzakhani, recognised for her highly original contributions to geometry and dynamical systems. Her work bridges several mathematical disciplines – hyperbolic geometry, complex analysis, topology, and dynamics – and influences them in return.

We’re absolutely delighted for Professor Mirzakhani, who serves on the editorial board for *International Mathematics Research Notices*. To celebrate the achievements of all of the winners, we’ve put together a reading list of free materials relating to their work and to fellow speakers at the International Congress of Mathematicians.

**“Ergodic Theory of the Earthquake Flow” by Maryam Mirzakhani, published in International Mathematics Research Notices**

Noted by the International Mathematical Union as work contributing to Mirzakhani’s achievement, this paper investigates the dynamics of the earthquake flow defined by Thurston on the bundle *PMg* of geodesic measured laminations.

**“Ergodic Theory of the Space of Measured Laminations” by Elon Lindenstrauss and Maryam Mirzakhani, published in International Mathematics Research Notices**

A classification of locally finite invariant measures and orbit closure for the action of the mapping class group on the space of measured laminations on a surface.

**“Mass Formulae for Extensions of Local Fields, and Conjectures on the Density of Number Field Discriminants” by Manjul Bhargava, published in International Mathematics Research Notices**

Manjul Bhargava joins Maryam Mirzakhani amongst this year’s winners of the Fields Medal. Here he uses Serre’s mass formula for totally ramified extensions to derive a mass formula that counts all étale algebra extensions of a local field *F* having a given degree *n*.

**“Model theory of operator algebras” by Ilijas Farah, Bradd Hart, and David Sherman, published in International Mathematics Research Notices**

Several authors, some of whom are speaking at the International Congress of Mathematicians, have considered whether the ultrapower and the relative commutant of a C*-algebra or II1 factor depend on the choice of the ultrafilter.

**“Small gaps between products of two primes” by D. A. Goldston, S. W. Graham, J. Pintz, and C. Y. Yildirim, published in Proceedings of the London Mathematical Society**

Speaking on the subject at the International Congress, Dan Goldston and colleagues prove several results relating to the representation of numbers with exactly two prime factors by linear forms.

**“On Waring’s problem: some consequences of Golubeva’s method” by Trevor D. Wooley, published in the Journal of the London Mathematical Society**

Wooley’s paper, as well as his talk at the congress, investigates sums of mixed powers involving two squares, two cubes, and various higher powers concentrating on situations inaccessible to the Hardy-Littlewood method.

*Image credit: (1) Inner life of human mind and maths, © agsandrew, via iStock Photo. (2) Maryam Mirzakhani 2014. Photo by International Mathematical Union. Public Domain via Wikimedia Commons.*

The post A Fields Medal reading list appeared first on OUPblog.

When we use a computer, its performance seems to degrade progressively. This is not a mere impression. An old version of Firefox, the free Web browser, was infamous for its “memory leaks”: it would consume increasing amounts of memory to the detriment of other programs. Bugs in the software actually do slow down the system. We all know what the solution is: reboot. We restart the computer, the memory is reset, and the performance is restored, until the bugs slow it down again.

Philosophy is a bit like a computer with a memory leak. It starts well, dealing with significant and serious issues that matter to anyone. Yet, in time, its very success slows it down. Philosophy begins to care more about philosophers’ questions than philosophical ones, consuming increasing amounts of intellectual attention. Scholasticism is the ultimate freezing of the system, the equivalent of Windows’ “blue screen of death”; so many resources are devoted to internal issues that no external input can be processed anymore, and the system stops. The world may be undergoing a revolution, but the philosophical discourse remains detached and utterly oblivious. Time to reboot the system.

Philosophical “rebooting” moments are rare. They are usually prompted by major transformations in the surrounding reality. Since the nineties, I have been arguing that we are witnessing one of those moments. It now seems obvious, even to the most conservative person, that we are experiencing a turning point in our history. The information revolution is profoundly changing every aspect of our lives, quickly and relentlessly. The list is known but worth recalling: education and entertainment, communication and commerce, love and hate, politics and conflicts, culture and health, … feel free to add your preferred topics; they are all transformed by technologies that have the recording and processing of information as their core functions. Meanwhile, philosophy is degrading into self-referential discussions on irrelevancies.

The result of a philosophical rebooting today can only be beneficial. Digital technologies are not just tools merely modifying how we deal with the world, like the wheel or the engine. They are above all formatting systems, which increasingly affect how we understand the world, how we relate to it, how we see ourselves, and how we interact with each other.

The ‘Fourth Revolution’ betrays what I believe to be one of the topics that deserves our full intellectual attention today. The idea is quite simple. Three scientific revolutions have had great impact on how we see ourselves. In changing our understanding of the external world they also modified our self-understanding. After the Copernican revolution, the heliocentric cosmology displaced the Earth and hence humanity from the centre of the universe. The Darwinian revolution showed that all species of life have evolved over time from common ancestors through natural selection, thus displacing humanity from the centre of the biological kingdom. And following Freud, we acknowledge nowadays that the mind is also unconscious. So we are not immobile, at the centre of the universe, we are not unnaturally separate and diverse from the rest of the animal kingdom, and we are very far from being minds entirely transparent to ourselves. One may easily question the value of this classic picture. After all, Freud was the first to interpret these three revolutions as part of a single process of reassessment of human nature and his perspective was blatantly self-serving. But replace Freud with cognitive science or neuroscience, and we can still find the framework useful to explain our strong impression that something very significant and profound has recently happened to our self-understanding.

Since the fifties, computer science and digital technologies have been changing our conception of who we are. In many respects, we are discovering that we are not standalone entities, but rather interconnected informational agents, sharing with other biological agents and engineered artefacts a global environment ultimately made of information, the infosphere. If we need a champion for the fourth revolution this should definitely be Alan Turing.

The fourth revolution offers a historical opportunity to rethink our exceptionalism in at least two ways. Our intelligent behaviour is confronted by the smart behaviour of engineered artefacts, which can be adaptively more successful in the infosphere. Our free behaviour is confronted by the predictability and manipulability of our choices, and by the development of artificial autonomy. Digital technologies sometimes seem to know more about our wishes than we do. We need philosophy to make sense of the radical changes brought about by the information revolution. And we need it to be at its best, for the difficulties we are facing are challenging. Clearly, we need to reboot philosophy now.

Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford, Senior Research Fellow at the Oxford Internet Institute, and Fellow of St Cross College, Oxford. He was recently appointed as ethics advisor to Google. His most recent book is *The Fourth Revolution: How the Infosphere is Reshaping Human Reality*.

*Image credit: Alan Turing Statue at Bletchley Park. By Ian Petticrew. CC-BY-SA-2.0 via Wikimedia Commons.*

The post Rebooting Philosophy appeared first on OUPblog.

Now we come to the seventh game, which some consider to be the most important game in the set. But is it? Nadal serves an ace at break point down (30-40). Of course! Real champions win the big points, but they win most points on service anyway. At first, it may appear that real champions outperform on big points, but it turns out that weaker players underperform, so that it only seems that the champions outperform. And Nadal goes on to win three consecutive games. He is in a winning mood, the momentum is on his side. But does a ‘winning mood’ actually exist in tennis? (*Spoiler*: It does, but it is smaller than many expect.)

To figure out whether the “serving-first advantage” actually exists, we can use data on more than one thousand sets played at Wimbledon to calculate how often the player who served first also won the set. This statistic shows that for the men there is a slight advantage in the first set, but not in the other sets.

On the contrary, in the other sets, there is actually a disadvantage: the player who serves first in the set is more likely to lose the set than to win it. This is surprising. Perhaps it is different for the women? But no, the same pattern occurs in the women’s singles.

It so happens that the player who serves first in a set (if it is not the first set) is usually the weaker player. This is so because (a) the stronger player is more likely to win the previous set, and (b) the previous set is more likely to be won by serving it out than by a break of serve. The stronger player therefore typically wins the previous set on serve, so that the weaker player serves first in the next set. The weaker player is then more likely to lose the current set as well, not because of any service (dis)advantage, but simply because he or she is the weaker player.

This example shows that we must be careful when we try to draw conclusions based on simple statistics. The fact that the player who serves first in the second and subsequent sets often loses the set is true, but this primarily concerns weaker players, while the original hypothesis includes all players. Therefore, we must control for quality differences, and statistical models enable us to do that properly. It then becomes clear that there is no advantage or disadvantage for the player who serves first in the second or subsequent sets; but it does matter in the first set, so it is wise to elect to serve after winning the toss.
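The selection effect described above can be illustrated with a small Monte Carlo sketch. The hold probabilities below are illustrative assumptions, not estimates from the Wimbledon data, and the tiebreak rule is deliberately crude; the point is only that, with no serving-first effect built in at all, the player who serves first in set two still loses that set more often than not.

```python
import random

def play_set(first_server, p_hold):
    """Simulate one set. first_server is 0 or 1; p_hold[i] is the
    probability that player i holds serve in any given service game.
    Returns (set_winner, first_server_of_next_set)."""
    games = [0, 0]
    server = first_server
    while True:
        if games == [6, 6]:
            # crude tiebreak: decided in proportion to the hold probabilities
            tb_winner = 0 if random.random() < p_hold[0] / (p_hold[0] + p_hold[1]) else 1
            # serve rotation: the player who served first in the tiebreak
            # does not serve first in the next set
            return tb_winner, 1 - first_server
        game_winner = server if random.random() < p_hold[server] else 1 - server
        games[game_winner] += 1
        server = 1 - server  # serve alternates game by game, across sets too
        if max(games) >= 6 and abs(games[0] - games[1]) >= 2:
            return game_winner, server

random.seed(7)
N = 50_000
p_hold = [0.85, 0.75]  # player 0 is the stronger player (assumed numbers)
set2_first_server_wins = 0
for _ in range(N):
    set1_first = random.randrange(2)             # the toss: either player may serve first
    _, set2_first = play_set(set1_first, p_hold)
    set2_winner, _ = play_set(set2_first, p_hold)
    if set2_winner == set2_first:
        set2_first_server_wins += 1
share = set2_first_server_wins / N
print(f"first server wins set 2 in {share:.1%} of simulated matches")
```

In this toy model the apparent “disadvantage” is pure selection: the stronger player usually wins set one by holding serve in its final game, so the weaker player serves first in set two, and the weaker player usually loses it. Controlling for player quality, as the statistical models do, makes the effect vanish.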

*Featured Image Credit: “Balle de Tennis”, Photo by Steve Lorillere, CC BY-SA 2.0, via Flickr.*

The post Does the “serving-first advantage” actually exist? appeared first on OUPblog.

]]>Nowadays it appears impossible to open a newspaper or switch on the television without hearing about “big data”. Big data, it sometimes seems, will provide answers to all the world’s problems. Management consulting company McKinsey, for example, promises “a tremendous wave of innovation, productivity, and growth … all driven by big data”.

The post Statistics and big data appeared first on OUPblog.

]]>


An alien observer visiting the Earth might think it represents a major scientific breakthrough. Google Trends shows references to the phrase bobbing along at about one per week until 2011, at which point there began a dramatic, steep, and almost linear increase in references to the phrase. It’s as if no one had thought of it until 2011. Which is odd because data mining, the technology of extracting valuable, useful, or interesting information from large data sets, has been around for some 20 years. And statistics, which lies at the heart of all of this, has been around as a formal discipline for a century or more.

Or perhaps it’s not so odd. If you look back to the beginning of data mining, you find a very similar media enthusiasm for the advances it was going to bring, the breakthroughs in understanding, the sudden discoveries, the deep insights. In fact it almost looks as if we have been here before. All of this leads one to suspect that there’s less to the big data enthusiasm than meets the eye. That it’s not so much a sudden change in our technical abilities as a sudden media recognition of what data scientists, and especially statisticians, are capable of.

Of course, I’m not saying that the increasing size of data sets does not lead to promising new opportunities – though I would question whether it’s the “large” that matters so much as the novelty of the data sets. The tremendous economic impact of GPS data (estimated to be $150-270bn per year), retail transaction data, or genomic and bioinformatics data arises not from the size of these data sets, but from the fact that they provide new kinds of information. And while it’s true that a massive mountain of data needed to be explored to detect the Higgs boson, the core aspect was the nature of the data rather than its amount.

Moreover, if I’m honest, I also have to admit that it’s not solely statistics which leads to the extraction of value from these massive data sets. Often it’s a combination of statistical inferential methods (e.g. determining an accurate geographical location from satellite signals) and data manipulation algorithms for search, matching, sorting, and so on. How these two aspects are balanced depends on the particular application. Locating a shop which stocks that out-of-print book is less of an inferential statistical problem and more of a search issue. Determining the riskiness of a company seeking a loan owes little to search but much to statistics.

Some time after the phrase “data mining” hit the media, it suffered a backlash. Predictably enough, much of this was based around privacy concerns. A paradigmatic illustration was the *Total Information Awareness* project in the United States. Its basic aim was to search for suspicious behaviour patterns within vast amounts of personal data, to identify individuals likely to commit crimes, especially terrorist offences. It included data on web browsing, credit card transactions, driving licences, court records, passport details, and so on. After concerns were raised, it was suspended in 2003 (though it is claimed that the software continued to be used by various agencies). As will be evident from recent events, concerns about the security agencies’ monitoring of the public continue.

The key question is whether proponents of the huge potential of big data and its allied notion of open data are learning from the past. Recent media concern in the UK about the merging of family doctor records with hospital records, leading to a six-month delay in the launch of the project, illustrates the danger. Properly informed debate about the promise and the risks is vital.

Technology is amoral — neither intrinsically moral nor immoral. Morality lies in the hands of those who wield it. This is as true of big data technology as it is of nuclear technology and biotechnology. It is abundantly clear — if only from the examples we have already seen — that massive data sets do hold substantial promise for enhancing the well-being of mankind, but we must be aware of the risks. A suitable balance must be struck.

It’s also important to note that the mere existence of huge data files is of itself of no benefit to anyone. For these data sets to be beneficial, it’s necessary to be able to use the data to build models, to estimate effect sizes, to determine if an observed effect should be regarded as mere chance variation, to be sure it’s not a data quality issue, and so on. That is, statistical skills are critical to making use of the big data resources. In just the same way that vast underground oil reserves were useless without the technology to turn them into motive power, so the vast collections of data are useless without the technology to analyse them. Or, as I sometimes put it, *people don’t want data, what they want are answers*. And statistics provides the tools for finding those answers.
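As a concrete sketch of that last task, here is a standard two-sided z-test for the difference between two proportions (the pooled normal approximation, in plain Python). The click-through numbers are hypothetical. With samples this large, even a difference of under one percentage point is clearly distinguishable from chance variation, which is exactly why the follow-up questions (is it a data quality artifact? does it matter practically?) become the important ones.

```python
import math

def two_proportion_pvalue(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions,
    using the pooled normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

# hypothetical example: 5.2% vs 6.1% click-through on 10,000 impressions each
p = two_proportion_pvalue(520, 10_000, 610, 10_000)
print(f"p-value = {p:.4f}")  # well below 0.05: unlikely to be chance variation alone
```

The same calculation on a few dozen observations would typically be inconclusive; it is the statistical machinery, not the sheer volume of data, that turns the numbers into an answer.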

David J. Hand is Professor of Statistics at Imperial College London and author of Statistics: A Very Short Introduction.

The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with OUPblog and the VSI series every Friday and like Very Short Introductions on Facebook. Subscribe to Very Short Introductions articles on the OUPblog via email or RSS.

Subscribe to the OUPblog via email or RSS.

Subscribe to only mathematics articles on the OUPblog via email or RSS.

*Image credit: Diagram of Total Information Awareness system designed by the Information Awareness Office. Public domain via Wikimedia Commons*

The post Statistics and big data appeared first on OUPblog.

]]>Politically, socially, and culturally, the 1960s were tumultuous times. But tucked away amidst the folds of the Cold War, civil rights activism, anti-war demonstrations, the feminist movement, revolts of students and workers, flower power, sit-ins, Marxist and Maoist revolutions — almost unnoticed — a new science was born in university campuses across North America, Britain, Europe and even, albeit tentatively, certain non-Western parts of the world.

The post The genesis of computer science appeared first on OUPblog.

]]>

Politically, socially, and culturally, the 1960s were tumultuous times. But tucked away amidst the folds of the Cold War, civil rights activism, anti-war demonstrations, the feminist movement, revolts of students and workers, flower power, sit-ins, Marxist and Maoist revolutions — almost unnoticed — a new science was born in university campuses across North America, Britain, Europe, and even, albeit tentatively, certain non-Western parts of the world. This new science acquired a name of its own: *computer science* (or some variation thereof: ‘computing science’, ‘informatique’, ‘informatik’).

At the heart of this new science was the process by which symbols, representing information, could be automatically (or with minimal human intervention) transformed into other symbols (representing other kinds of information, or new information). This process was called, variously, *automatic computation*, *information processing*, or *symbol processing*. The agent of this process was the artifact named, generically, the *computer*.

The computer is an *automaton*. In the past, this word ‘automaton’ (coined in the 17th century) was used to mean an artifact which, largely driven by its own source of motive power, performs certain repetitive patterns of movement and action without any external influences. Often, these actions imitated those of humans and animals. Ingenious mechanical automata had been invented since antiquity, largely for the amusement of the wealthy, though some were of a more utilitarian nature (such as the water clock, said to be invented in the 1st century CE by the engineer/inventor Hero of Alexandria).

So mechanical automata that carry out physical actions of one sort or another form a venerable tradition. But the automatic electronic digital computer marked the birth of a whole new genus of automata, for this artifact was designed or intended to imitate human thinking; and, indeed, to extend or even replace humans in some of their highest cognitive capacities. Such was the power and scope of this artifact that it became the fount of a socio-technological revolution now commonly referred to as the Information Revolution, and of a brand new science, computer science.

But computer science is not a *natural* science. It is not of the same kind as, say, physics, chemistry, biology, or astronomy. The gazes of these sciences are directed toward the natural world, inorganic and organic. The domain of computer science is the artificial world, the world of made objects, of artifacts — in particular, *computational artifacts*. Computer science is a *science of the artificial*, to use a term coined by Nobel laureate polymath scientist Herbert Simon.

A fundamental difference between a natural science like physics and an artificial science such as computer science relates to the age-old philosophical distinction between *is* and *ought*. The natural scientist is concerned with the world *as it is*; she is not in the business of deliberately changing the natural world. Thus, the astronomer peering at the cosmos does not desire to change it but to understand it; the paleontologist examining rock layers in search of fossils is doing this to learn more about the history of life on earth, not to change the earth (or life) itself. For the natural scientist, understanding the natural world is an end in itself.

The scientist of the artificial also wishes to understand, not nature, but artifacts. However, that desire is a means to an end, for the scientist of the artificial ultimately wishes to *alter* the world in some respect. Thus the computer scientist wants to alter some aspect of the world by creating computational artifacts as improvements on existing ones, or by creating new computational artifacts that have never existed before. If the natural scientist is concerned with the world *as it is*, the computer scientist is preoccupied with the world as she thinks *it ought to be*. For computer scientists, as for other scientists of the artificial (such as engineering scientists), the domain comprises artifacts that are intended to serve some purpose. An astronomer does not ask what a particular galaxy or planet is *for*; it just *is*. A computer scientist, striving to understand a particular computational artifact, begins with the purpose for which it was created. Artifacts are imbued with purpose, reflecting the purposes or goals imagined for them by their human creators.

So how was this science of the artificial called computer science born? Where, when, and how did it begin? Who were its creators? What kinds of purposes drove the birth of this science? What were its seminal ideas? What makes it distinct from other, more venerable, sciences of the artificial? Was the genesis of computer science evolutionary or revolutionary? A ‘big bang’ or a ‘steady state’ birth? These are the kinds of questions that interest historians of science peering into the origins of what is one of the youngest artificial sciences of the 20th century.

Subrata Dasgupta is the Computer Science Trust Fund Eminent Scholar Chair in the School of Computing & Informatics at the University of Louisiana at Lafayette, where he is also a professor in the Department of History. Dasgupta has written fourteen books, most recently

It Began with Babbage: The Genesis of Computer Science.

Subscribe to the OUPblog via email or RSS.

Subscribe to only technology articles on the OUPblog via email or RSS.

*Image Credit: A reflection of a man typing on a laptop computer. Photo by Matthew Roth. CC-BY-SA-3.0 via Wikimedia Commons.*

The post The genesis of computer science appeared first on OUPblog.

]]>