The post A person-less variant of the Bernadete paradox appeared first on OUPblog.

Imagine that Alice is walking towards a point – call it *A* – and will continue walking past *A* unless something prevents her from progressing further.

There is also an infinite series of gods, which we shall call *G*_{1}, *G*_{2}, *G*_{3}, and so on. Each god in the series intends to erect a magical barrier preventing Alice from progressing further if Alice reaches a certain point (and each god will do nothing otherwise):

(1) *G*_{1} will erect a barrier at exactly ½ meter past *A* if Alice reaches that point.

(2) *G*_{2} will erect a barrier at exactly ¼ meter past *A* if Alice reaches that point.

(3) *G*_{3} will erect a barrier at exactly ⅛ meter past *A* if Alice reaches that point.

And so on.

Note that the possible barriers get arbitrarily close to *A*. Now, what happens when Alice approaches *A*?

Alice’s forward progress will be mysteriously halted at *A*, but no barriers will have been erected by any of the gods, and so there is no explanation for Alice’s inability to move forward. Proof: Imagine that Alice did travel past *A*. Then she would have had to go some finite distance past *A*. But, for any such distance, there is a god far enough along in the list who would have thrown up a barrier before Alice reached that point. So Alice can’t reach that point after all. Thus, Alice has to halt at *A*. But, since Alice doesn’t travel past *A*, none of the gods actually do anything.

Some responses to this paradox argue that the gods have individually consistent, but jointly inconsistent, intentions, and hence cannot all do what they promise to do. Other responses have suggested that the fusion of the individual intentions of the gods, or some similarly complex construction, is what blocks Alice’s path, even though no individual god actually erects a barrier. But it turns out that we can construct a version of the paradox that seems immune to both strategies.

Imagine that *A*, *B*, and *C* are points lying in a straight line (in that order), each exactly one meter from the next. A particle *p* leaves point *A* at exactly one second before midnight and begins travelling towards point *B* at exactly one meter per second. The particle *p* will pass through *B* (at exactly midnight) and continue on towards *C* unless something prevents it from progressing further.

There is also an infinite series of force-field generators, which we shall call *G*_{1}, *G*_{2}, *G*_{3}, and so on. Each force-field generator in the series will erect an impenetrable force field at a certain point between *B* and *C*, and at a certain time. In particular:

(1) *G*_{1} will generate a force-field at exactly ½ meter past *B* at ¼ second past midnight, and take the force-field down at exactly 1 second past midnight.

(2) *G*_{2} will generate a force-field at exactly ¼ meter past *B* at exactly ⅛ second past midnight, and take the force-field down at exactly ½ second past midnight.

(3) *G*_{3} will generate a force-field at exactly ⅛ meter past *B* at exactly 1/16 second past midnight, and take the force-field down at exactly ¼ second past midnight.

And so on. In short, for each natural number *n*:

(n) *G*_{n} will generate a force-field at exactly 1/2^{n} meter past *B* at exactly 1/2^{n+1} second past midnight, and take the force-field down at exactly 1/2^{n-1} second past midnight.

Now, what happens when *p* approaches *B*?

Particle *p*’s forward progress will be mysteriously halted at *B*, but *p* will not have impacted any of the barriers, and so there is no explanation for *p*’s inability to move forward. Proof: Imagine that particle *p* did travel to some point *x* past *B*. Let *n* be the largest whole number such that 1/2^{n} is less than *x*. Then *p* would have travelled at a constant speed from the point 1/2^{n+2} meter past *B* to the point 1/2^{n} meter past *B* during the period from 1/2^{n+2} second past midnight to 1/2^{n} second past midnight. But there is a force-field at 1/2^{n+1} meter past *B* for this entire duration, so *p* cannot move uniformly from 1/2^{n+2} meter past *B* to 1/2^{n} meter past *B* during this period. Thus, *p* is halted at *B*. But *p* does not make contact with any of the force-fields, since the distance between the *m*^{th} force-field and *p* (when it stops at *B*) is 1/2^{m} meters, and the *m*^{th} force-field does not appear until 1/2^{m+1} second after the particle halts at *B*.
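The schedule can be checked numerically. The sketch below (in Python; the helper name `blocking_field` is mine, purely for illustration) takes any candidate stopping point *x* past *B* and finds a force-field that is already up at the moment the particle, moving at one meter per second from *B* at midnight, would reach that field's position:

```python
from math import floor, log2

def blocking_field(x):
    """For a hypothetical stopping point x (meters past B, 0 < x <= 1),
    return the index m of a force-field the particle would have to cross
    while that field is up (particle speed: 1 m/s, passing B at t = 0)."""
    # largest whole number n with 1/2**n < x
    n = floor(-log2(x))
    if 2.0 ** -n >= x:  # guard against floating-point edge cases
        n += 1
    m = n + 1                      # index of the blocking field
    field_pos = 2.0 ** -m          # field sits 1/2^(n+1) meters past B
    up, down = 2.0 ** -(m + 1), 2.0 ** -(m - 1)  # field exists on [up, down]
    # at 1 m/s the particle reaches field_pos at time field_pos seconds,
    # which always falls strictly inside the field's lifetime
    assert up < field_pos < down
    return m

# any positive distance past B runs into some field
print([blocking_field(x) for x in (0.3, 0.1, 0.01)])  # → [3, 5, 8]
```

The point of the check is only that no candidate *x* escapes: however small *x* is, some field in the sequence is up while the particle would be crossing its position.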

Notice that since there are no gods (or anyone else) in this version of the puzzle, no solution relying on facts about intentions will apply here. More generally, unlike the original puzzle, in this set-up the force-fields are generated at the appropriate places and times regardless of how the particle behaves – there are no instructions or outcomes that are dependent upon the particle’s behavior. In addition, arguing that, even though no individual force-field stops the particle, the fusion or union of the force-fields does stop the particle will be tricky, since although at any point during the first ½ second after midnight two different force-fields will exist, there is no time at which all of the force-fields exist.

Thanks go to the students in my Fall 2016 Paradoxes and Infinity course for the inspiration for this puzzle!

*Featured image credit: Photo by Nicolas Raymond, CC BY 2.0 via Flickr.*


The post Is elementary school mathematics “real” mathematics? appeared first on OUPblog.

There is little doubt that elementary students should know the multiplication tables, be able to do simple calculations mentally, develop fluency in using algorithms to carry out more complex calculations, and so on. Indeed, these topics are fundamental to students’ future learning of mathematics and important for everyday life. Yet, is elementary students’ engagement with these topics in itself engagement with “real” mathematics?

I suggest that classroom discourse in an elementary school classroom where students engage with “real” mathematics should satisfy two major considerations. First, it should be meaningful and important to the students. Elementary students’ engagement with the topics I mentioned earlier can offer a productive context in which to satisfy this first consideration, especially if students’ work is characterized by an emphasis not only on procedural fluency but also on conceptual understanding.

Second, the classroom discourse in an elementary school classroom where students engage with “real” mathematics should be a rudimentary but genuine reflection of the broader mathematical practice. One might interpret the second consideration as asking us to treat elementary students as little mathematicians. That would be a misinterpretation. The point is that some aspects of mathematicians’ work that are fundamental to what it means to do mathematics in the discipline should also be represented, in pedagogically and developmentally appropriate forms, in elementary students’ engagement with the subject matter.

In its typical form, classroom discourse in elementary school classrooms fails to satisfy the second consideration. A main reason for this is the limited attention it pays to issues concerning the epistemic basis of mathematics, including what counts as evidence in mathematics and how new mathematical knowledge is being validated and accepted. The notion of *proof* lies at the heart of these epistemic issues and is a defining feature of authentic mathematical work. Yet the notion of proof has a marginal place (if any at all) in many elementary school classrooms internationally, thus jeopardizing students’ opportunities to engage with “real” mathematics.

Consider, for example, a class of eight–nine-year-olds who have been writing number sentences for the number ten and have begun to develop the intuitive understanding that there are infinitely many number sentences for ten when subtracting two whole numbers (e.g., 15-5=10). In most elementary school classrooms the activity would finish here, possibly with the teacher ratifying students’ intuitive understanding, thus giving it the status of public knowledge in the classroom. However, in a classroom that aspires to engage students with “real” mathematics, new mathematical knowledge isn’t established by appeal to the authority of the teacher, but rather on the basis of the logical structure of mathematics. Thus the teacher of this classroom may help the students think about how they can prove their intuitive understanding.

Given appropriate instructional support, students of this age can prove that there are infinitely many number sentences for ten when subtracting two whole numbers. For example, a student called Andy in a class of eight–nine-year-olds I studied for my research generated an argument along the following lines:

To generate infinitely many subtraction number sentences for ten, you can start with 11-1=10. For each new number sentence you can add one to both terms of the previous subtraction sentence. This looks like this: 12-2=10, 13-3=10, 14-4=10, 15-5=10, and so on. This can go on forever and will maintain a constant difference of ten.

Andy’s argument used mathematically accepted ways of reasoning, which were also accessible to his peers, to establish convincingly the truth of an intuitive understanding. This argument illustrates what a proof can look like in the context of elementary school mathematics. The process of developing this argument also contributed a powerful element of mathematical sense-making to Andy’s work with number sentences for ten: As he carried out calculations to write the various number sentences, he thought deeply about key arithmetical properties (e.g., how to maintain a constant difference) and he put everything together in a coherent line of reasoning. Thus an elevated status of proof in elementary students’ work can play a pivotal role in students’ meaningful engagement with mathematics. This presents a connection with the first consideration I discussed earlier.
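Andy's invariant (adding one to both terms leaves the difference unchanged) can be made concrete in a few lines. Here is an illustrative sketch in Python; the function name is mine, not part of the classroom episode:

```python
def subtraction_sentences(target, count):
    """Yield `count` subtraction sentences for `target`, starting from
    (target + 1) - 1 and adding one to both terms each step; since
    (a + 1) - (b + 1) == a - b, the difference stays constant forever."""
    a, b = target + 1, 1
    for _ in range(count):
        yield f"{a}-{b}={target}"
        a, b = a + 1, b + 1

print(list(subtraction_sentences(10, 4)))
# → ['11-1=10', '12-2=10', '13-3=10', '14-4=10']
```

The loop could run without bound; the `count` parameter just truncates the infinite family Andy described.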

To conclude, elementary school mathematics as reflected in typical classroom work internationally falls short of being “real.” Yet it has the potential to become “real” if the learning experiences currently offered to elementary students are transformed. A major part of this transformation needs to concern the epistemic basis of mathematics, with more opportunities offered for students to engage with proof in the context of mathematics as a sense-making activity. The teacher has an important role to play as the representative of the discipline of mathematics in the classroom and as the person with the responsibility to induct students into mathematically acceptable ways of reasoning and standards of evidence. This is a complex role that cannot be fully understood without a strong research basis about the kind of teaching practices and curricular materials that can facilitate elementary students’ access to “real” mathematics.

*Featured image credit: Math by Pixapopz. Public domain via Pixabay.*


The post Measuring up appeared first on OUPblog.

My interest was further aroused by complications arising from the interactions between statistics and the results of different kinds of measurement. Many textbooks say it’s meaningless to calculate the arithmetic mean of ordinal measurements — those where the numbers reflect only the order of the objects being measured — and yet a glance at scientific and medical practice shows that this is commonplace. Clearly, although measurement was ubiquitous throughout the entire world (or, as I have put it elsewhere, we view the world through the lens of measurement), there was more to it than met the eye. Things were not always as simple as they might seem. Indeed, it would not be stretching things to say that occasionally, consideration of measurement issues revealed apparent rips in the fabric of reality.

A simple example arises from the *Daily Telegraph* report of 8 February 1989, which said that “Temperatures in London were still three times the February average at 55 °F (13 °C) yesterday”, prompting the natural question: what is the average February temperature? The answer is obvious — we just divide the temperature by three. So the February average is a third of 55 °F, equal to 18⅓ °F. Alternatively, it is a third of 13 °C, equal to 4⅓ °C. But this is very odd, because these two results are different. Indeed, the first is below freezing, while the second is above. In fact, in this example a little thought shows where things have gone wrong, and which average temperature is right. But things are not always so straightforward, and occasionally deep thought about the nature of measurement is needed to work out what is going on. This reveals that there are different kinds of measurement. At one extreme we have so-called representational measurement, and at the other pragmatic measurement, with most measurements being a mixture of the two extremes.
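The discrepancy is easy to reproduce. A minimal sketch in Python (the helper name `f_to_c` is mine): dividing by three in each scale and then converting gives two incompatible "averages", because Fahrenheit and Celsius are interval scales with arbitrary zero points, so ratios of their values are not meaningful:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# "A third of the temperature", computed in each scale:
avg_f = 55 / 3   # 18.33 °F, below freezing
avg_c = 13 / 3   # 4.33 °C, above freezing

# Converting the Fahrenheit answer shows the two disagree:
print(round(f_to_c(avg_f), 2), "°C vs", round(avg_c, 2), "°C")
# → -7.59 °C vs 4.33 °C
```

Ratios only make sense on a scale with a true zero (such as kelvin), which is why the two computations cannot both be right.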

The aim of representational measurement is to construct a simplified model of some aspect of the world. In particular, we assign numbers to objects so that the relations between the numbers correspond to the relations between the objects. This rock extends the spring further than that, so we say it is heavier, and assign it a larger weight number. These two rocks together stretch the spring the same distance as a third one alone, so we give them numbers which add up to the number we give the third rock. And so on.

Representational measurement is essentially based on certain symmetries in the mapping from the world to the numbers, and understanding of these symmetries can be very revealing about properties of the world — about the way the world works. A familiar example is through the use of dimensional analysis in physics, engineering, and elsewhere. In contrast, a provocative way of describing pragmatic measurement is that “we don’t know what we are talking about.” What this really means is that we must define the characteristic we aim to measure before we can measure it. Or, more precisely, we define it at the same time as we measure it. The definition is implicit in the measurement procedure, and it is only through the measurement procedure that we know precisely what it is we are talking about. At first this strikes some people as strange. But take the economic example of inflation rate. Inflation can be defined in various different ways. None is “right.” Rather, it depends what properties you want the measurement to have, and what questions you want to answer. It depends on what you want to use the concept and the measured numbers for.

The bottom line to all this is that decisions and understanding are (or at least should be!) based on evidence. Evidence comes from data. And data come from measurements. Given how central measurement is to our understanding of the universe about us, to education, to government, to medicine, to technology, and so on, it is entirely fitting that it should be the topic of the 500^{th} volume in the *Very Short Introduction *series.

*Featured image credit: Scale kitchen measure by Unsplash. CC0 Public Domain via Pixabay.*


The post Very short facts about the Very Short Introductions appeared first on OUPblog.

- VSIs have been translated into 50 languages, including Gujarati (an Indo-Aryan language) and Belarusian. Arabic is the most popular translated language.
- The oldest VSI author at the time of publication was Stanley Wells, at age 85, author of *William Shakespeare: A Very Short Introduction*.
- The first VSI, *Classics*, was published 21 years ago, in 1995, and remains in its first edition.
- The highest-selling VSI is *Globalization*, which will soon be in its fourth edition! When it was first proposed, people were worried it might not be a success.
- Someone once wrote in suggesting we needed a VSI to Olivia Newton-John. Other suggestions have included a very short introduction to coconuts and a very short introduction to Harry Potter.
- One VSI author had a tie made to match his jacket cover. Unfortunately, his cover then needed to be flipped around, so his tie is now upside down.
- There are 84 VSI titles starting with “The”.
- Discounting the word “The”, the most common initial letter of a VSI title is ‘A’ (55 titles), followed closely by ‘C’ and ‘M’ (52 titles each). Between them the VSI titles cover every letter of the alphabet, with the letter ‘e’ appearing over 600 times.

So, where’s the gap in your knowledge?

*Featured image credit: Very Short Introductions © Jack Campbell-Smith, for Oxford University Press.*


The post Just because all philosophers are on Twitter… appeared first on OUPblog.

It is not all bad news of course. The expansion and ready availability of communication technologies has meant that it is far easier for serious ideas to be tested and refined, far easier to develop diverse communities of scholarship, and far easier for new discoveries, theories and data to permeate beyond ivory towers. A modern counterpoint is that it is also far easier to spread misleading and self-serving theories, far easier to spread messages of hate and violence, and far easier for discourse to polarise as, with so many options available, people gravitate towards those sources which reinforce and intensify existing prejudices.

This explosion of available information and opinions also presents a challenge to traditional notions of education and citizenry. There may have been a time when the purpose of education was primarily to create an informed citizenry – to give them the relevant information – but that time is certainly not now. Now, information is more freely available and a far more important skill is the ability to independently discern reliable from unreliable sources, fact from fiction, genuine authority from charlatanism, feline ecology from lolcats. Where once scholarship meant perseverance and a dedication to tracking down otherwise inaccessible information, an increasingly vital skill in modern scholarship is a well-tuned bullshit detector.

A useful distinction to bear in mind here is between message and medium. (Philosophers love distinctions!) Each has its own hype cycle, and they are not always in sync. At the top of each cycle is the peak of inflated expectations. It is at this point in the cycle of new media technologies that we hear grand transformational claims, such as the view that virtual reality will end inequality, that the internet will kill traditional publishers and bookshops, that social networking will scupper academic peer-review, or that Massive Open Online Courses (MOOCs) will turn university campuses into ghost towns. After a tough period of disillusionment when initial enthusiasts and investors come to terms with the failure of their over-inflated expectations, the cycle reaches a plateau of productivity where the new medium is embraced by an increasingly significant portion of the population who see genuine usefulness beyond the hype.

Though the cycle repeats with each new medium – from telephones to Twitter – it is not futile. For what is gained through each iteration is a deeper understanding of the phenomenon which the technology was due to replace. So, for example, we learn something about the true (and changing) value of publishers, bookshops, peer-review and universities by understanding that they cannot be wholly replaced by new technologies. A strong theme running through these particular values is the notion of a discerning eye. With so many pieces of information, opinions and lolcats out there, we would simply be lost if we did not have some way of filtering reliable from unreliable research, scientific from wishful thinking, well-reasoned interpretations from self-serving propaganda.

This is not to say, of course, that the best way to navigate modern media seas is by blind deference to authority. All authorities are fallible (with the notable exception of OUP, of course). Far more important is the ability to critically evaluate pedigree for oneself. This is where universities can come in. My own engagement with MOOCs (through the Open University’s FutureLearn platform) has taught me that while large online courses are fantastic at bringing together a diverse range of students, they work best when those students are encouraged to engage critically with the ideas and experience they and others bring to the community. Inculcating and refining these skills is something that smaller scale teaching and face-to-face education are, in my experience, uniquely placed to do. So while everyone being on Twitter might not mean that everyone has interesting things to say, the resulting flood of information and opinion does mean that educators still have interesting things to do.

*Featured image: Mobile Phone by geralt. Public domain via Pixabay.*


The post Teaching teamwork appeared first on OUPblog.

I think we can improve undergraduate and graduate students’ educational experiences by giving them the benefit of working in teams. This can be implemented in short-term (two-hour to two-week) or longer-term (2–12 week) projects. I believe that working on a larger project with 2-4 other students, for at least 15-35% of their coursework in several courses, would build essential professional and personal skills. I agree that it is easier to plan and execute team projects in smaller graduate courses than larger undergraduate courses.

Unfortunately, many faculty members were trained through lecture, individual homework, and strictly solitary testing. They have weak teamwork skills and are little inclined to teach teamwork. In fact, they have many fears that increase their resistance. Some believe that teamwork takes extra effort for faculty or that teams naturally lead to one person doing most of the work.

Teamwork projects may require fresh thinking by faculty members, but it may be easier to supervise and grade ten teams of four students, than to mentor and grade 40 individuals. Moreover, well-designed teamwork projects could lead to published papers or start-up companies in which faculty are included as co-authors or advisers. In my best semester, five of the seven teams in my graduate course on information visualization produced a final report that led to a publication in a refereed journal or conference.

Another possible payoff is that teamwork courses may create more engaged students with higher student retention rates. Of course, teams can run into difficulties and conflicts among students. These are teachable moments when students can learn lessons that will help them in their professional and personal lives. These difficulties and conflicts may be more visible than individual students failing or dropping out, but I think they are a preferable alternative.

So if faculty members are ready to move towards teaching with team projects, there are some key decisions to be made. Sometimes two-person teams are natural, but larger teams of 3-5 allow more ambitious projects, while increasing the complexity. I’ve also run projects where the entire class acts as a team to produce a project such as the Encyclopedia of Virtual Environments (EVE), in which the 20 students wrote about 100 web-based articles defining the topic. Colleagues have told me about teamwork projects in which their French students created an online newspaper for French alumni describing campus sports events, or students built a timeline of the European philosophical movements leading up to the framing of the US Constitution.

**Team formation:** I have moved to assigning team membership (rather than allow self-formation) using a randomization strategy, which is recommended in the literature. This helps ensure diversity among the team members, speeds the process of getting teams started, and eliminates the problem of some students having a hard time finding a team to join.

**Project design (student-driven):** Well-designed team projects take on more ambitious efforts, giving students the chance to learn how to deal with a larger goal. I prefer student designed projects with an outside mentor, where the goal is to produce an inspirational pilot project that benefits someone outside the classroom and survives beyond the semester. I’ve had student teams work on software to schedule the campus bus routes or support a local organization that brings hundreds of foreign students for summer visits in people’s homes. Other teams helped a marketing company to assess consumer behavior in a nearby shopping mall or an internet provider to develop a network security monitor. Two teams proposed novel visualizations for the monthly jobs report of the US Bureau of Labor Statistics, which they presented to the Commissioner and her staff. I give a single grade to the team, but do require that their report includes a credits section in which the role of each person is described.

**Project design (faculty-driven):** Another approach is for the teacher to design the team projects, which might be the same ones for every team. With a four-person team, distinct roles can be assigned to each person, so it becomes easier to grade students individually. Just getting students to talk together, resolve differences, agree to schedules, etc. gives them valuable skills.

**Milestones:** Especially in longer projects, there should be deliverables every week, e.g. initial proposal, first designs, test cases, mid-course report, preliminary report, and final report.

**Deliverables:** With teams there can be multiple deliverables, e.g. in my graduate information visualization course, students produce a full conference paper, 3-5 minute YouTube video, working demo, and slide deck & presentation.

**Teamwork strategies:** For short-term teams (a few weeks to a semester), simple strategies are probably best. I use: (1) “Something small soon,” which asks students to make small efforts that validate concepts before committing greater energy and (2) “Who does what by when,” which clarifies responsibilities on an hourly basis, such as “If Sam and I do the draft by 6pm Tuesday, will Jose and Marie give us feedback by noon on Wednesday?” Teamwork does not require any meetings at all; it is a management strategy to coordinate work among team members.

**Critiques and revisions:** I ask students to post their preliminary reports on the class’s shared website two weeks before the end of the semester. Then students sign up to read and critique one of the reports, which they send to me and the report authors. They write one paragraph about what they learned and liked, then as much as they can by way of constructive suggestions: improvements to the report’s overall structure, proposed references and improved figures, and grammar and spelling fixes. When students realize that their work will be read by other students they are likely to be more careful. When students read another team’s project report, they reflect on their own project report, possibly seeing ways to improve it. I grade the critiques, which can count for 3-6% of their final grade. My goal is to help every team to improve the quality of their work. Sometimes the process of preparing their preliminary reports early and then revising does much to improve quality.

**Concerns:** I know that some faculty members worry that one person in a team will do the majority of the work, but if projects are ambitious enough then that possibility is reduced. Grading remains an issue that each faculty member has to decide on. I find that having students include a credits box in their final report helps, but other instructors require peer rating/reporting for team members.

In summary, anything novel takes some thinking, but embracing team projects could substantially improve education programs, engage more marginal students, and improve student retention rates. Learning to use teamwork tools such as email, videoconferencing, and shared documents provides students with valuable skills. Working in teams can be fun for students and satisfying for teachers.

*Featured image credit: Harvard Business School classroom by HBS 1908. CC BY-SA 3.0 via Wikimedia Commons.*


The post What is combinatorics? appeared first on OUPblog.

- How many possible sudoku puzzles are there?
- Do 37 Londoners exist with the same number of hairs on their head?
- In a lottery where 6 balls are selected from 49, how often do two winning balls have consecutive numbers?
- In how many ways can we give change for £1 using only 10p, 20p, and 50p pieces?
- Is there a systematic way of escaping from a maze?
- How many ways are there of rearranging the letters in the word “ABRACADABRA”?
- Can we construct a floor tiling from squares and regular hexagons?
- In a random group of 23 people, what is the chance that two have the same birthday?
- In chess, can a knight visit all the 64 squares of an 8 × 8 chessboard by knight’s moves and return to its starting point?
- If a number of letters are put at random into envelopes, what is the chance that no letter ends up in the right envelope?

What do you notice about these problems?

First of all, unlike many mathematical problems that involve much abstract and technical language, they’re all easy to understand – even though some of them turn out to be frustratingly difficult to solve. This is one of the main delights of the subject.

Secondly, although these problems may appear diverse and unrelated, they mainly involve selecting, arranging, and counting objects of various types. In particular, many of them take the form: Does such-and-such exist? If so, how can we construct it, and how many of them are there? And which one is the ‘best’?

The subject of combinatorial analysis or combinatorics (pronounced *com-bin-a-tor-ics*) is concerned with such questions. We may loosely describe it as the branch of mathematics concerned with selecting, arranging, constructing, classifying, and counting or listing things.
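Two of the opening questions already yield to direct counting. A minimal sketch in Python (the function names are mine, purely for illustration): the rearrangements of “ABRACADABRA” are counted by a multinomial coefficient, and the birthday question falls to multiplying probabilities of distinctness:

```python
from collections import Counter
from math import factorial, prod

def arrangements(word):
    """Distinct rearrangements of a word: the multinomial coefficient
    n! / (m1! * m2! * ...), where the m's are letter multiplicities."""
    return factorial(len(word)) // prod(
        factorial(m) for m in Counter(word).values())

def birthday_clash(n, days=365):
    """Probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days
    return 1 - p_all_distinct

print(arrangements("ABRACADABRA"))   # → 83160
print(round(birthday_clash(23), 3))  # → 0.507
```

For “ABRACADABRA” the letter counts are A: 5, B: 2, R: 2, C: 1, D: 1, so the count is 11!/(5!·2!·2!) = 83,160; and with 23 people the chance of a shared birthday is just over a half.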

To clarify our ideas, let’s see how various sources define combinatorics.

Oxford Dictionaries describe it briefly as:

“The branch of mathematics dealing with combinations of objects belonging to a finite set in accordance with certain constraints, such as those of graph theory.”

While the Collins dictionary presents it as:

“the branch of mathematics concerned with the theory of enumeration, or combinations and permutations, in order to solve problems about the possibility of constructing arrangements of objects which satisfy specified conditions.”

Wikipedia introduces a new idea, that combinatorics is:

“a branch of mathematics concerning the study of finite or countable discrete structures.”

So the subject involves finite sets or discrete elements that proceed in separate steps (such as the numbers 1, 2, 3 …), rather than continuous systems such as the totality of numbers (including π, √2, etc.) or ideas of gradual change such as are found in the calculus. The Encyclopaedia Britannica extends this distinction by defining combinatorics as:

“the field of mathematics concerned with problems of selection, arrangement, and operation within a finite or discrete system … One of the basic problems of combinatorics is to determine the number of possible configurations (e.g., graphs, designs, arrays) of a given type.”

Finally, Wolfram Research’s *MathWorld* presents it slightly differently as:

“the branch of mathematics studying the enumeration, combination, and permutation of sets of elements and the mathematical relations that characterize their properties,”

adding that:

“Mathematicians sometimes use the term ‘combinatorics’ to refer to a larger subset of discrete mathematics that includes graph theory. In that case, what is commonly called combinatorics is then referred to as ‘enumeration’.”

The subject of combinatorics can be dated back some 3000 years to ancient China and India. For many years, especially in the Middle Ages and the Renaissance, it consisted mainly of problems involving the permutations and combinations of certain objects. Indeed, one of the earliest works to introduce the word ‘combinatorial’ was a *Dissertation on the combinatorial art* by the 20-year-old Gottfried Wilhelm Leibniz in 1666. This work discussed permutations and combinations, even claiming on the front cover to ‘prove the existence of God with complete mathematical certainty’.

Over the succeeding centuries the range of combinatorial activity broadened greatly. Many new types of problem came under its umbrella, while combinatorial techniques were gradually developed for solving them. In particular, combinatorics now includes a wide range of topics, such as the geometry of tilings and polyhedra, the theory of graphs, magic squares and Latin squares, block designs and finite projective planes, and partitions of numbers.

Much of combinatorics originated in recreational pastimes, as illustrated by such well-known puzzles as the Königsberg bridges problem, the four-colour map problem, the Tower of Hanoi, the birthday paradox, and Fibonacci’s ‘rabbits’ problem. But in recent years the subject has developed in depth and variety and has increasingly become a part of mainstream mathematics. Prestigious mathematical awards such as the Fields Medal and the Abel Prize have been given for ground-breaking contributions to the subject, while a number of spectacular combinatorial advances have been reported in the national and international media.

Undoubtedly part of the reason for the subject’s recent importance has arisen from the growth of computer science and the increasing use of algorithmic methods for solving real-world practical problems. These have led to combinatorial applications in a wide range of subject areas, both within and outside mathematics, including network analysis, coding theory, probability, virology, experimental design, scheduling, and operations research.

*Featured image credit: ‘Sudoku’ by Gellinger. CC0 public domain via Pixabay.*

The post What is combinatorics? appeared first on OUPblog.

So, what is crystallography? Put simply, it is the study of crystals. Now, let’s be careful here. I am not talking about all those silly websites advertising ways in which crystals act as magical healing agents, with their chakras, auras, and energy levels. No, this is a serious scientific subject, with some 26 Nobel Prizes to its credit. And yet, despite this, it remains a largely hidden subject, at least in the public mind.

Crystallography as a science has a long and venerable history going back to the 17^{th} century when the sheer beauty of the symmetry of crystals suggested an underlying order of some kind. For the next three centuries, our knowledge of what crystals actually were was based on conjecture and argument, with a few simple experiments thrown in. From their symmetry and shapes it was argued that crystals must consist of ordered arrangements of minute particles: today we know them as atoms and molecules.

But it was the discovery of X-rays in 1895 that changed all that, for a few years later, in 1912 in Germany, Max Laue, Walter Friedrich, and Paul Knipping showed that an X-ray beam incident on a crystal was scattered to form a regular pattern of spots on a film (we call this diffraction). Thus it was proved that X-rays consisted of waves, and furthermore this gave direct evidence of the underlying order of atoms in the crystal. Hence Nobel Prize number 1 went to Laue in 1914. However, it was William Lawrence Bragg (WLB) who in 1912, at the age of 22, showed how the observed diffraction pattern could be used to determine the positions of atoms in the crystal, thus launching a completely new scientific discipline: X-ray crystallography. Working with his father, William Henry Bragg (WHB), he quickly determined the crystal structures of several materials, starting with those of common salt and diamond. Father and son shared Nobel Prize number 2 in 1915. The Braggs went on to create world-class research groups working on a huge range of solid materials and, incidentally, were active in encouraging women into science.

Since then X-ray crystallography, which today is used throughout the world, has been the method of choice for determining the crystal structures of organic and inorganic solids, pharmaceuticals, biological substances such as proteins and viruses, and indeed all kinds of solid substances. Crick and Watson’s determination of the double helix of DNA is probably the most well-known example of the use of crystallography, incidentally a discovery made in William Lawrence Bragg’s laboratory in Cambridge. Had it not been for X-ray (and later neutron and electron) crystallography we probably would not have today much of an electronics industry, computer technology, new pharmaceuticals, new materials of all sorts, nor the modern field of genetics. The Braggs left a huge legacy which today continues to make astonishing progress.

*Featured image credit: Protein Crystals Use in XRay Crystallography by CSIRO. CC BY 3.0 via Wikimedia Commons *

The post A brief history of crystallography appeared first on OUPblog.

For many years, experts in neurology, computer science, and engineering have worked toward developing algorithms to predict a seizure before it occurs. If an algorithm could detect subtle changes in the electrical activity of a person’s brain (measured by electroencephalography, or EEG) before a seizure occurs, people with epilepsy could take medications only when needed, and possibly reclaim some of those daily activities many of us take for granted. But algorithm development and testing require substantial quantities of suitable data, and progress has been slow. Many early studies developed and tested algorithms on relatively short intracranial EEG data segments from patients with epilepsy undergoing intracranial monitoring before surgery. There are a number of problems with this. First, patients undergoing pre-surgical monitoring for epilepsy typically have their medications reduced to encourage seizures to occur, causing a progressive decrease in medication blood levels, a change that has been shown to affect the normal baseline pattern in a patient’s EEG. Second, hospital stays for pre-surgical monitoring by necessity rarely last more than two weeks, providing a very limited amount of data for any single patient. These short data segments with changing baseline EEG characteristics are particularly problematic when scientists attempt to measure an algorithm’s false positive rate, the number of false alarms that a seizure forecasting algorithm might raise. Development of robust, reliable seizure prediction algorithms requires data on many seizures and many periods of baseline, non-seizure EEG, with enough time between the seizures to allow the brain to recover. In addition, researchers are often reluctant to share algorithm data and programs; privacy concerns and the high cost of sharing large data sets make testing and comparison very difficult.

In 2013 a group of physicians and scientists from Melbourne, Australia, reported a successful trial of an implanted device capable of measuring EEG from intracranial electrode strips, and telemetering the EEG data to a small external device about the size of a smart phone that could run seizure forecasting algorithms and provide warnings of impending seizures. The device used a proprietary seizure forecasting algorithm that performed well enough to be helpful for some patients in the trial, raising hopes that seizure forecasting might soon become clinically possible.

We recently made an effort to use Kaggle.com — a website that runs data science competitions to develop algorithms to predict everything from insurance rates to the Higgs Boson — to develop new algorithms for seizure forecasting. Our competition used intracranial EEG data from the same device in the Australian trial (implanted in eight dogs with naturally occurring epilepsy) as well as data from two human patients undergoing intracranial monitoring. In hope of winning $15,000 in prize money, plus bragging rights among elite data science circles, hundreds of algorithm developers, most with little or no experience with epilepsy or EEG, worked countless hours to build, test, and rebuild algorithms for seizure forecasting, and tested their algorithms on nearly 350 seizures recorded over more than 1,500 days. After four months, over half of these “crowdsourced” algorithms performed better than random predictions, and the winning algorithms accurately predicted over 70% of seizures with a 25% false positive rate. The data are available for researchers to continue developing new algorithms for predicting seizures, and can serve as a benchmark for new algorithms to be compared directly to one another and to the algorithms developed in this competition. The best performing algorithms in the competition used a mixture of conventional and complex approaches drawn from physics, engineering, and computer science, sometimes in unorthodox ways that proved to be surprisingly effective. The winning teams also made the source code for their algorithms publicly available, providing a benchmark and starting point for future algorithm developers.
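To make the quoted figures concrete, here is a minimal sketch (our own toy code, not the competition’s actual scoring procedure) of how sensitivity and false positive rate can be computed for a binary seizure forecast:

```python
def forecast_metrics(predicted, actual):
    """Sensitivity and false positive rate for paired binary labels (1 = seizure)."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    tn = sum(1 for p, a in zip(predicted, actual) if not p and not a)
    sensitivity = tp / (tp + fn)            # fraction of seizures correctly predicted
    false_positive_rate = fp / (fp + tn)    # fraction of non-seizure periods falsely flagged
    return sensitivity, false_positive_rate

# Toy data: 4 seizure periods followed by 8 non-seizure periods.
predicted = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(forecast_metrics(predicted, actual))  # (0.75, 0.25)
```

A real evaluation, like the one in the competition, must also account for how far in advance each warning is raised and for the burden that prolonged false alarms place on patients.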

While we applaud the talented algorithm scientists who took home the prize money, we hope the real winners of the contest will be our patients.

*Featured image: Sky cloudy. Uploaded by Carla Nunziata. CC-BY-SA-3.0 via Wikimedia Commons*

The post Today’s Forecast: Cloudy with a chance of seizures appeared first on OUPblog.

The most recent time this happened, it reminded me of a startling academic paper, first published in 1978, in the *New England Journal of Medicine.* Dr Ward Casscells and colleagues reported something very disturbing: that most doctors can’t calculate risks correctly.

The question they posed was this. Imagine a disease (let’s call it Gobble’s disease*), which has a prevalence of 1 in 1000 in your population. There is a test for Gobble’s disease, and you know it has a false positive rate of 5%. You meet a patient in your clinic, who has tested positive. What is the probability that the patient has Gobble’s disease?

A member of the public could be forgiven for thinking the answer is 100%. After all, medical tests are always reliable, right? Someone a bit savvier, say a doctor, might look at that 5% false positive rate, and decide the answer is 95%. That’s what most of the respondents in Casscells’ study said, and he offered his question to senior doctors, junior doctors, and medical students. (And, if you had offered it to me as a medical student or a junior doctor, that’s almost certainly what I would have said – even though, in all fairness, my medical school tried hard to teach us the truth).

But they would all be hopelessly wrong. A statistician would say this: suppose you test the population for Gobble’s disease; a 5% false positive rate means that 5% of your population will test positive for Gobble’s disease, *even when they don’t have it.* 5% of your population is 50 per 1000. But we know that only 1 in 1000 of the people in your population has Gobble’s disease; therefore your test will be wrong for those 50 people, and right for only that 1 person. So the probability of your patient – who tested positive for Gobble’s disease – actually having Gobble’s disease is only about 1 in 50, or 2%.

This result is so unexpected, so counter-intuitive, that it’s worth looking at more closely.

All medical tests have two basic properties. These are known as *sensitivity* and *specificity*. The sensitivity is the probability that the patient will test positive for the disease, if they actually have it. Our fictitious Gobble’s test is, we assume, 100% sensitive: it will always detect someone with Gobble’s disease. In practice, few medical tests approach 100% sensitivity.

The specificity is the probability that the patient will test negative for the disease if they haven’t got the disease. Our Gobble’s test is 95% specific: if the patient doesn’t have Gobble’s disease, there is a 95% likelihood that they will test negative for the disease. That sounds great, until we remember that there’s a 5% likelihood they will test positive, which is the cause of all our problems. Sadly, in reality, few medical tests approach 95% specificity.
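The arithmetic above generalizes neatly via Bayes’ theorem. Here is a minimal sketch (the function name and figures are ours, for illustration only):

```python
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Probability that a positive test indicates real disease (Bayes' theorem)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Gobble's disease: prevalence 1 in 1000, assumed 100% sensitivity, 5% false positive rate.
print(positive_predictive_value(0.001, 1.0, 0.05))  # about 0.0196, i.e. roughly 2%
```

Re-running the same function with a higher prevalence (say 0.1 instead of 0.001) shows why restricting the test to a high-risk group makes a positive result far more informative.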

In reality, sensitivity and specificity are two sides of the same coin. One cannot improve the sensitivity of any test without including more false positives (which might, as we can see, drown out the true positives we are actually interested in). An extreme example is to make every test a positive result: you would never miss anyone with the disease, but there would be so many false positives that your test would be useless.

The reason our test for Gobble’s disease is so unhelpful is that Gobble’s disease is rare. The test becomes much more valuable if Gobble’s disease is more common. Therefore to make it more useful, we shouldn’t apply the test indiscriminately, but we should try to narrow down our focus to people with risk factors. If Gobble’s disease is rare in the young but gets more common in the elderly (as many cancers do), then we can improve the usefulness of the test by applying it only to the elderly.

The other way in which we can improve the usefulness of our test is to combine it with other tests. Say our test is quick and safe. We can apply it easily to a large number of people. But to those who test positive, we can then go on and apply a different test, perhaps one which is more invasive or more expensive. Patients who test positive for both are much more likely to actually *have* Gobble’s disease.

That security guard, having a quick look through my bag, is applying a diagnostic test: do I have a dangerous item in there, or not? Unfortunately his test isn’t very sensitive, since he might easily miss something down at the bottom. And, since most people going to the concert are there to enjoy the music, the prevalence of miscreants is low. Therefore the simple mathematics of the test tells us it is likely to be worthless. The effectiveness of the test is multiplied by applying a different test: an X-ray scan of my bag, or even of my body. These are much more expensive than a quick visual check, but airports, understandably, are prepared to foot the bill.

There are powerful lessons to be learned here. The first is that applying a single test to a whole population is likely to be very unhelpful, especially if what you are looking for is rare. The second is that medical tests seldom give a clear-cut answer; instead they lengthen or shorten the odds of a particular diagnosis being true. Finally, quite a lot of other tests (such as concert security) are subject to exactly the same mathematical rules as medical tests. A thorough understanding of the mathematics of probability will help no end in this endeavour. In the words of William Osler (often described as the father of modern medicine), “Medicine is a science of uncertainty and an art of probability”!

*‘Gobble’s Disease’ is an invented illness from the *Oxford Handbook of Clinical Medicine*.

*Featured Image Credit: ‘Dice, Die, Probability’ by Jody Lehigh. CC0 Public Domain via Pixabay.*

The post Doing it with sensitivity appeared first on OUPblog.

However, there is growing interest in design thinking, a research method which encourages practitioners to reformulate goals, question requirements, empathize with users, consider divergent solutions, and validate designs with real-world interventions. Design thinking promotes playful exploration and recognizes the profound influence that diverse contexts have on preferred solutions. Advocates believe that they are dealing with “wicked problems” situated in the real world, in which controlled experiments are of dubious value.

The design thinking community generally respects science, but resists pressures to be “scientized”, which they equate with relying on controlled laboratory experiments, reductionist approaches, traditional thinking, and toy problems. Similarly, many in the scientific community will grant that design thinking has benefits in coming up with better toothbrushes or attractive smartphones, but they see little relevance to research work that leads to discoveries.

The tension may be growing since design thinking is on the rise as a business necessity and as part of university education. Institutions as diverse as the Royal College of Art, Goldsmiths at the University of London, Stanford University’s D-School, and Singapore University of Technology and Design are leading a rapidly growing movement that is eagerly supported by business. Design thinking promoters see it as a new way of thinking about serious problems such as healthcare delivery, community safety, environmental preservation, and energy conservation.

The rising prominence of design thinking in public discourse is revealed by two graphs (see Figures 1 and 2).

Both sources appear to show that, after 1975, design overtook science and engineering in prominence.

Scientists and engineers might dismiss this data and the idea that design thinking could challenge the scientific method. They believe that controlled experiments with statistical tests for significant differences are the “gold standard” for collecting evidence to support hypotheses, which add to the trusted body of knowledge. Furthermore, they believe that the cumulative body of knowledge provides the foundations for solving the serious problems of our time.

By contrast, design thinking activists question the validity of controlled experiments in dealing with complex socio-technical problems such as healthcare delivery. They question the value of medical research by carefully controlled clinical trials because of the restricted selection criteria for participants, the higher rates of compliance during trials, and the focus on a limited set of treatment possibilities. Flawed clinical trials have resulted in harm, as when a treatment is tested only on men but the results are then applied to women. Even respected members of the scientific community have made disturbing complaints about the scientific method, such as John Ioannidis’s now-famous 2005 paper “Why Most Published Research Findings Are False.”

Design thinking advocates do not promise truth, but they believe that valuable new ideas, services, and products can come from their methods. They are passionate about immersing themselves in problems, talking to real customers, patients, or students, considering a range of alternatives, and then testing carefully in realistic settings.

Of course, there is no need to choose between design thinking and the scientific method, when researchers can and should do both. The happy compromise may be to use design thinking methods at early stages to understand a problem, and then test some of the hypotheses with the scientific method. As solutions are fashioned they can be tested in the real world to gather data on what works and what doesn’t. Then more design thinking and more scientific research could provide still clearer insights and innovations.

Instead of seeing research as a single event, such as a controlled experiment, the British Design Council recommends the Double Diamond model, which captures the idea of repeated cycles of divergent and then convergent thinking. In one formulation they describe a 4-step process: “Discover”, “Define”, “Develop”, and “Deliver.”

The spirited debates about which methods to use will continue, but as teachers we should ensure that our students are skilled with both the scientific method and design thinking. Similarly, as business leaders we should ensure that our employees are well-trained enough to apply design thinking and the scientific method. When serious problems, such as healthcare or environmental preservation, are being addressed, we will all be better served if design thinking and the scientific method are combined.

*Featured image credit: library books education literature by Foundry. Public domain via Pixabay.*

The post Can design thinking challenge the scientific method? appeared first on OUPblog.

In 1931 Kurt Gödel published one of the most important and most celebrated results in 20^{th} century mathematics: the incompleteness of arithmetic. Gödel’s work, however, actually contains two distinct incompleteness theorems. The first can be stated a bit loosely as follows:

*First Incompleteness Theorem*: If *T* is a consistent, sufficiently strong, recursively axiomatizable theory, then there is a sentence “P” in the language of arithmetic such that neither “P” nor “not: P” is provable in *T*.

A few terminological points: To say that a theory is *recursively axiomatizable* means, again loosely put, that there is an algorithm that allows us to decide, of any statement in the language, whether it is an axiom of the theory or not. Explicating what, exactly, is meant by saying a theory is *sufficiently strong* is a bit trickier, but it suffices for our purposes to note that a theory is sufficiently strong if it is at least as strong as standard theories of arithmetic, and by noting further that this isn’t actually very strong at all: the vast majority of mathematical and scientific theories studied in standard undergraduate courses are sufficiently strong in this sense. Thus, we can understand Gödel’s first incompleteness theorem as placing a limitation on how ‘good’ a scientific or mathematical theory *T* in a language *L* can be: if *T* is consistent, and if *T* is sufficiently strong, then there is a sentence *S* in language *L* such that *T* does not prove that *S* is true, but it also doesn’t prove that *S* is false.

The first incompleteness theorem has received a lot of attention in the philosophical and mathematical literature, appearing in arguments purporting to show that human minds are not equivalent to computers, or that mathematical truth is somehow ineffable, and the theorem has even been claimed as evidence that God exists. But here I want to draw attention to a less well-known, and very weird, consequence of Gödel’s other result, the second incompleteness theorem.

First, a final bit of terminology. Given any theory *T*, we will represent the claim that *T* is consistent as “Con(*T*)”. It is worth emphasizing that, if *T* is a theory expressed in language *L*, and *T* is sufficiently strong in the sense discussed above, then “Con(*T*)” is a sentence in the language *L* (for the cognoscenti: “Con(*T*)” is a very complex statement of arithmetic that is *equivalent* to the claim that *T* is consistent)! Now, Gödel’s second incompleteness theorem, loosely put, is as follows:

*Second Incompleteness Theorem*: If *T* is a consistent, sufficiently strong, recursively axiomatizable theory, then *T* does not prove “Con(*T*)”.

For our purposes, it will be easier to use an equivalent, but somewhat differently formulated, version of the theorem:

*Second Incompleteness Theorem*: If *T* is a consistent, sufficiently strong, recursively axiomatizable theory, then the theory:

*T* + not: Con(*T*)

is consistent.

In other words, if *T* is a consistent, sufficiently strong theory, then the theory that says everything that *T* says, but also includes the (false) claim that *T* is inconsistent, is nevertheless consistent (although obviously not true!). It is important to note in what follows that the second incompleteness theorem does not guarantee that a consistent theory *T* does not prove “not: Con(*T*)”. In fact, as we shall see, some consistent (but false) theories allow us to prove that they are not consistent even though they are!

We are now (finally!) in a position to state the main result of this post:

*Theorem*: There exists a consistent theory *T* such that:

*T* + Con(*T*)

is inconsistent, yet:

*T* + not: Con(*T*)

is consistent.

In other words, there is a consistent theory *T* such that adding the __true__ claim “*T* is consistent” to *T* results in a contradiction, yet adding the __false__ claim “*T* is inconsistent” to *T* results in a (false but) consistent theory.

Here is the proof: Let *T*_{1} be any consistent, sufficiently strong theory (e.g. Peano arithmetic). So, by Gödel’s second incompleteness theorem:

*T*_{2} = *T*_{1} + not: Con(*T*_{1})

is a consistent theory. Hence “Con(*T*_{2})” is true. Now, consider the following theories:

(i) *T*_{2} + not: Con(*T*_{2})

(ii) *T*_{2} + Con(*T*_{2})

Since, as we have already seen, *T*_{2} is consistent, it follows, again, by the second incompleteness theorem, that the first theory:

*T*_{2} + not: Con(*T*_{2})

is consistent. But now consider the second theory (ii). This theory includes the claim that *T*_{2} does not prove a contradiction – that is, it contains “Con(*T*_{2})”. But it also contains every claim that *T*_{2} contains. And *T*_{2} contains the claim that *T*_{1} __does__ prove a contradiction – that is, it contains “not: Con(*T*_{1})”. But if *T*_{1} proves a contradiction, then *T*_{2} proves a contradiction (since everything contained in *T*_{1} is also contained in *T*_{2}). Further, any sufficiently strong theory is strong enough to show this, and hence, *T*_{2} proves “not: Con(*T*_{2})”. Thus, the second theory:

*T*_{2} + Con(*T*_{2})

is inconsistent, since it proves both “Con(*T*_{2})” and “not: Con(*T*_{2})”. QED.
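The crux of the argument for theory (ii) can be compressed into a three-line derivation (our own notation, writing ⊢ for provability in *T*_{2}):

```latex
\begin{align*}
T_2 &\vdash \neg\mathrm{Con}(T_1) && \text{(it is an axiom of } T_2\text{)}\\
T_2 &\vdash \neg\mathrm{Con}(T_1) \rightarrow \neg\mathrm{Con}(T_2) && \text{(since } T_1 \subseteq T_2\text{, and } T_2 \text{ is strong enough to prove this)}\\
T_2 &\vdash \neg\mathrm{Con}(T_2) && \text{(modus ponens)}
\end{align*}
```

so adding “Con(*T*_{2})” to *T*_{2} immediately yields a contradiction.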

Thus, there exist consistent theories, such as *T*_{2} above, such that adding the (true) claim that that theory is consistent to that theory results in inconsistency, while adding the (false) claim that the theory is inconsistent results in a consistent theory. It is worth noting that part of the trick is that the theory *T*_{2} we used in the proof is itself consistent but not true.

This, in turn, suggests the following: in some situations, when faced with a theory *T* where we believe *T* to be consistent, but where we are unsure as to whether *T* is true, it might be safer to add “not: Con(*T*)” to *T* than it is to add “Con(*T*)” to *T*. Given that the majority of our scientific theories are likely to be consistent, but many will turn out to be false as they are overturned by newer, better, theories, this then suggests that sometimes we might be better off believing that our scientific theories are inconsistent than believing that they are consistent (if we take a stand on their consistency at all). But how can this be right?

*Featured image credit: Random mathematical formulæ illustrating the field of pure mathematics. Public domain via Wikimedia Commons.*

The post The consistency of inconsistency claims appeared first on OUPblog.

In celebrating the good news that Somerville is the people’s choice for the new gig, we could do worse than listen to the accolade given to her writing by one of the men she defeated in the public poll: James Clerk Maxwell. Father of the wireless electromagnetic era, he no doubt studied *Mechanism of the Heavens* as a student at Cambridge – and he certainly knew of Somerville’s second book, *On the Connexion of the Physical Sciences*. This was popular science rather than an advanced textbook, but Maxwell described it as “one of those suggestive books, which put into definite, intelligible, and communicable form the guiding ideas that are already working in the minds of men of science… but which they cannot yet shape into a definite statement.” This is high praise indeed.

If Maxwell’s ‘men of science’ sounds sexist in hindsight, it is doubly important to remember that women were not allowed to join the academic academies – not even the Royal Society, whose aim was not so much the doing of science as promoting it. In other words, ‘men of science’ was fact, not opinion. Which makes Mary Somerville all the more remarkable. She went on to write two more science books – and a delightful memoir completed when she was 91 – but she was also a scientist in her own right. In 1826 she published a paper in the prestigious *Philosophical Transactions of the Royal Society*, based on her experiments on a possible connection between violet light and electromagnetism. Although her results were ultimately proved incorrect, initially such famous scientists as her friends John Herschel and William Wollaston had regarded her experiment as authoritative. Her friend Michael Faraday would find the first correct experimental connection between light and electromagnetism, and then Maxwell would complete the puzzle with his magnificent electromagnetic theory of light. But he had such respect for Somerville that nearly fifty years after her experiment, he took the trouble to analyse its underlying flaw, in his *Treatise on Electricity and Magnetism*.

Somerville corresponded with Faraday during her next series of experiments, in 1835. These involved testing the effects of different coloured light on photographic paper (photography was a new and fledgling invention at the time), and her paper was published by the French Academy of Sciences. The results of her third set of experiments – on the effect of different coloured light on organic matter – were published by the Royal Society in 1845.

In her book *Connexions*, she had also conjectured that observed discrepancies in the orbit of Uranus – which had been discovered by another friend of hers, William Herschel, father of John – might be due to the effects of another body as yet unseen. After John Couch Adams and Urbain Leverrier independently discovered the existence of Neptune in 1846, Adams told Somerville’s husband, William Somerville, that his search for the planet had been inspired by that passage in *Connexions.*

When Mary Somerville died in 1872, just before her 92^{nd} birthday, she was widely acknowledged as the nineteenth century’s ‘Queen of Science.’ The day before she died, she had been studying cutting edge mathematics (‘quaternions’, which Maxwell was also studying, as it happens – he discussed their application to electromagnetism in his *Treatise* of the following year). But what makes Mary Somerville’s story timeless is her monumental struggle to understand the mysteries of science in the first place. It might be tempting to think she owed her success to the support of all her famous friends – and indeed, they did support her. But she had gained entry to the society of ‘men of science’ in a most extraordinary way.

As a child in Burntisland, a village across the Firth of Forth from Edinburgh, Mary Fairfax had grown up ‘a wild creature’, as she put it. While her brothers were sent to school, she had been left free to roam along the seashore. Her mother had taught her enough literacy to read the Bible, and later she was taught some basic arithmetic. But everything changed when a friend showed fifteen-year-old Mary the needlework patterns in a women’s magazine. Leafing through it, Mary was mesmerized not by exquisite needlework but by a collection of x’s and y’s in strange, alluring patterns. Her friend knew only that “they call it algebra” (it was a worked solution to one of the magazine’s mathematical puzzles). Tantalized, Mary began studying mathematics in secret, reading under the covers at night. When the household stock of candles ran low too quickly, Mary’s secret was discovered and her candles confiscated – her father accepted the prevailing belief that intellectual study would send a girl mad or make her seriously ill.

Mary persevered for decades, teaching herself mathematics, Latin, and French. Eventually, she was able to read and understand both Newton’s *Principia* in Latin, and Newton’s disciple Pierre-Simon Laplace in French. She did it all alone, just for the love of knowledge. But when Britain’s ‘men of science’ and their wives finally discovered her learning, they were stunned. They quickly embraced her, but her public success was also possible because of the support of her three surviving children (three had died in childhood), and especially her medical doctor husband.

At 91, while studying quaternions, she revealed one of the secrets of her success: whenever she encountered a difficulty, she remained calm, never giving up, because “if I do not succeed today, I will attack [the problem] again on the morrow.” It is a great aphorism to remember her by, as we celebrate the accolades that are still coming her way: Oxford’s Somerville College was named in her honour soon after her death, and now RBS’s ten-pound note.

*Featured image credit: Somerville College by Philip Allfrey. CC-BY-SA-3.0 via Wikimedia Commons.*

The post Mary Somerville: the new face on Royal Bank of Scotland’s ten-pound note is worthy of international recognition appeared first on OUPblog.

The post Earth’s climate: a complex system with mysteries abound appeared first on OUPblog.

The climate has evolved through massive changes to where we are today. It continues to evolve on long time scales but is also impacted by two factors acting on human time scales. First, there is the ongoing internal variability resulting from a plethora of natural cycles. El Niño is an exemplar, but variability occurs at all levels of the system—in the atmosphere, ocean, cryosphere, biosphere, and through their connectedness—and spans a wide range of time scales from weeks to centuries. Moreover, modes of variability can conspire together to produce unanticipated and seemingly unrelated effects. Secondly, there are changes in radiative forcing, recently dominated by anthropogenic emissions, but also affected by other factors including land use, ocean carbon uptake, solar variability, and feedbacks such as impacts on albedo from melting ice and changing cloud patterns. It is this complex mixture in a dynamically evolving system that the scientific community is striving to unravel.

Climate science is in an unusual situation in that it is an experimental science but one in which the experiments are not restricted to a traditional laboratory. Because experiments cannot be carried out on the full climate system, mathematical replicas of the Earth have been developed in order to test scientific hypotheses about how the planet will react in different circumstances. These are the climate models hosted at some 30 climate centres around the world that provide the main source of predictive information in the Intergovernmental Panel on Climate Change assessment reports. Each is a highly complex entity in its own right, built on massive computational codes. The upkeep and development of these codes raises significant mathematical issues, but the involvement of the mathematical sciences in the study of climate goes far beyond these operational tasks.

Complementing the view of the climate as a deterministic dynamical system is the information compiled from observational data. These observations, both from modern instrumental systems and from the distant past via palaeoclimate proxies, are uncertain, which means that sophisticated statistical methodology is needed to estimate and map properties of the Earth’s climate system. The two viewpoints come together because significant uncertainty accompanies any model projection, and so model output is also properly regarded as statistical.

The mathematical sciences are playing a growing role in climate studies at all levels. Models of more modest dimension than the models residing at climate centres are gaining prominence. These conceptual models can help us see the relations between different internal mechanisms that can be hidden in the full model. Key processes can often be studied in isolation and their modelling brings considerable insight into the overall climate. Such processes include biogeochemical cycles, melting of sea and land ice and land use changes. The internal structure of such processes and their impact on other aspects of the climate are revealed by mathematical and statistical analyses. Such analysis is also critical in the proper inclusion of these processes through parameterization.

Ultimately, both understanding and prediction of the climate depend equally on models, which encode physical laws, and observations, which bring direct insight into the real world. Melding these together to tease out the optimal information is an extraordinary mathematical challenge that demands a blend of statistical and dynamical thinking, in both cases at the frontiers of these areas.
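A classic embodiment of this melding is the Kalman filter, the mathematical core of modern data assimilation. The scalar sketch below (with purely illustrative numbers, not drawn from any operational system) combines a model forecast with an observation, weighting each by its uncertainty:

```python
def kalman_update(x_prior, var_prior, y_obs, var_obs):
    """Combine a model forecast (prior) with an observation, each weighted by
    its variance; return the posterior estimate and its (smaller) variance."""
    gain = var_prior / (var_prior + var_obs)   # trust the observation more when the prior is uncertain
    x_post = x_prior + gain * (y_obs - x_prior)
    var_post = (1 - gain) * var_prior
    return x_post, var_post

# Forecast: 15.0 C with variance 4; thermometer reading: 16.0 C with variance 1.
x, v = kalman_update(15.0, 4.0, 16.0, 1.0)
print(x, v)   # posterior is about 15.8 C with variance about 0.8
```

The posterior lies between forecast and observation, pulled towards the more trustworthy of the two, and its variance is smaller than either input’s – the ‘optimal information’ that assimilation schemes extract, state variable by state variable, millions of times over.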

*Featured image credit: Windräder by fill. Public domain via Pixabay.*


The post What is information, and should it be free? appeared first on OUPblog.

Fortunately there are some bright spots, such as the fact that it is now possible to measure information. This is the result of the pioneering work of Claude Shannon in the 1940s and 1950s. Shannon’s definitions can be used to prove theorems in a mathematically precise way, and in practice they provide the foundation for the machines which handle the vast amounts of information that are now available to us. However, that is not the end of the story.
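Concretely, Shannon defined the information content (entropy) of a source whose outcomes occur with probabilities p_i as H = -Σ p_i log2 p_i, measured in bits. A minimal sketch of the calculation:

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # a fair coin toss carries exactly 1 bit
print(entropy([0.25] * 4))   # four equally likely outcomes: 2 bits
print(entropy([0.9, 0.1]))   # a biased coin is less informative: about 0.47 bits
```

It is this quantity that bounds how far a message can be compressed and how much a noisy channel can carry – the theorems underpinning the machines that handle our information.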

In 1738 Daniel Bernoulli pointed out that the mathematical measure of ‘expectation’ did not allow for the fact that different people can value the outcome of an event in different ways. This observation led him to introduce the idea of ‘utility’, which has come to pervade theoretical economics. In fact, Shannon’s measure of information can be thought of as a generalization of expectation, and it leads to similar difficulties. As far as I am aware, academic work on this subject has not yet found applications in practice.
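Bernoulli made the point with what is now called the St. Petersburg game: a fair coin is tossed until the first head, and a head on toss k pays 2^k ducats (one common formulation). The mathematical expectation diverges, yet nobody would stake a fortune to play; replacing the raw payout with a logarithmic utility, as Bernoulli proposed, tames the sum. A sketch of both calculations:

```python
import math

def expected_value(terms):
    # Each term is (1/2)**k * 2**k = 1, so the partial sums grow without bound.
    return sum((0.5 ** k) * (2 ** k) for k in range(1, terms + 1))

def expected_log_utility(terms):
    # Averaging log-utility instead gives sum k*log(2)/2**k, which converges to 2*log(2).
    return sum((0.5 ** k) * math.log(2 ** k) for k in range(1, terms + 1))

print(expected_value(50))        # 50.0: one more unit for every term included
print(expected_log_utility(50))  # about 1.386, i.e. 2*log(2)
```

The same tension arises for Shannon’s measure: it averages a logarithmic quantity, and how individuals actually value the underlying outcomes is left outside the mathematics.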

In the absence of a theoretical model, many countries (including the UK) have tried to set up a legal framework for information. First we had the Data Protection Act (1998), intended to prevent the misuse of the large amounts of personal data stored on computers. But it is very loosely worded, and consequently open to many different interpretations. (In one case, a police force believed that the Act prevented them from passing on information about a person whom they suspected of being a serial sex offender. This person then obtained employment elsewhere as a school caretaker, and murdered two pupils.) The next step was the Freedom of Information (FoI) Act (2000), which now seems to be regarded as unsatisfactory – on all sides. Those who see themselves as guardians of our ‘right to know’ are dissatisfied with the wide range of circumstances which can be considered as exceptions. Those who see themselves as guardians of our ‘security’ are concerned that attempts to prevent terrorist activities may be compromised.

We are expected to observe certain rules and regulations about how we use our money, but we are allowed to keep it safe. Will similar rules and regulations about information emerge?

Recently I looked at a question which led me into these muddy waters. The question was a simple instance of a very general one. Suppose a piece of information is only partly revealed to us: it may have been corrupted by transmission through a ‘noisy channel’, or it may have been encrypted, or some important details may have been intentionally withheld. How much useful information can we deduce from the data that we do have?

My example came from the unlikely source of the popular BBC television programme, Strictly Come Dancing. The problem was as follows. In order to determine which contestants should be eliminated from the show, the programme’s creators have devised a complex voting algorithm. First the judges award scores, and these are converted into points. Then the public is invited to vote, and the result is also converted into points. Finally the two sets of points are combined to produce a ranking of all the contestants. But the public points and the final ranking are not revealed on the results show, only the identity of the two lowest contestants. I was able to show that in some circumstances the revealed data can indeed provide a great deal of information about the public vote.
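To see how such an inference can work, here is a deliberately toy version of the problem (my own simplification – the BBC’s actual algorithm is more involved): four couples, each ranking converted to points from 4 down to 1, judges’ and public points summed, and only the bottom two of the combined table announced. Enumerating every possible public ranking shows how much that single announcement reveals:

```python
from itertools import permutations

# A toy model (my simplification, not the BBC's actual algorithm): each ranking
# awards 4 points for first place down to 1 for last; the judges' points are
# public, the public's ranking is secret, the two are summed, and only the
# bottom two couples of the combined table are announced.
judges = {'A': 4, 'B': 3, 'C': 2, 'D': 1}
couples = list(judges)

def bottom_two(public_points):
    total = {c: judges[c] + public_points[c] for c in couples}
    # Ties are broken by list order here, purely for illustration.
    return set(sorted(couples, key=lambda c: total[c])[:2])

revealed = {'C', 'D'}  # suppose C and D are announced as the bottom two
consistent = [p for p in permutations([4, 3, 2, 1])
              if bottom_two(dict(zip(couples, p))) == revealed]

print(f"{len(consistent)} of 24 possible public rankings are consistent")
```

In this toy set-up the announcement eliminates 15 of the 24 possible public rankings at a stroke; across several weeks such constraints compound, which is the spirit of extracting information from the revealed data.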

Strictly Come Dancing arouses great passion among its followers, and the lack of transparency of the voting system has led to numerous requests under the FoI Act. Most of these requests have been refused by the BBC, on grounds that appear to be valid in law. It seems that the FoI Act was originally based on some rather idealistic notions. When the Act was passing into law, it had to be converted into a more realistic instrument, and the resulting form of words therefore provides for a large number of exceptions to the general principle. Specifically, the BBC is able to claim that the details of the voting are exempt, because they are being used ‘for the purposes of journalism, art, or literature.’

In my view the FoI Act is simply a shield that deflects attention from the heart of the matter. The public is invited to vote and therefore has good reason to be interested in the mechanics of the voting procedure and its outcome. In addition to details of the method used to combine the public ranking with the judges’ ranking, there are other causes for concern. For example, multiple voting is allowed, and this opens up the possibility of misuse by agents who have a vested interest in a particular contestant.

I began by remarking that there is a close relationship between money and information. We are expected to observe certain rules and regulations about how we use our money, but we are allowed to keep it safe. Will similar rules and regulations about information emerge, and when?

*Featured image credit: binary code by Christiaan Colen. CC-BY-SA 2.0 via Flickr.*


The post Conversations in computing: Q&A with Editor-in-Chief, Professor Steve Furber appeared first on OUPblog.

**Justin: Can you tell us a bit more about your current role?**

**Steve:** At Manchester I am a regular research professor and I’ve served my term as head of department, that’s some time ago now. I lead a group of 40 or 50 staff and students and our general research area spans computer engineering to computer architecture. On the engineering side we’re interested in the design of silicon chips and how you can make the most of the enormous transistor resource that the manufacturing industries can now give us on a chip. On the architecture side we are interested in particular in how we exploit the many-core resources that are increasingly available in all computer products today.

**Justin: What was the topic of your recent Lovelace Lecture?**

**Steve:** The title is ‘Computers and Brains’ and basically this is a lecture which talks about some of the history of artificial intelligence from some of the early writings of Ada Lovelace herself – 2015 was the 200th anniversary of her birth – through to Alan Turing’s thoughts on AI and then onto the research that I’m leading today, which is building a very large parallel computer for real-time brain modelling applications.

**Justin: Can you tell us about your SpiNNaker project?**

**Steve:** SpiNNaker is the massively parallel computer for real-time brain modelling and the name is a rather crude compression of Spiking Neural Network Architecture – it’s not quite an acronym. We’re using a million ARM processors – those are the processors that you find in your mobile phone, designed by a British company in Cambridge – in a single machine and with a million ARM cores we can model about 1% of the scale of the network in the human brain. The brain is a very challenging modelling target; you can think of it as 1% of the human brain, but I sometimes prefer to think of it as ten whole mouse brains. The network we’re using is quite simplified as there’s a lot about brain connectivity that’s still not known, so there’s a lot of guesswork in building any such model.

**Justin: You’ve said in the past that accelerating our understanding of brain function would represent a major scientific breakthrough. Can you expand a little bit more on that thought?**

**Steve:** It is clear to anybody who uses a computer that they are incredibly fast and capable at the set of things that they are good at, but they really struggle with things that we humans find simple. Very young babies learn to recognise their mother, whereas programming a computer to recognise an individual human face is possible but extremely hard. My view is that if we understood more about how humans learn to recognise faces and solve similar problems then we’d be much better placed to build computers that could do this easily.

**Justin: Where do you see AI processing going in the next five to ten years?**

**Steve:** The big issue with AI is understanding what intelligence is in the first place. I think one of the reasons why we have found true AI so difficult to reproduce in machines is that we’ve not quite worked out how natural intelligence works, hence my interest in going back to look at the brain as the substrate from which human intelligence emerges. If we can understand that better then we might be able to reproduce it more faithfully in our computing machines.

**Justin: What about the ethics of AI?**

**Steve:** Ultimately AI will lead to ethical issues. Clearly if machines become sentient then the issue as to whether you can or can’t switch them off becomes an ethical consideration. I think we are a very long way from that at the moment so that isn’t foremost among ethical issues we have to consider. I think there are much more pragmatic engineering issues, for example to do with driverless cars. If a driverless car is involved in a crash whose fault is it, who is responsible? Whether the crash turns out to be the result of a software bug or of the human interfering with the car, there’s a whole set of issues that will have to be thought through, and they come a long time before the issue of the machine itself having any kind of rights.

**Justin: What do you think are the biggest challenges the IT industry faces?**

**Steve:** I think high on the list is the issue of cybersecurity. We are seeing increasing numbers of attacks on IT systems and it’s very technical to work out how to build defences that don’t compromise the performance of the systems too much. So as consumers we install antivirus software on our PCs but sometimes the antivirus software makes the PC almost useless. So there’s a compromise in security, always. Most of us live in houses where the front door will succumb to a few decent kicks, but the bank chooses something more substantial for its vault. Security has to be proportionate to the risk. But I think security is going to loom increasingly large in the IT industry.

**Justin: What do you see as the most exciting emerging technologies at the moment?**

**Steve:** The most exciting technologies around the corner I think are the cognitive systems – machines becoming less passive, so they don’t just sit waiting for human inputs but actually respond to the environment, interact with it, engage with it, and that requires some degree of understanding. I don’t want understanding to be interpreted in too anthropomorphic a way – their understanding may be quite prosaic, it might be at the level of an insect. But an insect has an adequate understanding of its environment for its purposes. That’s how I would expect to see computers developing increasingly in the future.

**Justin: What do you think the IT industry as a whole should be doing to improve its image?**

**Steve:** I think the image of the industry is particularly important in the way it comes over in schools and in the choices that pupils make about their future careers. We certainly had a problem recently with the kind of exposure to IT that’s happened in a lot of schools being de-motivating; it has discouraged pupils from computing. I think the changes that are needed to remedy that are now in place and it will take a little while for them to filter through, but of course BCS has played a very active role in seeing those changes through, so hopefully computing will have a better image where it matters most, which is in schools.

**Justin: Why do you think that we aren’t seeing so many women going into IT?**

**Steve:** If I knew why women did not find IT so attractive, then I’d do something about it. It’s a major problem that for some reason culturally we think IT and computers are a male preserve and of course if we talk numbers then they are predominately male. It’s a problem that we’ve been worrying about all the time I’ve been in the university and many things have been tried and nothing has really made much difference, so it concerns me hugely but I don’t know what to do about it. I don’t think there is any shortage of female role models, there are plenty of very high-powered women in the computing business. I really don’t understand why the subject is not attractive to girls at school, which is where the problem starts. I welcome any suggestions as to what we can do to remedy this.

**Justin: Talking of role models, did you have any of your own?**

**Steve:** My role models were probably not in computing, as I said I came through the mathematics and aerodynamics route at university and was really drawn into computing by what I saw as the new wave of computing based on the microprocessor, which in the late 1970s was a very new approach to building machines. So who do I hold up as a role model? Well, one of the lecturers at the university was John Conway who was always a very inspiring mathematician and it was great fun to listen to his lectures.

**Justin: Looking back at your career so far is there anything you would have done differently if you had your time again?**

**Steve:** I don’t think so, there are no decisions in my career path that I particularly regret and I think the advice I give to people is roughly the advice I follow myself, which is to make decisions that keep the maximum number of doors open. So look for opportunities, but when there’s nothing obvious staring you in the face then think about what subject creates the most possibilities in the area you’re interested in. Maximise the number of doors.

*The full interview between Justin and Steve was originally published in ITNOW, and may also be viewed on YouTube as a two-part recording. Watch Part One and Part Two online.*

*Featured image credit: Mother board by Magnascan. CC0 Public Domain via Pixabay.*


The post Addressing anxiety in the teaching room: techniques to enhance mathematics and statistics education appeared first on OUPblog.

Mathematics and statistics anxiety is one of the major challenges involved in communicating mathematics and statistics to non-specialists. Students enrolled on degree programmes in many areas other than mathematics or statistics are required to take mandatory courses in mathematics and statistics as core elements of their programmes. Academics, educators, and researchers presented papers on how they have addressed this anxiety: using history, building students’ self-belief, offering individual support, demonstrating the relevance of the subjects to students’ degree work, and making the learning process enjoyable.

The general consensus from the session was that:

- Students with low confidence experience high levels of mathematics anxiety, which has an adverse impact on their academic performance;
- University students are far from immune to mathematics anxiety;
- It is most common in non-specialist university students;
- It is not always related to students’ academic abilities but their prior learning experience of the subjects, self-efficacy and self-beliefs;
- The increasing diversity of the university student population – a result of the high proportion of international students, widening participation, and wider access to higher education – adds new dimensions to this challenge;
- This range of cultural, socio-economic and academic backgrounds of students manifests itself through diverse expectations and individual learning requirements that need to be carefully considered.

Delegates agreed that if educators involved in designing and delivering mathematics and statistics courses for non-specialist university students are aware of the implications of this diversity in student backgrounds, they should be able to appreciate the indispensable role of using a variety of teaching and learning approaches.

My personal view is that thinking like social scientists would make higher education practitioners more empathetic towards students. I think making course delivery student focused as well as student led would encourage students to share responsibility for their education. Focusing on connecting with students and being perceptive as well as receptive to students’ feedback and willing to revise teaching delivery can enhance the learning climate in teaching rooms. This would promote student interaction and encourage active learning.

Undergraduates can face several issues during their transition to university education, such as key gaps in their mathematical skills despite the fact that they have A-level Mathematics or equivalent. Effective practices shared included a blended learning project using online formative assessment followed by feedback, and encouraging students to work within their Zone of Proximal Development (Vygotsky, 1978). Delegates were informed about two innovative Mathematics Support Centres (MSCs) that facilitate distance learning. MSCs have become important features of universities in the UK as well as overseas.

Undergraduates can face several issues during their transition to university education, such as key gaps in their mathematical skills despite the fact that they have A-level Mathematics or equivalent.

Educators shared their projects on scenario based training of statistics support teachers, instruction methods developed by mathematics teachers and using census data as well as other publicly available large data sets to support statistics literacy. Social media was explored as a tool to facilitate deep learning, enhance student engagement in science as well as engineering and improve students’ learning experience. There were presentations on the effective use of a virtual learning environment, audio feedback, and an online collaboration model to encourage students’ participation.

Delegates seemed to find online formative assessment practices worth incorporating into their teaching. The innovative Mathematics Support Centres (MSCs) that facilitate distance learning sounded appealing to several others. Delegates who had not experimented with Facebook were convinced after a paper presentation that Facebook is an area worth exploring to enhance student engagement.

I have used Facebook for promoting scholarly dialogue and collaborative research as well as enhancing student engagement with statistics and operational research methods since 2012. My rationale is to address mathematics and statistics anxiety by connecting with students which can be done without intruding into their personal territory, i.e. becoming their Facebook friends. I would argue that Facebook is an excellent online system which academics can use for posting topics for discussions, promoting interaction, addressing students’ queries, uploading course material and monitoring students’ progress. These study groups are easy to set up and promote inclusive education. It is a platform students are used to and view extremely positively.

Barriers to learning such as neurodiversity were also explored, focusing on the difficulties faced by visually-impaired and hearing-impaired learners. Other areas covered included language difficulties as a barrier to reading mathematics, dyslexia/dyscalculia, and teachers’ negative bias against students from certain backgrounds. Gender imbalance was also discussed as a significant barrier, with the general consensus being that more women should be encouraged, as well as supported, to pursue careers in mathematics.

In light of the existing literature and research relating to the difficulties blind learners face, it was agreed that this is an area that calls for further research to make mathematics more accessible to the blind. It was proposed that research on combining lexical rules, speech prosody and non-speech sounds would be desirable. Furthermore, providing tools for carrying out mathematical analysis may improve the situation for blind learners.

A critique on the fallacy of assuming a homogeneous student body and homogeneous teaching in a ‘what works’ approach introduced an interesting point of controversy in the midst of excitement and optimism about a range of initiatives. These exchanges of information on research, initiatives and projects should promote multi-disciplinary research collaboration in mathematics and statistics education.

The conference should stimulate research across a variety of themes in statistics and mathematics education, including mathematics anxiety, statistics anxiety, and inclusive practice.

*Featured image credit: calculator mathematics maths finance by Unsplash. Public domain via Pixabay.*


The post Is an engineering mind-set linked to violent terrorism? appeared first on OUPblog.

]]>The process by which young people are radicalised is very complex and poorly understood. As Scott Atran has said, the “first step to combating Isis is to understand it. We have yet to do so … What inspires the most uncompromisingly lethal actors in the world today is not so much the Qur’an or religious teachings. It’s a thrilling cause that promises glory and esteem … Youth needs values and dreams.”

Rose notes that engineering, medicine, and other technical subjects are regarded as superior education in many MENA countries. These subjects may attract people with mind-sets that favour simple solutions, with little ambiguity, nuance, or debate. Rose calls this an ‘engineering mind-set’. He says that in these courses there is a tendency to concentrate on rote learning and exam-passing with little or no questioning. Those mind-sets may then be reinforced by the way the subjects are taught. Rose emphasises that young people need to be taught how to think, to immunise their minds against ideologies that seek to teach them what to think. In other words, they need to be encouraged to think critically, as in the social sciences.

There are two main points I want to highlight here. First, as Rose states, the sparse data that we have indicates that there are a disproportionate number of STEM students and graduates recruited into Jihadist terrorism. That needs to be explained.

At least part, but only part, of the complex answer may rest on the second point. How do Rose and others characterise an engineering mind-set, and how does it relate to the way engineers actually think? For sure the way Rose describes it needs unpacking. Rose quotes Diego Gambetta in 2007 (who in turn quotes the work of Seymour Lipset and Earl Raab, who wrote on right-wing and Islamic extremism in 1971). He says this mind-set has three components: (a) ‘monism’ – the idea that there exists one best solution to all problems; (b) ‘simplism’ – the idea that, if only people were rational, remedies would be simple, with single causes and no ambiguity; (c) ‘preservatism’ – an underlying craving for a lost order of privileges and authority as a backlash against deprivation in a period of sharp social change – in jihadist ideology, the theme of returning to the order of the prophet’s early community.

Rose’s characterisation, like many attempts to capture something complex and protean, contains some truth – but it is far from adequate.

A good start at an analysis would be the report by the Royal Academy of Engineering, ‘Thinking like an engineer’. One could easily counter Rose’s three components, and be nearer the mark, by using the trio pluralism, complexity, and sustainability. I believe that we should perhaps look for an explanation by examining the huge gap that has existed (and still exists – though reduced) between engineering science and practice. For example, theoretical engineering mechanics rests on the certainty of deterministic physics from Newton to Einstein, with its consequent time-invariant dynamics. Determinism means that all events have sufficient causes – literally that the past decides the future. Einstein is reputed to have said “time is an illusion.” No practitioner takes these interpretations of certainty seriously, but she uses them as a model to make decisions because they are the best we have and they work. But there is one big and important proviso – they work in a context that must be understood. The Nobel Prize winner Ilya Prigogine has shown that evolutionary thermodynamics rests on complex processes far from equilibrium. Contrary to dynamical theories, the laws of thermodynamics show that time is an arrow going only in one direction.

Quantum physics has blown away all pretence at certainty. Practitioners intuitively know that their theories are human constructs – imperfect models built to provide us with meaning and guide our ways of behaving. They use them to help make safe and functional decisions to provide systems of artefacts that are fit for purpose as set out in a specification. They use them but they know there are risks. There is no certainty in engineering practice (whatever some seem to believe) – witness the few tragic engineering failures (like Chernobyl). Risk and uncertainty are managed by safe, dependable practice – always testing, always checking – taking a professional duty of care. Were it not for the creativity, dedication, and ingenuity of engineers such disasters would be more frequent. Just think of the amazing complexity of building and maintaining the international space station – truly inspirational, and built by engineers. So yes there is a paradox. Deterministic theory points to single solutions, to black and white answers – but in practice we use it only as a model – a human construct which has enabled us to achieve some incredible things.

Engineers use a plurality of methodologies and solutions. Anyone who has designed and made anything knows that there are multiple solutions. Any attempt at optimising a solution will only work in a context and may be dangerously vulnerable outside of that context. Even a simple hinged (double) pendulum behaves in a complex, chaotic way: bifurcations in its trajectory make its actual performance very sensitive to initial conditions. All successful practitioners know that people make decisions for a variety of reasons – some rational, some not so rational. Only in theory does simplism apply. The financial crash of 2008 put paid to simplism in economic theory. Lastly, yes, engineers do like order. Like life itself they create negentropy. They impose order on nature but do it to improve the human condition. The modern challenge is to do it more sustainably and to create resilience in the face of climate change.
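That sensitivity is easy to exhibit numerically. The sketch below uses a damped, periodically driven pendulum – a textbook chaotic system; the parameter values are standard illustrations of chaos and are my choice, not the author’s – and integrates two copies whose starting angles differ by only 1e-8 radians:

```python
import math

def deriv(theta, omega, t, q=0.5, F=1.2, wd=2/3):
    """Damped, driven pendulum: theta'' = -sin(theta) - q*theta' + F*cos(wd*t)."""
    return omega, -math.sin(theta) - q * omega + F * math.cos(wd * t)

def rk4_step(theta, omega, t, dt):
    # One fourth-order Runge-Kutta step for the pair (theta, omega).
    k1t, k1w = deriv(theta, omega, t)
    k2t, k2w = deriv(theta + 0.5 * dt * k1t, omega + 0.5 * dt * k1w, t + 0.5 * dt)
    k3t, k3w = deriv(theta + 0.5 * dt * k2t, omega + 0.5 * dt * k2w, t + 0.5 * dt)
    k4t, k4w = deriv(theta + dt * k3t, omega + dt * k3w, t + dt)
    return (theta + dt * (k1t + 2 * k2t + 2 * k3t + k4t) / 6,
            omega + dt * (k1w + 2 * k2w + 2 * k3w + k4w) / 6)

dt, steps = 0.01, 20000                # integrate to t = 200
a, b = (0.2, 0.0), (0.2 + 1e-8, 0.0)   # initial angles differ by 1e-8
max_sep = 0.0
for i in range(steps):
    t = i * dt
    a, b = rk4_step(*a, t, dt), rk4_step(*b, t, dt)
    max_sep = max(max_sep, abs(a[0] - b[0]))

print(f"separation grew from 1e-08 to a maximum of {max_sep:.2g}")
```

After a transient the two runs decorrelate completely: the tiny initial difference is amplified by many orders of magnitude until it saturates at the scale of the motion itself, which is why practitioners treat even deterministic equations as models valid only within a known context.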

So these are the ideas that engineering educators work to. Most engineering courses include design projects. Students learn about understanding a need, turning it into a specification, and delivering a reality. To do so they must think creatively, consider the needs of multiple stakeholders, and think critically, exercising judgement to determine criteria for making choices. Many undergraduate engineering courses (but admittedly not all) now include ethics. In practice the products of the work of engineers are continually tested by use. If engineers didn’t think creatively and critically about such use they would soon be out of a job.

In summary, to characterise the engineering mind-set as one that thinks problems have single solutions devoid of ambiguity and uncertainty is derogatory and disparaging of our ingenious engineers. It is quite wrong to characterise the engineering mind-set as one that does not teach students how to think, though of course engineering educators are constantly striving to do better. Any claim that an engineering mind-set is linked to violent terrorism needs to be examined with great care.

*Featured image credit: Engineering, by wolter_tom. Public domain via Pixabay.*

The post Is an engineering mind-set linked to violent terrorism? appeared first on OUPblog.

But it turns out that there are interesting questions here. There are, for instance, thousands of mathematics textbooks – many students own one and use it regularly. They might not use it in the way intended by the author; research indicates that some students – perhaps most – typically use their textbooks only as a source of problems, and essentially ignore the expository sections. That is a shame for textbook authors, whose months spent crafting those sections do not influence learning in the ways they intend. It is also a shame for students, especially for those who go on to more advanced, demanding study of upper-level university mathematics. In proof-based courses it is difficult to avoid learning by reading. Even successful students are unlikely to understand everything in lectures – the material is too challenging and the pace is too fast – and reading to learn is expected.

Students are not typically experienced or trained in mathematical reading, which returns us to the opening questions. Does this lack of training matter? Undergraduate students can read, so can they not simply apply this skill to mathematical material? It turns out that this is not as simple as it sounds, because mathematical reading is not like ordinary reading. Mathematicians have long known this (“you should read with a pencil in hand”), but the skills needed have recently been empirically documented in research studies conducted in the Mathematics Education Centre at Loughborough University. Matthew Inglis and I began with an expert/novice study contrasting the reading behaviours of professional mathematicians with those of undergraduate students. By using eye-movement analyses we found that, when reading purported mathematical proofs, undergraduates’ attention is drawn to the mathematical symbols. To the uninitiated that might sound fine, but it is not consistent with expert behaviour; the professional mathematicians attended proportionately more to the words, reflecting their knowledge that these capture much of the logical reasoning in any written mathematical argument.

Another difference appeared in patterns of behaviour, which can best be seen by watching the behaviour of one mathematician when reading a purported proof to decide upon its validity (see below). Ordinary reading, as you might expect, is fairly linear. But mathematical reading is not. When studying the purported proof, the mathematician makes a great many back-and-forth eye movements, and this is characteristic of professional reading: the mathematicians in our study did this significantly more than the undergraduate students, particularly when justifications for deductions were left implicit.

This work is captured in detail in our article “Expert and Novice Approaches to Reading Mathematical Proofs”. Since completing it, Matthew and I have worked with PhD and project students Mark Hodds, Somali Roy and Tom Kilbey to further investigate undergraduate mathematical reading. We have discovered that research-based Self-Explanation Training can render students’ reading more like that of mathematicians and can consequently improve their proof comprehension (see our paper Self-Explanation Training Improves Proof Comprehension); that multimedia resources designed to support effective reading can help too much, leading to poorer retention of the resulting knowledge; and that there is minimal link between reading time and consequent learning. Readers interested in this work might like to begin by reading our AMS Notices article, which summarises much of this work.

In the meantime, my own teaching has changed – I am now much more aware of the need to help students learn to read mathematics and to provide them with time to practise. And this research has influenced my own writing for students: there is no option to skip the expository text, because expository text is all there is. But this text is as much about the thinking as it is about the mathematics. It is necessary for mathematics textbooks to contain accessible text, explicit guidance on engaging with abstract mathematical information, and encouragement to recognise that mathematical reading is challenging but eminently possible for those who are willing to learn.

*Feature Image: Open book by Image Catalog. CC0 1.0 via Flickr.*

The post How do people read mathematics? appeared first on OUPblog.

“This year I’m going to read *Algebra: A Very Short Introduction*. An unlikely choice for a History grad, but author Peter M. Higgins convinced me of its importance in his article on mathematical literacy. Bad math can lead to silly mistakes and poor choices that are easily avoided otherwise!”

—*Katie Stileman, VSI Publicity*

“This year I’m going to read *Classical Mythology: A Very Short Introduction*. I have an embarrassing lack of knowledge in this area so it’s definitely time. It will also help me to hold my own in conversations with my classics loving chum Malcolm!”

—*Julie Gough, VSI Marketing*

“This year I’m going to brush up on my Shakespeare in time for the 400^{th} anniversary of the Bard’s death by reading *William Shakespeare: A Very Short Introduction* by Stanley Wells and eagerly anticipating *Shakespeare’s Comedies: A Very Short Introduction* after enjoying *Much Ado About Nothing* at the Bodleian last summer.”

—*Amy Jelf, VSI Marketing*

“Next year I want to find the time to read *Buddhism: A Very Short Introduction*. My interest was first piqued by reading the top ten facts about Buddhism, and I look forward to learning more about meditation and mindfulness in the new year. Any tips I can glean to remove the stress from my life would be welcome too!”

—*Dan Parker, VSI Social Media*

“After working on VSIs for a number of years, not having them as part of my day-to-day life for the first time this year meant some pretty serious withdrawal symptoms from this incredible series. In 2016, I plan to fill the gap by reading *Circadian Rhythms: A Very Short Introduction*. The same author wrote the VSI to *Sleep*, which I think we’re all fascinated by – not getting enough, getting too much, and the quality of it.”

—*Chloe Foster, VSI Publicity (2012-15)*

“Next year I am going to read *Exploration: A Very Short Introduction*, which was recommended to me by Nancy Toff, who commissions VSIs from the US office. As a VSI commissioning editor in the UK, it’s really nice to read a VSI from the other side of the pond!”

—*Andrea Keegan, VSI Editorial*

Wishing you a happy new year from everyone in the VSI team!

*Featured image credit: VSIs, by the VSI team. Image used with permission.*

The post Very Short Resolutions: filling the gaps in our knowledge in 2016 appeared first on OUPblog.

At the heart of both of these equations is the Laplace operator, ∆, also known as the Laplacian, named for Pierre-Simon, marquis de Laplace (1749-1827). It turns out that if one can solve the associated eigenvalue equation ∆*f* = λ*f*, then one can solve both the wave and heat equations.

It is not necessary to understand these mathematical symbols and jargon because they are connected to something everyone understands: music. The sound produced by a stringed instrument is made by the vibration of the strings. The note one hears, such as an A, C, or B-flat, depends on how long the string is, and of what material it is composed. This note is also referred to as the fundamental tone or fundamental frequency.

The vibration of the string also produces overtones, known as harmonics, and these play an important role in creating the sound we hear. The collection of values obtained from solving the Laplace equation provides all these different frequencies: the fundamental frequency and all of the harmonics. Altogether these determine the sound of the string.

In the case of a string, the Laplace equation can be solved rather easily, and it turns out that all the harmonics are integer multiples of the fundamental frequency. This mathematical fact is one of the reasons that stringed instruments and pianos are so popular; it causes the sound that we hear from such instruments to have a pleasant and “clean” quality. In fact, every other instrument in a classical orchestra also has this property, with the exception of the percussion instruments.

Drums are fundamentally different. The sound created by beating a drum comes from the vibrations of the drumhead. Mathematically, this means that the Laplace equation is now in two dimensions. Acoustically, you may observe that the sound produced by vibrating drums is “messier” in a certain sense as compared to the sound produced by a vibrating string. The reason is that for drums it is no longer true that the harmonics are integer multiples of the fundamental frequency.
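This difference can be checked numerically. Below is a sketch using only the Python standard library (the bracketing intervals for the Bessel zeros are assumptions based on where J₀ changes sign): for an ideal string the nth overtone is n times the fundamental, while the radially symmetric frequencies of an ideal circular drumhead are proportional to the zeros of the Bessel function J₀, which are not integer multiples of the first.

```python
import math

def j0(x, n=2000):
    # J_0(x) = (1/pi) * integral from 0 to pi of cos(x sin t) dt,
    # approximated here by the midpoint rule with n subintervals.
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

def bisect_zero(f, a, b, tol=1e-10):
    # Standard bisection: assumes f changes sign on [a, b].
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

# First three zeros of J_0, i.e. the first three radially symmetric
# drumhead frequencies (up to a common physical constant).
zeros = [bisect_zero(j0, a, b) for a, b in [(2, 3), (5, 6), (8, 9)]]
ratios = [z / zeros[0] for z in zeros]
print([round(z, 4) for z in zeros])   # [2.4048, 5.5201, 8.6537]
print([round(r, 3) for r in ratios])  # unlike a string's [1, 2, 3], not integers
```

Compare the string, whose overtone ratios are exactly 1 : 2 : 3 : …; the drum’s ratios of roughly 1 : 2.295 : 3.598 are part of what gives its sound the “messier” quality described above.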

Although we can mathematically prove the preceding fact, we cannot, apart from a few notable exceptions, solve the Laplace equation in two dimensions. Facing this impasse, mathematicians have turned to investigate questions such as: if two drums sound the same, in the sense that their fundamental frequencies as well as *all* their harmonics are identical, then what geometric features do they have in common? Such features are known as *geometric spectral invariants*.

Hermann Weyl (1885-1955) discovered the first geometric spectral invariant: if two drums sound the same, then their drumheads have the same area. About a half century later, Åke Pleijel (1913-1989) proved that the perimeters of the drumheads must also be the same length. Shortly thereafter, Mark Kac (1914-1984) wrote the now famous paper, “Can one hear the shape of a drum?” He wanted to know whether or not the drumheads must have the same shape. It took about a quarter century to solve the problem, which was achieved by Carolyn Gordon, David Webb, and Scott Wolpert in 1991. The answer is *no*.

In contrast to a nice round drumhead, the “identical sounding drums” in Figure 1 both have corners. A natural question is therefore: can one hear the corners? That is, is it possible for two drums to sound the same, where one of them has a nicely rounded, but not necessarily circular, shape, whereas the other has at least one sharp corner? In other words, *can one hear the corners of a drum*? We have proven that the answer is *yes*. The sound produced by a drumhead with at least one sharp corner will always be different from the sound produced by any drumhead without corners. Mathematically, this marks the discovery of a new geometric spectral invariant.

Inquiring minds still have several questions to investigate. For example, if we now assume that both drums have nicely rounded, not necessarily circular shapes, and no sharp corners, is it possible that they can sound identical but be of different shapes? Can one hear the shape of a convex drum? What happens when we consider these types of problems for three-dimensional vibrating solids? We continue to work alongside our fellow mathematicians on problems such as these, and there is plenty of room for further investigation by young researchers.

*Image credit: Drum by PublicDomainImages. Public Domain via Pixabay.*

The post Can one hear the corners of a drum? appeared first on OUPblog.

My personal grail, though, was always something rather more systematic: a principled science of the embodied mind. I think we may now be glimpsing the shape of that science. It will be a science built around an emerging vision of the brain as a guessing engine – a multi-layer probabilistic prediction machine. This is an idea that, in one form or another, has been around for a long time. But exciting new developments are taking this vision to some brand-new places. In this short post, I highlight a few of those places. First though, what’s the basic vision of the predictive brain?

A prediction machine of the relevant stripe is a multi-layer neural network that uses rich downwards (and sideways) connectivity to try to perform a superficially simple, yet hugely empowering, task. That task is the ongoing prediction of its own evolving flows of sensory stimulation. When you see that steaming coffee-cup on the desk in front of you, your perceptual experience reflects the multi-level neural guess that best reduces visual prediction errors. To visually perceive the scene in front of you, your brain attempts to *predict *the scene in front of you, allowing the ensuing error signals to refine its guessing until a kind of equilibrium is achieved.

Such an architecture makes full use of the huge amounts of downwards and recurrent connectivity that characterize advanced biological brains. This is important since the bulk of our actual neural connectivity is recurrent, involving loops in which information flows downwards and sideways. So much so that the AI pioneer Patrick Winston wrote, in a 2012 paper, that “Everything is all mixed up, with information flowing bottom to top and top to bottom and sideways too. It is a strange architecture about which we are nearly clueless”.

One key role of all that looping connectivity, it now seems, is to try to predict the streams of sensory stimulation before they arrive. Systems like that are most strongly impacted by sensed *deviations* from their predicted sensory states. It is these deviations from predicted states (known as prediction errors) that now bear much of the information-processing burden, informing us of what is salient and newsworthy within the dense sensory barrage.
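The error-minimizing settling described here can be caricatured in a few lines of code. This is an illustrative toy of my own, not any published model: a single estimate is nudged by a bottom-up sensory prediction error and a top-down prior error until the two balance (the sigma parameters are crude stand-ins for precision weighting).

```python
def settle(sensory_input, prior_mean, sigma_prior=1.0, sigma_sensory=1.0,
           steps=200, lr=0.1):
    """Gradient-descent settling of a single estimate `phi`: the brain's
    'guess' is refined until the prediction errors reach equilibrium."""
    phi = prior_mean  # initial top-down guess
    for _ in range(steps):
        err_sensory = (sensory_input - phi) / sigma_sensory  # bottom-up error
        err_prior = (phi - prior_mean) / sigma_prior         # deviation from prior
        phi += lr * (err_sensory - err_prior)                # reduce both errors
    return phi

# Equal weighting: the settled guess splits the difference between
# the prior (0.0) and the evidence (2.0).
print(round(settle(2.0, 0.0), 6))                     # 1.0
# Precise (low-sigma) sensory input drags the guess toward the evidence.
print(round(settle(2.0, 0.0, sigma_sensory=0.1), 6))  # 1.818182
```

The second call is the point: the same machinery, with the sensory channel weighted as more reliable, settles much closer to the incoming data.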

Systems like this are already deep in the business of understanding. To perceive a hockey game using multi-level prediction machinery is to be able to predict distinctive sensory patterns as the play unfolds. And the more experience has taught you about the game and the teams, the better those predictions will be. What we quite literally see, as we watch a game, is here constantly informed and structured by what we know and what we are thus already busy (consciously and non-consciously) expecting.

This, as has recently been pointed out in a *New York Times* piece by Lisa Feldman Barrett, has real social and political implications. You might really seem to see your beloved but recently deceased pet start to enter the room, when the curtain moves in just the right way. The police officer might likewise really seem to see the outline of that gun in the hands of the unarmed, cellphone-wielding suspect. In such cases, the full swathe of good sensory evidence should soon turn the tables – but that might be too late for the unwitting suspect.

On the brighter side, a system that has learnt to predict and expect its own evolving flows of sensory activity in this way is one that is already positioned to imagine its world. For the self-same prediction machinery can also be run ‘offline’, generating the kinds of neuronal activity that would be expected (predicted) in some imaginary situation. Sometimes, however, the delicate balances between top-down prediction and the use of incoming sensory evidence are disturbed, and our grip on the world loosens in remarkable ways. Thinking about perception as tied intimately to multi-level prediction is thus also delivering new ways to think about the emergence of delusions, hallucinations, and psychoses, as well as the effects of various drugs, and the distinctive profiles of non-neurotypical (for example, autistic) agents.

The most tantalizing (but least developed) aspect of the emerging framework concerns the origins of conscious experience itself. To creep up on this suppose we ask: what might it take to build a sentient robot? By that I mean: what might it take to build a robot that begins to have some sense of *itself* as a material being, with its own concerns, encountering a structured and meaningful world?

A growing body of work by Professor Anil Seth (University of Sussex) and others may – and I say this with all due caution and trepidation – be suggesting a clue. That work involves the stream of interoceptive information specifying the physiological state of the body – the state of the gut and viscera, blood sugar levels, temperature, and much much more (Bud Craig’s recent book *How Do You Feel* offers a wonderfully rich account of this).

What happens when a multi-level prediction engine crunches all that interoceptive information together with information specifying structure in the external world? Our multi-layered predictive grip on the external world is then superimposed upon another multi-layered predictive grip – a grip on the changing physiological state of our own body. And predictions along each of these dimensions will constantly interact with predictions along the other. To take a very simple case, the sight of water, when we are thirsty, should incline us to predict drinking in ways that the sight of water otherwise need not. Our predictive grip upon the external world thus becomes inflected, at every level, by an accompanying grip upon ‘how things are (physiologically) with us’. Might this be part of what enables a robot, animal, or machine to start to experience a low-grade sense of being-in-the-world? Such a system has, in some intuitive sense, a simple grip not just on the world, but on the world ‘as it matters, right here, right now, for the embodied being that is you’. Agents like that experience a structured and – dare I say it – meaningful world, a world where each perceptual moment presents salient affordances for action, permeated by a subtle sense of our own present and unfolding bodily states. A recipe for Sentient Robotics 101.beta perhaps?

There is much that I’ve left out from this post – most importantly, the crucial role of self-estimated sensory uncertainty (‘precision’), and the role of action and environmental structuring in altering the predictive tasks confronting the brain: changing what we need to predict, and when, in order to get things done. That’s where these stories score major points by dovetailing very neatly with large bodies of work in embodied cognition.

Nor have I mentioned the many outstanding problems and puzzles. For example, it is not known whether multi-level prediction machinery characterizes all, or even most, aspects of the neural economy. Most importantly of all, perhaps, it is not yet clear how best to factor human motivation into the overall story. Are human motivations (e.g. for play, novelty, and pleasure) best understood as disguised predictions – deep-seated expectations that there will be play, novelty, and pleasure? That is a challenging vision, but one that could offer a deeply unifying perspective indeed.

*Feature Image: “I love water,” by Derek Gavey. CC-BY-2.0 via Flickr. *

The post Predictive brains, sentient robots, and the embodied self appeared first on OUPblog.

It may have been a joke but some nevertheless found this argument compelling. Their mistake was quickly pointed out – to give everyone a million dollars would cost 317 million million dollars – but some persisted with the error. Their reasoning seemed to go “if you have 360 dollars you can give 317 people one dollar each so if you have 360 million dollars you can give 317 million people a million dollars each!” Others explained: “No Joe, just imagine you have 360 boxes, each containing a million dollars cash. When you go to give each person one of them you will run out after 360 people, leaving the remaining 316,999,640 people empty handed.”
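The arithmetic, for anyone who wants to check it in code:

```python
budget = 360_000_000   # $360 million
people = 317_000_000   # roughly the US population in the joke

# Splitting the budget evenly gives barely more than a dollar each...
print(round(budget / people, 2))   # 1.14
# ...while a million dollars each would cost 317 million million dollars.
print(people * 1_000_000)          # 317000000000000
```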

Everyone would agree that adults need to have a grasp of numbers to the extent that they can spot nonsense arguments like this. At the same time however, it may be said that the x and y stuff can safely be forgotten once you leave school as it is practically never used. Anyone will forget the details of any subject if they never go back to it. That is why occasional reading about things mathematical renews confidence and allows you to question what is going on when a topic becomes complex.

For instance, it will give you the power to ask killer questions in pretentious presentations. Often a couple of graphs and equations flashed up on the screen will cow an audience into submissive silence. Never put up with that. Ask the presenter how that equation relates to the topic of their talk. Better still, ask what each of the symbols in the slide stands for. You don’t need to know anything in particular about maths to do that, and everyone will soon see how competent your presenter is.

Taking this a little further, it is genuinely useful to know some algebra as it lets you deal with simple mathematical problems and to know that you have them right. And this does happen in real life. A friend once gave a presentation pitch for a contract that involved two factors whose graph was a straight line. He laboured to explain this and an audience member lost patience and pointed out that the two quantities had an obvious linear relationship so of course the graph had to be a perfect straight line. My friend was made to look clueless and, not surprisingly, failed to land the contract. He explained to me later that what most annoyed him was that he had figured that out the night before, and had even written down the equation of the line and checked it was right. However, he lacked the nerve to say that in his presentation and so when it was pointed out to him, he looked stupid. With just a touch more algebraic confidence he could have carried the day.

It is a worthwhile skill just to be able to see an algebraic problem for what it is even if your own attempts to solve it are a bit clumsy. A recent example concerned the controversial film *The Interview*, which is about a fictional assassination plot of the North Korean leader, Kim Jong-un. A magazine article said that the film grossed $15 million on the weekend of its release, that the cost of the movie was $15 to buy and $6 to rent, and that two million copies were distributed overall. The article went on to say however that the company did not state how many copies were rented and how many were bought. It seemed that it did not occur to anyone at the magazine that they ought to be able to figure that out. A person with some mathematical habits of mind however would at least pause to think, and then would get the answer somehow, as it is not difficult. Working in units of millions there are two equations here: r + s = 2 and 6r + 15s = 15; the first equation counts units of r (rentals) and s (sales) while the second equation counts the money. From these we may deduce that there were 1/3 million sales and 5/3 million rentals overall. (Google simultaneous equations for further details.)
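The same little elimination can be spelled out in code, using exact fractions so the thirds come out cleanly (a sketch of the hand calculation, not a general solver):

```python
from fractions import Fraction

# In units of millions: r + s = 2 (copies) and 6r + 15s = 15 (revenue).
# Substituting r = 2 - s into the revenue equation:
#   6(2 - s) + 15s = 15  =>  12 + 9s = 15  =>  s = 3/9 = 1/3.
s = Fraction(15 - 6 * 2, 15 - 6)   # sales: 1/3 million
r = 2 - s                          # rentals: 5/3 million

assert r + s == 2 and 6 * r + 15 * s == 15
print(s, r)  # 1/3 5/3
```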

“I have a dream, which is that people will not run for cover whenever anything mathematical appears but rather will pause, think a little, ask a question or two and, if still out of their depth, seek a more qualified person to clear the matter up.”

The most salutary experiences of harm caused by mathematical ignorance, however, often stem from probability questions, which can fool even intelligent and educated people. It is one thing not to be able to do a problem but it is quite another to imagine that you can do it and be seduced into an utterly false conclusion. As an example, the following question was put to a large group of medical students. A certain condition affects one person in 1,000 and a particular medical test will certainly give a positive result if the person tested has the disease but has a 5% probability of coming out positive for people who have not got the condition. A randomly chosen person tests positive. What is the probability that they have the disease?

The most popular response was that the test was 95% accurate so the probability that it was right was 95%. I’m afraid that answer is not only wrong but represents a mistake on a par with poor Joe’s analysis of the cost of ObamaCare. The correct answer is at the other end of the scale: the chance that the person actually has the condition is less than 2%.

This is a tricky question and even a mathematically aware person might find it difficult to answer. However, I hope that the same person would instinctively be sceptical of the guess of 95%, as that number takes no account of the prevalence of the condition in the population, which surely affects the answer. If the condition were very rare, then most positive results would have to be false positives. A quick way to see the right answer is to note that for every 1,000 people in the general population, one person will have the condition but about 50 (5% of 1,000) will be healthy patients who generate a false positive; for that reason a random positive test only has a chance of one in fifty-one of detecting a person with the disease.
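The computation behind that “one in fifty-one” can be written out directly (a sketch of the base-rate argument in the text):

```python
prevalence = 0.001                    # 1 person in 1,000 has the condition
true_pos = prevalence * 1.0           # the test always catches real cases
false_pos = (1 - prevalence) * 0.05   # 5% of healthy people test positive too

p_disease_given_pos = true_pos / (true_pos + false_pos)
print(round(p_disease_given_pos, 4))  # 0.0196 -- under 2%, not 95%
```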

We might hope that qualified doctors would know how to interpret any test results that they call for. However, bad mistakes may still happen in serious situations such as court cases where an ‘expert’ witness makes a probability statement. Landmark cases involving cot deaths have led to gross miscarriages of justice. For example, once a probability statement on the likelihood of DNA matches is accepted as fact by the court, there may be only one verdict possible, and that may be the wrong one. I trust that lessons have been learnt from past errors but the risk of blunders remains unless any statement of probability is checked by a qualified statistician. Being an expert in the field of the testimony is not enough. Before a precise probability claim is admitted as evidence, it should be professionally scrutinised with the underlying assumptions, the calculation of the actual probability number and, just as importantly, its margin of error, all checked. I am not sure that would necessarily happen in a British law court.

In conclusion, I have a dream, which is that people will not run for cover whenever anything mathematical appears but rather will pause, think a little, ask a question or two and, if still out of their depth, seek a more qualified person to clear the matter up. It is a modest sounding dream but its realisation would make the world a better place.

*Featured image credit: Maths and calculator. Public domain via Pixabay.*

The post Why know any algebra? appeared first on OUPblog.

In this blog series, Leo Corry, author of *A Brief History of Numbers*, helps us understand how the history of mathematics can be harnessed to develop modern-day applications today. The final post in the series takes a look at the history of factorization and its use in computer encryption.

The American Mathematical Society held its regular meeting of October 1903 in New York City. The program announced a talk by Frank Nelson Cole (1861-1921), with the unpretentious title of *On the factorization of large numbers*. In due course, Cole approached the board and, without saying a word, started to multiply the number 2 by itself, step after step, sixty-seven times. He then subtracted 1 from the result obtained. He went on to multiply, by longhand, the following two large numbers:

193,707,721 x 761,838,257,287.

Realizing that the two calculations agreed, an astonished audience offered an enthusiastic applause, while Cole returned to his seat and continued to remain silent. One can only guess how satisfied he was for having shown to his colleagues this little gem of calculation that he discovered after much hard work.
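What took Cole years of hand calculation takes a modern computer a fraction of a second to confirm:

```python
# Cole's 1903 identity: M_67 = 2**67 - 1 equals the product of his two factors.
m67 = 2**67 - 1
assert m67 == 193_707_721 * 761_838_257_287
print(m67)  # 147573952589676412927
```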

The number 2^{67} – 1 is commonly known as the Mersenne number *M*_{67}. The question whether a given Mersenne number of the form 2* ^{n}* – 1 is prime had attracted some attention since the seventeenth century, but it was only in the last third of the nineteenth century that Edouard Lucas (1842-1891) came up with an algorithm to test this property. The algorithm was improved in 1930 by Derrick Henry (Dick) Lehmer (1905-1991) and turned into a widely used tool for testing primality, the Lucas-Lehmer test. But it is one thing to know, with the help of this test, whether or not a certain Mersenne number is prime; it is a much more difficult task to find the factors of such a large number even if we know it to be composite. When asked in 1911 how long it had taken to crack *M*_{67}, Cole reportedly answered: “three years of Sundays.”
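The Lucas-Lehmer test mentioned above is itself remarkably short. A sketch (valid for odd prime exponents p): *M*_{p} = 2^{p} – 1 is prime exactly when iterating s ↦ s² – 2 from s = 4, p – 2 times, yields a multiple of *M*_{p}.

```python
def lucas_lehmer(p):
    # Primality test for the Mersenne number M_p = 2**p - 1, p an odd prime.
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(lucas_lehmer(61))  # True:  M_61 is prime
print(lucas_lehmer(67))  # False: M_67 is composite, as Cole's factors showed
```

Note the asymmetry the text describes: the test declares *M*_{67} composite almost instantly, yet says nothing about what its factors are.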

Almost a hundred years later, another remarkable factorization was achieved, this one involving much larger numbers. In 1997 a team of computer scientists, led by Samuel Wagstaff at Purdue University, factorized a 167-digit number, (3^{349} – 1)/2, into two factors of eighty and eighty-seven digits respectively. According to Wagstaff’s report, the result required about 100,000 computer hours – quite a bit more than in Cole’s story. Wagstaff had previously been involved in many other remarkable computations. For instance, in 1978 he used a digital computer to prove that Fermat’s last theorem (FLT) is valid for prime exponents up to 125,000.

Factorization results such as those of Cole and Wagstaff will at the very least elicit a smile of approval on the side of anyone with a minimum of sympathy and appreciation for remarkable mathematical results. But when faced with the price tag (in terms of human time spent or computer resources used to achieve it), the same sympathetic listener (and by all means the cynical one) will immediately raise the question whether all this awful lot of time was worth spending.

Central to the mainstream conceptions of pure mathematics over the twentieth century was the idea that numerical calculation with individual cases is at best a preliminary exercise to warm up the mind and start getting a feeling for the situations to be investigated. Still nowadays, many a mathematician is proud of stressing his slowness in calculating and pointing out the mistakes he makes in restaurants when splitting a bill among friends. David Hilbert (1862-1943), one of the most influential mathematicians at the turn of the twentieth century, was clear in stating that from “the highest peak reached on the mountain of today’s knowledge of arithmetic” one should “look out on the wide panorama of the whole explored domain” only with the help of elaborated, abstract theories. He consciously sought to avoid the use of any “elaborate computational machinery, so that … proofs can be completed not by calculations but purely by ideas.”

But with the rise of electronic computing, a deep change has affected the status of time-consuming computational tasks, from the time of Hilbert and Cole to the time of Wagstaff, via Lehmer and up to our own days. If in 1903 Cole found it appropriate to remain silent about his result and its significance, in 1997 the PR department of Purdue University rushed to publish a press release announcing Wagstaff’s factorization result: “Number crunchers zero in on record-large number”. Wagstaff took care to stress to the press the importance of knowing the limits of our abilities to perform such large factorizations, arguing that the latter are “essential to developing secure codes and ciphers.”

General perceptions about the need, and the appropriate ways, for public scrutiny of science, its tasks, and its funding changed very much between 1903 and 1997, and this in itself would be enough to elicit different kinds of reactions to the two undertakings. But above all, it was the rise of e-commerce and the need for secure encryption techniques for the Internet that brought about a deep revolution in the self-definition of the discipline of number theory in the eyes of many of its practitioners, and in the ways it could be presented to the public. Whereas in the past this was a discipline that prided itself above all on its detachment from any real application to the affairs of the mundane world, over the last four decades it has turned into the showpiece of mathematics as applied to the brave new world of cyberspace security. The application of public-key encryption techniques, such as those based on the RSA cryptosystem, turned the entire field of factorization techniques and primality testing from an arcane, highly esoteric, purely mathematical pursuit into a most coveted area of intense investigation with immediate practical applications, expected to yield enormous economic gains to the experts in the field.
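To make the connection concrete, here is a toy RSA sketch with textbook-sized primes (real keys use primes hundreds of digits long; the scheme’s security rests precisely on the difficulty of recovering p and q from their product n):

```python
p, q = 61, 53
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # 3120, computable only by whoever knows p and q
e = 17                       # public exponent, chosen coprime to phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e mod phi

message = 65
cipher = pow(message, e, n)  # anyone can encrypt with the public pair (e, n)
plain = pow(cipher, d, n)    # only the holder of d can decrypt
assert plain == message
print(n, e, d)  # 3233 17 2753
```

An eavesdropper who could factor n would recover phi, and with it the private key d, which is why records like Wagstaff’s matter for choosing key sizes.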

*Featured image credit: Transmediale 2010 Ryoji Ikeda Data Tron 1 by Shervinafshar. CC BY-SA 3.0 via Wikimedia Commons.*

The post From number theory to e-commerce appeared first on OUPblog.

The post World Statistics Day: a reading list appeared first on OUPblog.

*Analyzing Wimbledon*, by Franc Klaassen and Jan R. Magnus

The world’s most famous tennis tournament offers statisticians insight into examining probabilities. This study attempts to answer many questions, including whether an advantage is given to the person who serves first, whether new balls influence gameplay, or whether previous champions win crucial points. Looking at a unique data set of 100,000 points played at Wimbledon, Klaassen and Magnus illustrate the amazing power of statistical reasoning.

‘**Asking About Numbers: Why and How**,’ by Stephen Ansolabehere, Marc Meredith, and Erik Snowberg, published in *Political Analysis*

How can designing quantitative standardized questions for surveys yield findings that can later be linked to statistical models? The authors offer a full analysis of why quantitative questions are feasible and useful, particularly for the study of economic voting.

*The Credit Scoring Toolkit: Theory and Practice for Retail Credit Risk Management and Decision Automation*, by Raymond Anderson

This textbook demonstrates how statistical models are used to evaluate retail credit risk and to generate automated decisions. Aimed at graduate students in business, statistics, economics, and finance, the book introduces likely situations where credit scoring might be applicable, before presenting a practical guide and real-life examples of how to implement credit scoring on the job. Little prior knowledge is assumed, making this textbook the first stop for anyone learning the intricacies of credit scoring.

‘**Big data and precision**,’ by D. R. Cox, published in *Biometrika*

Professor D. R. Cox of Nuffield College, Oxford, explores issues around big data, statistical procedure, and precision, in addition to outlining a fairly general representation of the accretion of error in large systems.

*The New Statistics with R: An Introduction for Biologists*, by Andy Hector

This introductory text to statistical reasoning helps biologists learn how to analyse their data sets in R. The text begins by explaining the classical techniques of linear model analysis and then provides real-world examples of their application. With all the analyses worked in R, the open-source programming language for statistics and graphics, and the R scripts included as support material, Hector presents an easy-to-use textbook for students and professionals at all levels of statistical understanding.

‘**Housing Wealth and Retirement Timing**,’ by Martin Farnham and Purvi Sevak, published in *CESifo Economic Studies*

Having found that rising house prices cause people to revise their planned retirement age, Farnham and Sevak explore movements in the housing market and the implications for labour supply.

*An Introduction to Medical Statistics*, by Martin Bland

Every medical student needs a firm understanding of medical statistics and its uses throughout training to become a doctor. The fourth edition of *An Introduction to Medical Statistics* aims to provide just that, summarising the key statistical methods by drawing on real-life examples and studies carried out in clinical practice. The textbook also includes exercises to aid learning, and illustrates how correctly employed medical data can improve the quality of research published today.

‘**Getting policy-makers to listen to field experiments**,’ by Paul Dolan and Matteo M. Galizzi, published in *Oxford Review of Economic Policy*

On the premise that the greater use of field experiment findings would lead to more efficient use of scarce resources, this paper from Dolan and Galizzi considers what could be done to address this issue, including a consideration of current obstacles and misconceptions.

*Stochastic Analysis and Diffusion Processes*, by Gopinath Kallianpur and P. Sundar

Building the basic theory and offering examples of important research directions in stochastic analysis, this graduate textbook provides a mathematical introduction to stochastic calculus and its applications. Written as a guide to important topics in the field and including full proofs of all results, the book aims to render a complete understanding of the subject for the reader in preparation for research work.

‘**Statistical measures for evaluating protected group under-representation**,’ by Joseph L. Gastwirth, Wenjing Xu, and Qing Pan, published in *Law, Probability & Risk*

The authors explore the conflicting inferences drawn from the same data in the cases of *People v. Bryant* and *Ambrose v. Booker*. Based on their full analysis, they argue that when assessing the legal significance of statistics on the demographic mix of jury pools, courts should consider the possible reduction in minority representation that can occur in peremptory challenge proceedings.

**Bayesian Theory and Applications**, edited by Paul Damien, Petros Dellaportas, Nicholas G. Polson, and David A. Stephens

Beginning by introducing the foundations of Bayesian theory, this volume proceeds to detail developments in the field since the 1970s. It includes an explanatory chapter for each conceptual advance followed by journal-style chapters presenting applications, targeting those studying statistics at every level.

‘**Representative Surveys in Insecure Environments: A Case Study of Mogadishu, Somalia**,’ by Jesse Driscoll and Nicholai Lidow, published in *Journal of Survey Statistics and Methodology*

How do we get accurate statistics from politically unstable areas? This paper discusses the challenges of conducting a representative survey in Somalia and the opportunities for improving future data collection efforts in these insecure environments.

*Stochastic Population Processes: Analysis, Approximations, Simulations*, by Eric Renshaw

Modelling real-life random processes is tricky. This book deals with Markov processes, which have no memory: their future depends only on the current state of the system and not on its previous history. Driven by the underlying Kolmogorov probability equations for population size, it is the first title on stochastic population processes to focus on practical application. It is not intended as a text for pure-minded mathematicians who require deep theoretical understanding, but for researchers who want to answer real questions.

*Image Credit: Statistics by Simon Cunningham. CC BY 2.0 via Flickr.*